M2T2: Multi-Task Masked Transformer for Object-centric Pick and Place

Wentao Yuan (University of Washington, wentaoy@cs.washington.edu), Adithyavairavan Murali* (NVIDIA, admurali@nvidia.com), Arsalan Mousavian* (NVIDIA, amousavian@nvidia.com), Dieter Fox (University of Washington, NVIDIA, fox@cs.washington.edu)

Abstract: With the advent of large language models and large-scale robotic datasets, there has been tremendous progress in high-level decision-making for object manipulation [1, 2, 3, 4]. These generic models are able to interpret complex tasks using language commands, but they often have difficulty generalizing to out-of-distribution objects due to the limitations of their low-level action primitives. In contrast, existing task-specific models [5, 6] excel in low-level manipulation of unknown objects, but only work for a single type of action. To bridge this gap, we present M2T2, a single model that supplies different types of low-level actions that work robustly on arbitrary objects in cluttered scenes. M2T2 is a transformer model which reasons about contact points and predicts valid gripper poses for different action modes given a raw point cloud of the scene. Trained on a large-scale synthetic dataset with 128K scenes, M2T2 achieves zero-shot sim2real transfer on the real robot, outperforming a baseline system built from state-of-the-art task-specific models by about 19% in overall performance and 37.5% in challenging scenes where the object needs to be re-oriented for collision-free placement. M2T2 also achieves state-of-the-art results on a subset of language-conditioned tasks in RLBench [7]. Videos of robot experiments on unseen objects in both the real world and simulation are available on our project website.

Keywords: Object Manipulation, Pick-and-place, Multi-task Learning

1 Introduction

The successful completion of many complex manipulation tasks such as object rearrangement relies on robust action primitives that can handle a large variety of objects. Recently, tremendous progress has been made in open-world object manipulation [1, 2, 3, 4] using language models for high-level planning. However, these methods are often restricted to scenes with a few fixed object shapes due to the limited capability of low-level skills such as picking and placing. Meanwhile, there are task-specific models [5, 8, 9] that excel at a particular skill on a large variety of objects. This leads us to the question: is it possible to have a single model for different action primitives that works robustly on diverse objects?

We propose the Multi-Task Masked Transformer (M2T2), a unified model for learning multiple action primitives. As shown in Fig. 1, given a point cloud of the scene, M2T2 predicts collision-free gripper poses for various types of actions, including 6-DoF grasping and placing, eliminating the need to use different methods for different actions. M2T2 can generate a diverse set of goal poses that provide sufficient options for low-level motion planners. It can also generate more specific goal poses conditioned on language. Combining high-level task planners with the robust action primitives from M2T2 allows the robot to solve many complex tasks like the ones in RLBench [7]. Overall, our contributions are as follows:
Figure 1: We propose M2T2, a unified model for learning multiple action primitives. M2T2 takes a raw 3D point cloud and predicts per-object 6-DoF grasps (lower left) and orientation-aware placements (lower right, where green means the object can fit in any orientation and yellow means only a subset of orientations is possible). Colors on the point clouds are for visualization only.

1. We present M2T2, a unified transformer model for grasping and placing, which outperforms state-of-the-art methods [5, 6] in terms of success rate and output diversity.
2. A large-scale synthetic dataset for training M2T2, consisting of 130K cluttered scenes with 8.8K different objects, annotated with valid gripper poses for picking and placing.
3. We show that M2T2 achieves zero-shot sim2real transfer for picking and placing out-of-distribution objects, outperforming the baseline by about 19%.
4. We show that M2T2 outperforms a state-of-the-art end-to-end method [10] on a subset of RLBench [7], demonstrating its potential for solving complex tasks with language goals.

2 Related Work

Multi-Task Learning in Robotics: With the advent of robotic datasets with diverse tasks [7, 11], many recent works have shown that learning multiple manipulation tasks with a single model can improve sample efficiency and performance. Some works learn a common representation for multiple tasks [12, 13], while other works [2, 10, 14] train end-to-end language-conditioned policies via imitation learning. However, these end-to-end agents have a hard time generalizing to out-of-distribution tasks and objects. In contrast, we take a more modular approach. M2T2 supplies action primitives like "placing the mug" that work on unseen objects in the real world. By interfacing with other task planning modules [3, 4], M2T2 can be part of a robust and flexible open-world manipulation system.

Object Grasping: Grasping is the most fundamental skill of a robot manipulator. Recently, 6-DoF learning-based grasp pose generators have become mainstream. These methods typically take a 3D point cloud [5, 8, 15, 16] or a voxelization of the scene [17, 18] and predict 6-DoF gripper poses that stably grasp an object. There are also task-oriented grasp generators [19, 20] that predict grasps for downstream tasks (e.g. handover). However, these works are geared toward a single skill, grasping. While grasping is an important skill, it cannot solve all manipulation tasks. M2T2 aims to build a common formulation for different kinds of skills, including grasping.

Figure 2: M2T2 generates valid gripper poses for grasping and placing with a single model. First, a 3D network (scene encoder) takes the scene point cloud and produces multi-scale feature maps. Then, the features are cross-attended with learnable query tokens via a transformer (contact decoder). Finally, the output tokens are multiplied with per-point features to generate contact masks and gripper poses for each object (for grasping) and each orientation (for placing). For grasping, additional MLPs are applied to the output tokens and per-point features to predict objectness scores (to filter out non-object proposals) and grasp parameters (to reconstruct gripper poses). Optionally, the contact decoder can take a set of tokens encoding language goals to produce goal-conditioned grasping and placing poses.

Object Placement: Placement is another important action mode for a robot manipulator. Compared to grasping, placement has been less studied until recently. [6] uses rejection sampling and a learning-based collision detector [21] to find all possible placement positions.
[22] generates placement configurations for a set of objects based on language commands, but it does not consider the gripper. M2T2 predicts placement poses without the need for sampling, uses the same model for grasping, and considers both the position and orientation of the object and the gripper.

3 Technical Approach

M2T2 predicts target gripper poses for multiple action primitives. Here, we consider the two most fundamental action modes of a robot manipulator: picking and placing.

Object-centric 6-DoF Grasping: The input is a single 3D point cloud of the scene, which can be obtained from commodity depth sensors. The output is a set of object grasp proposals. Each object grasp proposal is a set of 6-DoF grasp poses (3-DoF rotation + 3-DoF translation), indicating where the end effector of a robot arm needs to be in order to pick up the object successfully.

Orientation-aware Placing: The input is a 3D point cloud of the scene plus a partial 3D point cloud of the object to be placed. The output is a set of 6-DoF placement poses, indicating where the end effector needs to be so that when the object is released, it will be placed stably without collision. Note that M2T2 generates not only the location but also the orientation of the object to be placed. This ensures that the object can be re-oriented before being placed to fit into cluttered spaces.

The key idea of M2T2 is to reason about contact points. We view picking as the robot making contact with a target object using an empty gripper, and placing as the robot using an object in its gripper to make contact with a surface.

3.1 Architecture

Scene Encoder: The scene encoder encodes a 3D point cloud of the scene into multi-scale feature maps that serve as context for the contact decoder. Specifically, it produces four feature maps that are 1/64, 1/16, 1/4 and 1 times the input size, respectively. Each feature vector in the feature maps is grounded to a point in the input point cloud. We adapt a PointNet++ [23] designed for semantic segmentation as the scene encoder, but in principle, any network that produces multi-resolution feature maps from 3D point clouds can serve as the scene encoder.

Contact Decoder: The contact decoder is a transformer that predicts where to make contact for both grasping and placing. We use the grasp representation of [5], where each grasp is anchored at the visible point on the object that makes contact with the gripper during grasping, and the model predicts additional parameters specifying the relative transform of the grasp with respect to the contact point. We extend this representation to placing by defining the contact point as the location where the center of the object point cloud projects onto the table.

As a result, we can borrow the latest insights from image segmentation. In our case, we modify the masked transformer [24] to predict contact masks. The transformer passes a set of learnable query tokens through multiple attention layers. Feature maps of multiple resolutions from the scene encoder are passed in via cross-attention at different layers. The output tokens of each layer are multiplied with the per-point feature map from the scene encoder to generate interim masks. The interim masks are used to mask the cross-attention in the next layer to guide the attention toward relevant regions (hence the name "masked transformer"). After the last attention layer, the model produces G grasping masks and P placing masks, where G is the maximum number of graspable objects and P is the number of placement orientations.
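To make the masking mechanism concrete, the sketch below implements one masked cross-attention step over per-point features, in the spirit of [24]. It is a minimal illustration rather than the authors' implementation; the tensor layout, the 0.5 threshold and the empty-mask fallback are assumptions.

```python
import torch

def masked_cross_attention(queries, point_feats, attn, prev_masks=None):
    """One masked cross-attention step (sketch, in the spirit of [24]).

    queries:     (B, Q, D) learnable query tokens (grasp + place)
    point_feats: (B, N, D) per-point scene features at one resolution
    attn:        torch.nn.MultiheadAttention(D, num_heads, batch_first=True)
    prev_masks:  (B, Q, N) interim contact mask logits from the previous layer
    """
    attn_mask = None
    if prev_masks is not None:
        # Attend only where the previous layer predicted contact;
        # fall back to full attention for queries with an empty mask.
        keep = prev_masks.sigmoid() > 0.5
        keep[keep.sum(-1) == 0] = True
        # MultiheadAttention expects True = "do not attend".
        attn_mask = ~keep.repeat_interleave(attn.num_heads, dim=0)
    queries, _ = attn(queries, point_feats, point_feats, attn_mask=attn_mask)
    # Interim masks: dot product between tokens and per-point features.
    new_masks = torch.einsum("bqd,bnd->bqn", queries, point_feats)
    return queries, new_masks
```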
Objectness MLP: An MLP takes the grasp tokens and produces an objectness score for each token. This filters out non-object tokens, since the number of graspable objects in the scene can vary (see Sec. 3.2 and Sec. 3.3 for how the score is used in training and inference).

Object Encoder: The object encoder is a PointNet++ [23] which encodes a 3D point cloud of the object to be placed into a single feature vector that is added to the place query tokens.

Action Decoder: The action decoder is a 3-layer MLP that takes the per-point feature map from the scene encoder and predicts a 3D approach direction, a 3D contact direction and a 1D grasp width for each point, which are used together with the contact points to reconstruct grasp poses (see Sec. 3.3).

3.2 Training Objective

Grasping: The grasping objective consists of three terms: an objectness loss $L_{obj}$, a mask loss $L_{mask}$ and an ADD-S loss $L_{ADD\text{-}S}$.

Because the number of objects $N$ in the scene is unknown, we set $G$, the number of grasp tokens, to a large number (see Sec. 4.2 for ablations). M2T2 outputs $G$ scalar objectness scores $o_i$ and $G$ per-point masks $M^{grasp}_i$. We use Hungarian matching to select the $N$ masks that best match the ground truth. First, we compute the following cost for each prediction $(o_i, M^{pred}_i)$ and ground truth mask $M^{gt}_j$:

$$C_{ij} = 1 - o_i + \mathrm{BCE}(M^{pred}_i, M^{gt}_j) + \mathrm{DICE}(M^{pred}_i, M^{gt}_j) \quad (1)$$

where BCE is the binary cross entropy and DICE is the DICE loss [25]. We then apply Hungarian matching to the $G \times N$ cost matrix $C$ to obtain the set of indices $\mathcal{M} = \{m_j\}$ that minimizes the total cost $\sum_{j=1}^{N} C_{m_j j}$. Then, we compute the objectness loss by labeling all matched tokens as positive and all others as negative:

$$L_{obj} = \frac{1}{G} \sum_{i=1}^{G} -\left[\mathbb{1}(i \in \mathcal{M}) \log(o_i) + (1 - \mathbb{1}(i \in \mathcal{M})) \log(1 - o_i)\right] \quad (2)$$

We compute the mask loss between the matched masks and the ground truth as

$$L_{mask} = \frac{1}{N} \sum_{j=1}^{N} \mathrm{BCE}(M^{pred}_{m_j}, M^{gt}_j) + \mathrm{DICE}(M^{pred}_{m_j}, M^{gt}_j) \quad (3)$$

In practice, we find that computing the BCE only for the points with the top-$k$ largest loss improves performance, where $k = 512$ for grasping and $k = 1024$ for placing. This is likely due to the large class imbalance (over 90% of the points are not contact points). See Sec. 4.2 for ablations.
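The matching step of Eqs. (1)-(3) can be implemented with an off-the-shelf Hungarian solver. Below is a minimal sketch using scipy; the dense numpy inputs and the per-pair loop are simplifications of what a batched implementation would do.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_grasp_tokens(obj_scores, pred_masks, gt_masks, eps=1e-6):
    """Hungarian matching between G predicted masks and N ground-truth
    masks using the cost of Eq. (1).

    obj_scores: (G,) objectness scores in (0, 1)
    pred_masks: (G, P) predicted mask probabilities over P points
    gt_masks:   (N, P) binary ground-truth contact masks
    returns:    array m of length N with m[j] = matched token index
    """
    G, N = len(obj_scores), len(gt_masks)
    cost = np.zeros((G, N))
    for i in range(G):
        for j in range(N):
            p, t = np.clip(pred_masks[i], eps, 1 - eps), gt_masks[j]
            bce = -np.mean(t * np.log(p) + (1 - t) * np.log(1 - p))
            dice = 1 - 2 * np.sum(p * t) / (np.sum(p) + np.sum(t) + eps)
            cost[i, j] = 1 - obj_scores[i] + bce + dice
    rows, cols = linear_sum_assignment(cost)  # minimizes the total cost
    m = np.empty(N, dtype=int)
    m[cols] = rows
    return m
```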
The ADD-S loss was introduced by [5] and is critical for good grasp confidence estimation (see Sec. 4.2 for ablations). To compute it, we first define 5 key points $\{v_k\}$ on the gripper. Then, for each pair of predicted grasp and ground truth grasp, we compute the total distance between the transformed key points

$$d_{ij} = \sum_{k=1}^{5} \left\| (R^{pred}_i v_k + t^{pred}_i) - (R^{gt}_j v_k + t^{gt}_j) \right\| \quad (4)$$

Next, we find the closest ground truth to each prediction, $n_i = \arg\min_j d_{ij}$, and compute ADD-S as

$$L_{ADD\text{-}S} = \frac{1}{|C^{pred}|} \sum_{i \in C^{pred}} s_i \, d_{i n_i} \quad (5)$$

where $C^{pred}$ is the set of contact points of predicted grasps and $s_i$ is the grasp confidence, a scalar between 0 and 1 taken from the contact masks before thresholding. Note that since the loss is weighted by $s_i$, predicted grasps that are far away from any ground truth grasp incur a larger penalty on their confidence, which improves contact point estimation.

Placing: The placing objective is defined as a combination of BCE and DICE [25] between the predicted and ground truth placement masks:

$$L_{placing} = \frac{1}{P} \sum_{i=1}^{P} \mathrm{BCE}(M^{pred}_i, M^{gt}_i) + \mathrm{DICE}(M^{pred}_i, M^{gt}_i) \quad (6)$$

This is the only loss for placing, since no other learnable quantities are needed to reconstruct the placement poses (see Sec. 3.3).

3.3 Model Inference

Grasp Pose Prediction: During inference, we first select the contact masks whose objectness score is above 0.5. Then, for each point $p$ within a contact mask, we take the corresponding grasp parameters from the action decoder to reconstruct a 6-DoF grasp pose $(R^{grasp}, t^{grasp}) \in SE(3)$ as

$$t^{grasp} = p + \frac{w}{2} c + d a \quad (7)$$

$$R^{grasp} = \begin{bmatrix} | & | & | \\ c & c \times a & a \\ | & | & | \end{bmatrix} \quad (8)$$

where $c$ is the unit 3D contact direction, $a$ is the unit 3D approach direction, $w$ is the 1D grasp width and $d$ is the constant distance from the gripper base to the grasp baseline (the line between the two fingers). We refer readers to the Contact-GraspNet paper [5] for more details on this formulation.

Placement Pose Prediction: The $P$ placement contact masks represent valid placement locations when the object in the gripper is rotated by one of $P$ discrete planar rotations $R^{planar}$. To recover the placement poses, we first compute the bottom center $b$ of the object point cloud, which is used as the reference point for contact. Then, we use forward kinematics to obtain the gripper's current pose $(R^{ee}, t^{ee})$. The 6-DoF placement pose $(R^{place}, t^{place})$ can be computed as

$$t^{place} = p + R^{planar}(t^{ee} - b) \quad (9)$$

$$R^{place} = R^{planar} R^{ee} \quad (10)$$
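For illustration, the following sketch rebuilds a grasp pose from the per-point decoder outputs according to Eqs. (7)-(8). It assumes, as in [5], that the predicted contact and approach directions are orthogonal unit vectors; the default offset d is a gripper-specific constant whose value below is an assumption, not a number from the paper.

```python
import numpy as np

def reconstruct_grasp(p, contact_dir, approach_dir, width, d=0.1034):
    """Rebuild a 6-DoF grasp pose following Eqs. (7)-(8).

    p:            (3,) contact point on the object surface
    contact_dir:  (3,) predicted contact direction c (normalized below)
    approach_dir: (3,) predicted approach direction a
    width:        predicted grasp width w
    d:            gripper base-to-baseline offset (assumed value)
    """
    c = contact_dir / np.linalg.norm(contact_dir)
    a = approach_dir / np.linalg.norm(approach_dir)
    t = p + 0.5 * width * c + d * a               # Eq. (7)
    R = np.stack([c, np.cross(c, a), a], axis=1)  # Eq. (8): columns c, c x a, a
    return R, t
```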
Figure 3: M2T2 outperforms task-specific models – Contact-GraspNet [5] for grasping and CabiNet [6] for placing – on objects from seen categories (a: Grasping Seen, c: Placing Seen) and unseen categories (b: Grasping Unseen, d: Placing Unseen).

Figure 4: Ablation studies: (a) ADD-S weight, (b) number of tokens, (c) discrete rotation, (d) hard negative mining.

3.4 Synthetic Data Generation

We build a synthetic dataset with 130K cluttered scenes for training and evaluating 6-DoF picking and placing methods. There are 64K training scenes and 1K test scenes each for picking and placing. Each scene contains 1 to 15 objects scattered on a randomly sized table mounted with a Franka Emika robot arm. The objects are sampled from the ACRONYM [26] dataset, which contains 8.8K object models, each labeled with 2K grasps. The objects come from 252 different categories, 12 of which are excluded from training. Half of the test scenes contain only objects from the 12 unseen categories. For each scene, we render a 512×512 depth image from a random viewpoint above the table to generate the scene point cloud. We include example images of the dataset in the appendix.

4 Experimental Evaluation

4.1 Evaluation in Simulation

Evaluation Metric: We use the precision-coverage curve as the metric for our evaluation in simulation. To plot this curve, we start with a confidence threshold of 1 and add grasps/placements to the set of predictions by incrementally lowering the confidence threshold until 0.5. In the meantime, we keep track of two numbers: precision, the percentage of successful grasps/placements, and coverage, the percentage of ground truth grasps/placements that are within 5 cm of any predicted pose. Finally, we plot coverage on the x-axis and precision on the y-axis. In practice, we found that this curve is a good indicator of a model's performance in the real world.
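A sketch of how such a curve can be traced is shown below. It makes two simplifying assumptions not stated here: poses are compared by translation only, and per-prediction success flags are precomputed (in the paper they come from physics simulation).

```python
import numpy as np

def precision_coverage_curve(pred_poses, confidences, gt_poses,
                             success, radius=0.05):
    """Trace the precision-coverage curve described in Sec. 4.1.

    pred_poses:  (M, 3) predicted gripper positions
    confidences: (M,) confidence per prediction
    gt_poses:    (K, 3) ground-truth gripper positions
    success:     (M,) bool, whether each prediction succeeds in simulation
    """
    order = np.argsort(-confidences)            # sweep threshold from 1 down
    order = order[confidences[order] >= 0.5]    # stop at threshold 0.5
    precision, coverage = [], []
    for n in range(1, len(order) + 1):
        sel = order[:n]
        precision.append(success[sel].mean())
        # A ground-truth pose is covered if any prediction is within radius.
        dists = np.linalg.norm(
            gt_poses[:, None] - pred_poses[sel][None], axis=-1)
        coverage.append((dists.min(axis=1) < radius).mean())
    return np.array(coverage), np.array(precision)
```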
A grasp is considered successful if the gripper does not collide with the scene (including occluded parts) and is stable. We evaluate the stability of a grasp by shaking the grasped object for 5 seconds in the Isaac Gym simulator [27] with the PhysX physics engine, identical to the evaluation in ACRONYM [26]. A placement is considered successful if both the gripper and the object are collision-free and the bottom of the object is less than 5 cm from the correct placement surface.

M2T2 vs. Specialized Baseline Models: We compare M2T2 against two state-of-the-art specialized models: Contact-GraspNet [5] for grasping and CabiNet [6] for placing. The results are summarized in Fig. 3. M2T2 outperforms both models by a significant margin, especially on placement. This shows the advantage of orientation reasoning for placement: in many cases, it is not possible to find a good placement pose without rotating the object in hand.

Table 1: Real Robot Experiments Success Rates

Model                                 Pick     Place    Place re-orient   Overall
M2T2 (ours)                           85.7%    72.2%    62.5%             61.9%
Contact-GraspNet [5] + CabiNet [6]    76.2%    56.2%    25.0%             42.9%

Figure 5: Our robot experimental setup. Left: Scenes where the target object (highlighted in red) needs to be reoriented to be placed in the placement region (shown in green). Right: An example of a scene where objects are sequentially moved from the right to the initially empty region on the left.

4.2 Ablations

Choice of ADD-S Weight: We find that setting a larger weight for the ADD-S loss has a critical impact on grasping performance. As shown in Fig. 4a, a lower ADD-S weight increases grasp coverage at the expense of precision, which is not desirable.

Number of Grasp Queries: We experimented with different numbers of grasp query tokens and found 100 to be an appropriate number, as shown by the results in Fig. 4b.

Discrete vs. Continuous Rotation: We compared our model with a variant where there is only a single placement mask and the placement rotations are regressed just like the grasp parameters. As shown in Fig. 4c, it is better to have a set of placement masks corresponding to discrete rotations. Since multiple orientations of the object can be valid for a given placement location, regressing to a single rotation does not model the multi-modality of placement orientations.

Importance of Hard Negative Mining: As mentioned in Sec. 3.2, we use hard negative mining by applying the mask loss only to the 1024 points with the largest loss. Without this trick, the quality of the most confident placements becomes significantly worse (see Fig. 4d).

4.3 Real Robot Experiments

Hardware Setup: We evaluated M2T2 in a tabletop manipulation setting with a 7-DoF Franka Panda robot and a parallel-jaw gripper. For perception, we used a single Intel RealSense L515 RGB-D camera overlooking the scene. We use the motion planner from [6] for reactive path planning via model predictive control. The picking target and placement region are specified by the user with 3 clicks on the camera image. All inference is run on a single NVIDIA Titan RTX GPU, which takes about 0.1 seconds per frame. After obtaining a set of collision-free gripper poses from M2T2, the robot executes the one that is closest to the current robot configuration in joint space and has a feasible inverse kinematics solution for the robot arm.
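A minimal sketch of this selection rule follows; the external IK solver interface (solve_ik) and the joint-space metric are assumptions.

```python
import numpy as np

def select_grasp(grasp_poses, q_current, solve_ik):
    """Pick the execution grasp as described in Sec. 4.3: among all
    collision-free gripper poses, choose the one whose IK solution is
    closest in joint space to the current arm configuration.

    grasp_poses: list of 4x4 gripper poses proposed by the model
    q_current:   (7,) current joint configuration
    solve_ik:    callable, pose -> joint vector q or None if unreachable
    """
    best, best_dist = None, np.inf
    for pose in grasp_poses:
        q = solve_ik(pose)           # skip poses without a feasible IK solution
        if q is None:
            continue
        dist = np.linalg.norm(q - q_current)
        if dist < best_dist:
            best, best_dist = pose, dist
    return best
```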
Table 2: Success Rate Comparison on RLBench

Task           open drawer    turn tap      meat off grill
M2T2 (ours)    89.3±1.8%      88.0±5.6%     86.7±1.8%
PerAct [10]    80.0%          84.0%         84.0%

Results: Table 1 shows the success rates over 21 pick-and-place sequences in 7 different scenes. Each scene contains more than 5 objects, and all objects are unseen during training. We do not provide 3D models for any of the objects. The placing success rate is conditioned on picking success, i.e. overall success = picking success × placing success. We designed four scenes where the object has to be re-oriented before placing to fit into the target region, as shown on the left in Fig. 5. We can see that M2T2 significantly outperforms the baseline system, which is a combination of state-of-the-art task-specific models. Notably, M2T2 is 9.5% higher for grasping than [5] and 37.5% higher than [6] for the more challenging re-orientation placement. 2/3 pick failures for M2T2 were on cup objects, which were out-of-distribution during training. 4/5 pick failures for the baseline [5] occurred because the model did not generate grasps at all or the generated ones were not reachable according to inverse kinematics (IK). In contrast, M2T2 generates grasps with higher coverage, which hence have a greater chance of being within the robot's kinematic workspace. We only use 8 bins for the orientation discretization, and further increasing the discretization granularity could potentially reduce our placement error. The real robot executions can be found on the project website.

4.4 Evaluation on RLBench

RLBench [7] is a commonly used benchmark for evaluating multi-task robot manipulation methods. We found that many complex tasks in RLBench can be decomposed into a sequence of primitive actions and solved by M2T2. We demonstrate this by training and evaluating our model on 3 RLBench tasks: open drawer, turn tap and meat off grill. M2T2 is able to outperform PerAct [10], a state-of-the-art multi-task model, on all 3 tasks. The results are summarized in Table 2. We report the average success rate over 25 random seeds. The standard deviation is computed with 3 repeated trials for each seed. PerAct's results are taken from the original paper. This demonstrates M2T2's ability to learn action primitives other than generic pick and place and to incorporate multi-modal inputs including language. More details of the experiments are in the appendix.

5 Conclusion

In this paper we present the Multi-Task Masked Transformer (M2T2), an object-centric transformer model for pick-and-place of unknown objects in clutter. We train M2T2 on a large-scale synthetic dataset of 130K scenes and deploy it on a real robot without any real-world training data. M2T2 outperforms state-of-the-art specialized models for 6-DoF grasping [5] and placing [6] by about 19% in overall success rate in the real world. M2T2 is especially proficient at re-orienting objects for precise collision-free placement. In the future, we plan to integrate M2T2 with language-conditioned task planners [3, 4] to build an open-world manipulation system that works in everyday scenes with out-of-distribution objects.

Limitations: M2T2's performance is bounded by the visibility of contact points. For example, it cannot predict grasps on the side of a box opposite to the camera. M2T2 is also not able to directly predict actions without contact points, such as lifting. During placing, M2T2 needs a segmentation of the object in the gripper in order to estimate how far the gripper needs to be from the contact point between the object and the placement surface. The grasp predictions for each token can spread across multiple objects in close contact. Currently, M2T2 is trained and evaluated only on tabletop scenes, but this could be improved by training with more diverse procedurally generated synthetic data such as in [6, 28].

Acknowledgments

This work is supported by NSF Award IIS-2024057, project title "Collaborative Research: NRI: FND: Graph Neural Networks for Multi-Object Manipulation".

References

[1] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In Conference on Robot Learning, pages 287-318. PMLR, 2023.
[2] Y. Jiang, A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan. VIMA: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094, 2022.
[3] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.
[4] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302, 2022.
[5] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoF grasp generation in cluttered scenes. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13438-13444. IEEE, 2021.
[6] A. Murali, A. Mousavian, C. Eppner, A. Fishman, and D. Fox. CabiNet: Scaling neural collision detection for object rearrangement with procedural scene generation. arXiv preprint arXiv:2304.09302, 2023.
[7] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020.
[8] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox. 6-DoF grasping for target-driven object manipulation in clutter. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 6232-6238. IEEE, 2020.
[9] S. Song, A. Zeng, J. Lee, and T. Funkhouser. Grasping in the wild: Learning 6-DoF closed-loop grasping from low-cost demonstrations. IEEE Robotics and Automation Letters, 2020.
[10] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. In Conference on Robot Learning, pages 785-799. PMLR, 2023.
[11] J. Duan, Y. R. Wang, M. Shridhar, D. Fox, and R. Krishna. AR2-D2: Training a robot without a robot. arXiv preprint arXiv:2306.13818, 2023.
[12] L. Pinto and A. Gupta. Learning to push by grasping: Using multiple tasks for effective learning. In IEEE International Conference on Robotics and Automation (ICRA), pages 2161-2168. IEEE, 2017.
[13] W. Yuan, C. Paxton, K. Desingh, and D. Fox. SORNet: Spatial object-centric representations for sequential manipulation. In Conference on Robot Learning, pages 148-157. PMLR, 2022.
[14] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[15] A. Mousavian, C. Eppner, and D. Fox. 6-DOF GraspNet: Variational grasp generation for object manipulation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2901-2910, 2019.
[16] H.-S. Fang, C. Wang, M. Gou, and C. Lu. GraspNet-1Billion: A large-scale benchmark for general object grasping. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11444-11453, 2020.
[17] M. Breyer, J. J. Chung, L. Ott, R. Siegwart, and J. Nieto. Volumetric grasping network: Real-time 6-DoF grasp detection in clutter. In Conference on Robot Learning, 2020.
[18] Z. Jiang, Y. Zhu, M. Svetlik, K. Fang, and Y. Zhu. Synergies between affordance and geometry: 6-DoF grasp detection via implicit representations. Robotics: Science and Systems, 2021.
[19] K. Fang, Y. Zhu, A. Garg, A. Kurenkov, V. Mehta, L. Fei-Fei, and S. Savarese. Learning task-oriented grasping for tool manipulation from simulated self-supervision. The International Journal of Robotics Research, 39(2-3):202-216, 2020.
[20] A. Murali, W. Liu, K. Marino, S. Chernova, and A. Gupta. Same object, different grasps: Data and semantic knowledge for task-oriented grasping. In Conference on Robot Learning, pages 1540-1557. PMLR, 2021.
[21] M. Danielczuk, A. Mousavian, C. Eppner, and D. Fox. Object rearrangement using learned implicit collision functions. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6010-6017. IEEE, 2021.
[22] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning spatial structure for language-guided semantic rearrangement of novel objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 6322-6329. IEEE, 2022.
[23] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.
[24] B. Cheng, A. Choudhuri, I. Misra, A. Kirillov, R. Girdhar, and A. G. Schwing. Mask2Former for video instance segmentation. arXiv preprint arXiv:2112.10764, 2021.
[25] F. Milletari, N. Navab, and S.-A. Ahmadi. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV), pages 565-571. IEEE, 2016.
[26] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A large-scale grasp dataset based on simulation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6222-6227. IEEE, 2021.
[27] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470, 2021.
[28] A. Fishman, A. Murali, C. Eppner, B. Peele, B. Boots, and D. Fox. Motion policy networks. In Conference on Robot Learning, pages 967-977. PMLR, 2023.
[29] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
[30] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

Appendix

A Further Architectural Details

Scene Encoder: The scene encoder is a PointNet++ [23] with 4 multi-resolution set abstraction layers as the encoder and 4 feature propagation layers as the decoder. The input point cloud is subsampled to 16384 points. Each set abstraction layer selects N/4 seed points using furthest point sampling, where N is the size of the input pointwise feature map. Then, local features are computed around each seed point and propagated with an MLP. As a result, the scene encoder produces 4 feature maps of decreasing resolution, with 16384, 4096, 1024 and 256 points, respectively. We use the first per-point feature map for prediction and the remaining 3 for cross-attention.
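For reference, a minimal numpy version of the furthest point sampling step used by the set abstraction layers is sketched below; real implementations of PointNet++ [23] typically use a CUDA kernel instead.

```python
import numpy as np

def furthest_point_sampling(points, n_seeds):
    """Greedy furthest point sampling.

    points:  (N, 3) input point cloud
    n_seeds: number of seed points to select (N/4 per layer here)
    """
    N = points.shape[0]
    seeds = np.zeros(n_seeds, dtype=int)
    dist = np.full(N, np.inf)
    seeds[0] = np.random.randint(N)
    for i in range(1, n_seeds):
        # Track each point's distance to its nearest selected seed.
        d = np.linalg.norm(points - points[seeds[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        seeds[i] = int(np.argmax(dist))  # point furthest from all seeds so far
    return points[seeds]
```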
Contact Decoder: The contact decoder takes G = 100 grasp tokens and P = 64 placement tokens as input. These input query tokens are randomly initialized and learned during training. The G + P query tokens are fed into a transformer network with 3 blocks. Each block consists of a cross-attention layer, a self-attention layer and a feedforward MLP layer. In the cross-attention layer, the query tokens are cross-attended with one of the feature maps produced by the scene encoder to incorporate scene context. In the self-attention layer, the query tokens attend to themselves to propagate information among different queries. The width (i.e. the dimension of each token) is set to 256. The input tokens of each transformer block (including the initial one) are also used to produce intermediate mask predictions. Specifically, the tokens are multiplied with the per-point feature map from the scene encoder, passed through a sigmoid and thresholded to generate a per-point mask for each query. These intermediate masks are not only supervised by the ground truth masks during training, but are also subsampled and used as attention masks for the cross-attention layer. This forces the network to focus on relevant regions of the scene.

While the contact decoder is inspired by [24], it is specially designed to handle 3D inputs instead of images. For example, since the context features are grounded to 3D points, we compute position encodings from their 3D locations during cross-attention.

Modifications for RLBench: In RLBench, we break down tasks like "put meat off grill" into predicting a single grasp or placement pose conditioned on the language goal. To make the output conditioned on language, as shown in Fig. A, we introduce additional language tokens as query tokens, where the language tokens come from the language goal embedded by a frozen CLIP [29] encoder. Following PerAct [10], we trained M2T2 on 100 demos and evaluate on 25 demos with random seeds different from training.

Figure A: Network for language-conditioned pick-and-place in RLBench. Compared to the network for generic pick-and-place, there is only 1 grasp and 1 place query token. The predicted grasp and placement are conditioned on language commands encoded by a frozen CLIP.
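A minimal sketch of this conditioning is shown below, using the open-source CLIP package; the token dimensions and the projection layer are assumptions, not the paper's exact architecture.

```python
import torch
import clip  # OpenAI CLIP package

# Frozen text encoder; ViT-B/32 produces 512-dimensional text embeddings.
model, _ = clip.load("ViT-B/32", device="cpu")
model.eval()

def build_query_tokens(goal_text, grasp_token, place_token, proj):
    """Prepend a frozen-CLIP language token to the learned query tokens.

    grasp_token, place_token: (1, 1, D) learned query tokens
    proj: torch.nn.Linear(512, D) mapping CLIP features to token width D
    """
    with torch.no_grad():
        text_feat = model.encode_text(clip.tokenize([goal_text]))  # (1, 512)
    lang_token = proj(text_feat.float()).unsqueeze(1)              # (1, 1, D)
    return torch.cat([lang_token, grasp_token, place_token], dim=1)
```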
B Comparison Against Single-task Models

We also trained our model to perform only a single task. These task-specialized models are worse than our multi-task model (see Fig. Ba, Bb). This shows the importance of formulating both picking and placing under the same framework.

Figure B: Multi-task vs. single-task models: (a) against a pick-only model, (b) against a place-only model.

C Training

M2T2 is trained using the AdamW [30] optimizer with a fixed learning rate of 0.0008 on 8 V100 GPUs for 160 epochs. The batch size is 16 on each GPU. Training takes about 2 days to finish.

D Data Generation

We procedurally generated a large-scale synthetic dataset for training M2T2, as shown in Fig. C. In each scene, we randomly place 1-15 objects from the ACRONYM dataset [26] on a table. Each object in ACRONYM is labeled with 2000 grasps. We transform these grasps by the object pose and filter out colliding ones. The camera pose is randomized over the entire hemisphere above the table, making M2T2 very robust to viewpoint changes.

Figure C: Examples from our large-scale synthetic dataset, for the grasping (top) and placing (bottom) tasks respectively. Objects are randomly sampled from ACRONYM [26]. Each scene can contain up to 15 objects, which creates many very cluttered scenes. We also include the robot in the observation to simulate realistic occlusion by the robot arm.
Expansive Latent Planning for Sparse Reward Offline Reinforcement Learning

Robert Gieselmann (KTH Royal Institute of Technology, Sweden, robgie@kth.se), Florian T. Pokorny (KTH Royal Institute of Technology, Sweden, fpokorny@kth.se)

Abstract: Sampling-based motion planning algorithms excel at searching global solution paths in geometrically complex settings. However, classical approaches, such as RRT, are difficult to scale beyond low-dimensional search spaces and rely on privileged knowledge, e.g. about collision detection and underlying state distances. In this work, we take a step towards the integration of sampling-based planning into the reinforcement learning framework to solve sparse-reward control tasks from high-dimensional inputs. Our method, called VELAP, determines sequences of waypoints through sampling-based exploration in a learned state embedding. Unlike other sampling-based techniques, we iteratively expand a tree-based memory of visited latent areas, which is leveraged to explore a larger portion of the latent space for a given number of search iterations. We demonstrate state-of-the-art results in learning control from offline data in the context of vision-based manipulation under sparse reward feedback. Our method extends the set of available planning tools in model-based reinforcement learning by adding a latent planner that searches globally for feasible paths instead of being bound to a fixed prediction horizon.

Keywords: Reinforcement Learning, Planning, Robot Manipulation

1 Introduction

The acquisition of complex motor skills from raw sensory observations presents one of the main goals of robot learning. Reinforcement learning (RL) [1] provides a generic framework for obtaining such decision-making policies through interaction with an environment. Model-based RL [2] has recently gained much attention due to benefits in terms of sample efficiency and robustness in long-horizon scenarios. To address the issue of short-sighted decisions, model-based agents are often equipped with planning methods. However, effective planning with high-dimensional inputs, such as video data, is often challenging due to the increased complexity of the search space and the difficulty of generating accurate long-term predictions. Consequently, a growing body of research has explored the utilization of representation learning to simplify the decision-making problem by mapping it to an abstract and lower-dimensional latent state space [3, 4, 5, 6, 7].

The model-based reinforcement learning (RL) literature has investigated various planning methods in latent spaces, encompassing zero-order shooting-based approaches such as the Cross-Entropy Method (CEM) [8, 3] and Model-Predictive Path Integral (MPPI) [9, 10, 11, 7], first-order gradient-based optimization [12, 4], and more recently, trajectory collocation using second-order solvers [6]. Despite this methodological diversity, the majority of existing tools primarily facilitate local optimization within a fixed prediction horizon. Even with guidance from value heuristics, such as the one proposed in [7], local minima may still impede progress, particularly when estimating the optimal value function is difficult due to sparse reward feedback or limited training data.
This paper argues that planning in latent state spaces can benefit from more global exploration strategies that seek solutions beyond a fixed prediction horizon to avoid convergence to local minima.

Figure 1: Our method grows a search tree in the latent space to globally explore reward-maximizing paths (blue: start, red: goal nodes, green: estimated values).

The limitations observed in existing methods raise the need for more sophisticated planning strategies that can seamlessly integrate with learned state and dynamics models. Sampling-based motion planning [13] provides a diverse range of algorithms for finding global paths between states in continuous and geometrically complex environments. Recent works by [5, 14, 15] have proposed modifications of sampling-based planners in latent spaces. However, these approaches either rely on expert data or are not directly applicable to the reward-based learning setting. This paper explores the integration of sampling-based planning techniques in learned latent spaces, providing new avenues for model-based reinforcement learning. Specifically, we focus on the challenging scenario of offline RL [16], which is characterized by the amplified effects of value approximation errors [17]. Further, it allows us to better study the performance of planning in isolation by disentangling training and data collection. We introduce Value-guided Expansive Latent Planning (VELAP), which combines a sampling-based planning module with a suitable state embedding. Similar to [15], our search tree serves as a state memory and is used to guide exploration towards undiscovered areas within the data support. Moreover, we leverage value heuristics obtained through temporal difference learning to accelerate the discovery of high-valued states. We present a comprehensive benchmark evaluation focusing on vision-based control. For this purpose, we adapt the robot manipulator control environments from the meta-world benchmark suite [18]. Our experiments reveal that VELAP surpasses existing approaches by a significant margin in terms of episode success rate. We attribute this performance gain to its ability to overcome local value optima through global exploration, in contrast to the prevalent approach of optimizing over a fixed horizon.

2 Related Work

Learning to control from visual input is becoming increasingly popular due to the wide availability of inexpensive sensors and the generic representational format of images. A considerable number of works use deep generative models [19] to generate future images and plan actions via model predictive control [20, 21, 22, 23, 24]. These methods allow visual inspection of paths by humans, but this is accompanied by the difficulty of generating and evaluating high-resolution video sequences over many future steps. In this work, we instead follow the latent planning paradigm, which bypasses the need to synthesize high-dimensional samples and enables farsighted search at lower computational cost. Several works learn image distance metrics using unsupervised learning [25, 5] or RL [26, 27] to build an environment map that can be searched to generate visual paths. Map-based approaches have shown impressive results for navigation in static environments, but often do not scale well to complex settings with object interaction.

The work in [3] presents a model-based RL agent which uses a recurrent state space model to map image observations to a lower-dimensional latent space.
Time-discrete state-action trajectories are optimized using the Cross-Entropy Method (CEM) [8] with a learned latent dynamics model. In a follow-up work, [4] uses a differentiable latent planner to efficiently learn behaviors by propagating analytic value gradients back in time. Similarly, [12, 28] implement a differentiable planner that optimizes latent trajectories via gradient descent. [29] trains a goal-conditioned RL agent which generates future subgoal states through trajectory optimization within the latent space of a Variational Autoencoder (VAE) [30]. [6] introduces the concept of collocation to model-based RL for visual robot manipulation and optimizes state-action sequences directly with a second-order optimizer. [10] solves dexterous manipulation tasks through latent trajectory optimization using a reward-weighted adaptation of MPPI [9]. Similarly, the method in [7] uses value estimates as cost within an MPPI-based latent policy. The line of work in [31, 32, 33] combines model-based RL with Monte Carlo Tree Search (MCTS) [34] for long-horizon decision-making in vision-based tasks such as playing Atari games. While MCTS is mainly designed for discrete settings, [35] presents an adaptation to continuous action spaces where the search and the policy improvement are based on action sampling.

To address the offline RL setting, [36] modifies the MPPI agent in [10]. Their approach ensures adherence to the data support by sampling actions from an imitation learning policy, thus addressing the issue of out-of-distribution actions [17]. Similarly, [11] presents a model-based agent for online and offline RL which uses MPPI to maximize the expected return of imagined trajectories while being guided by a learned policy. [37] designed a hierarchical agent for sparse reward manipulation tasks: a VAE-based manager policy predicts subgoal states which a worker policy must achieve within a fixed contingent of steps. In [38], an RRT-like [39] latent planner is presented for planning from high-dimensional data. Compared to ours, their method relies on collision checking data and is not designed for the more general reward-based setting. [14] leverages play data obtained from a human operator to train a task-conditioned policy which guides a tree search in a learned latent space. Our method is most related to the one recently presented in [15]. The authors introduce a sampling-based latent planner similar to the classical Expansive Space Trees (EST) algorithm [40]. Nevertheless, notable differences arise in terms of the problem types we tackle, resulting in the adoption of distinct sets of tools. While their approach is confined to goal-reaching navigation tasks, our method accommodates more general task specifications by leveraging sparse rewards. To accomplish this, we employ value-based reinforcement learning to (a) jointly optimize representations for planning alongside the control policy, (b) integrate learned heuristics for node and action selection during planning, and (c) identify suitable goals based on value estimates. These advancements significantly enhance the capabilities and versatility of our method, surpassing the scope of the previous work.

3 Preliminaries

MDPs and Offline RL: A Markov decision process (MDP) is defined by a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $\mathcal{P}(s'|s,a)$ are the state dynamics, $r(s,a)$ is a scalar reward function, and $\gamma$ is a discount factor.
The goal of reinforcement learning (RL) [1] is to find a policy $\pi(a|s)$ that maximizes the expected discounted future reward $R[\tau]$ over all trajectories $\tau$ induced by $\pi$ given an initial state distribution $p_0$, i.e., to optimize $\mathbb{E}_\pi[R[\tau]]$. The problem of offline RL [16] arises when training from a fixed dataset $D$ consisting of trajectories generated by a behavior policy $\pi_\beta$. Due to the limited coverage of $D$ across the state-action space, effectively addressing the adverse consequences of poor approximations outside the data support becomes crucial in the development of offline RL methods [17].

Hindsight data relabeling: Relabeling data has emerged as a popular technique in goal-conditioned off-policy RL [41, 42, 43, 44, 45] for the purpose of enhancing training efficiency. The underlying idea behind hindsight relabeling is to transform unsuccessful trajectories into successful ones by retrospectively modifying their goals [41]. This approach extends to offline trajectory datasets, where relabeling can be used to synthesize experiences for learning state-reaching behavior [46, 47]. Failed transitions are relabeled by designating a subsequent state as the desired goal and adjusting the corresponding reward accordingly. A connection between hindsight relabeling and contrastive learning was recently discussed in [48].
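A minimal relabeling routine in this spirit is sketched below; the trajectory layout and the future-goal sampling rule are assumptions rather than the exact scheme of App. B.4.

```python
import random

def relabel_trajectory(traj):
    """Hindsight goal relabeling for a sparse binary reward, in the
    spirit of [41, 46]. For each transition we pick a random future
    state of the same trajectory as the goal; the transition is marked
    successful only if its own next state is that goal.

    traj: list of (s, a, s_next) tuples
    """
    relabeled = []
    for t, (s, a, s_next) in enumerate(traj):
        goal_idx = random.randint(t, len(traj) - 1)   # a future transition
        goal = traj[goal_idx][2]                      # its next state
        reward = 1.0 if goal_idx == t else 0.0        # goal reached next step?
        done = reward > 0
        relabeled.append((s, a, s_next, goal, reward, done))
    return relabeled
```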
Sampling-based motion planning: Sampling-based motion planners [13] compute feasible paths connecting two points in a robot configuration space. At their core, these methods explore and construct a graphical representation of the continuous search space. The rapidly-exploring random tree (RRT) [39] is a widely used single-query planner, particularly suitable for scenarios with varying environments. It incrementally expands a tree structure by alternately sampling collision-free states from the robot's configuration space and attempting to connect these to the nearest neighbor in the tree. Once a node reaches the vicinity of the target, a possible solution path is given by backtracing to the root of the tree. Instead of sampling states from the configuration space, the expansive space trees (EST) planner [40] generates new states by expanding existing tree nodes using randomly sampled actions.

4 Value-guided Expansive Latent Trees

In this section, we detail the elements that comprise VELAP, our proposed offline RL planning agent.

Problem definition: We are interested in solving sparse reward continuous control tasks from high-dimensional inputs. For this purpose, we choose the example of visual control for a state space $\mathcal{S}$ and action space $\mathcal{A} = \mathbb{R}^{d_{action}}$. $\mathcal{S} = \mathbb{R}^{W \times H \times C \times N}$ describes sequential image data, where W is the image width, H the height, C the channel dimension and N the number of frames. Note that we employ the MDP formulation, hence we assume that states $s \in \mathcal{S}$ are informative enough to predict the distribution of future states. A sparse binary reward $r : \mathcal{S} \times \mathcal{A} \to \{0, 1\}$ is designed which provides a positive signal only when reaching the final goal, upon which we terminate the episode. For training, we use an offline dataset $D$ consisting of recorded transitions obtained from a sub-optimal policy.

Components: To tackle the specified problem, we propose a model-based RL agent that incorporates a tree-based search, inspired by ESTs [40], within a learned representation space. Our preference for EST over an RRT-based approach [39] is driven by the reasoning that the expansion step in ESTs eliminates the need for a global state sampler. It should be noted that learning such generative models can be challenging, as they require high-fidelity predictions to prevent negative assessment of out-of-distribution samples. Our approach involves several key components, outlined in Eq. 1:

State encoder: $\phi : \mathcal{S} \to \mathcal{Z}$
Dynamics: $h : \mathcal{Z} \times \mathcal{A} \to \mathcal{Z}$
Action model: $g : \mathcal{Z} \times \mathbb{R}^m \to \mathcal{A}$
Local policy: $\pi_l : \mathcal{Z} \times \mathcal{Z} \to \mathcal{A}$, with critic $Q_l : \mathcal{Z} \times \mathcal{Z} \times \mathcal{A} \to \mathbb{R}$
Global policy: $\pi_g : \mathcal{Z} \to \mathcal{A}$, with critic $Q_g : \mathcal{Z} \times \mathcal{A} \to \mathbb{R}$     (1)

The encoder $\phi$ maps input states to latent encodings, while the dynamics model $h$ predicts future latent states based on actions, serving as a tool for expanding the search tree during planning. A local policy $\pi_l$ is trained to navigate between neighboring states in the tree. The global policy $\pi_g$ determines optimal actions with respect to our task goal. During planning, we will use $Q_l$ to derive a distance proxy between states and $Q_g$ to estimate the remaining number of steps to the goal. Among the various actor-critic offline RL methods available, we select TD3-BC [49] due to its robustness and ease of implementation. To improve the predictions of $Q_l$ and measure value uncertainty, we employ an ensemble of $n_{ens}$ Q-heads $\{Q_{l_1}, \ldots, Q_{l_{n_{ens}}}\}$ similar to [50] (see App. B). In the following, we use $k$ to denote the $k$-th ensemble member and define $Q^{i,j}_{min} := \min\{Q_{l_k}(z_i, z_j, \pi_l(z_i, z_j))\}_{k=1}^{n_{ens}}$ as the minimum and $Q^{i,j}_{std} := \mathrm{std}(\{Q_{l_k}(z_i, z_j, \pi_l(z_i, z_j))\}_{k=1}^{n_{ens}})$ as the standard deviation of the ensemble predictions between two states $z_i$ and $z_j$ with respect to $\pi_l$. Finally, a conditional generative model, representing our action model $g$, enables sampling actions from the state-conditioned action distribution. We use $\mathbb{R}^m$ to denote the input noise used during the generation process.

Alignment of representation and planner: To achieve long-horizon planning and control in $\mathcal{Z}$, we seek a state representation which favors accurate learning of dynamics in order to generate valid future waypoint states over many time steps. Secondly, the state encoding should facilitate the optimization of our value functions and control policies. Existing model-based RL approaches often rely on surrogate metrics for model learning, such as mean-squared prediction error or pixel-wise reconstruction. These metrics do not ensure alignment with actual control performance, leading to a mismatch between the environment model and the planner [51], which can adversely affect the controller's performance. To address the challenge of long-horizon predictions, we optimize our state encoder $\phi$ together with the latent dynamics $h$. In addition, we facilitate the approximation of the local and global value functions by training their models jointly with the encoding. Our model training objective $L_{model}$ is shown in Eq. 2. Here, $L_{Q_{l_k}}$ represents the temporal difference (TD) loss for training $Q_{l_k}$, $L_{Q_g}$ corresponds to the TD loss for training $Q_g$, and $L_h$ denotes the loss function for the dynamics model $h$. The hyperparameters $c_0$ and $c_1$ act as weighting factors.

$$L_{model} = \frac{1}{n_{ens}} \sum_k L_{Q_{l_k}} + c_0 \cdot L_{Q_g} + c_1 \cdot L_h \quad (2)$$

$$L_{Q_{l_k}} = \mathbb{E}_{D'}\left[(Q_{l_k}(z_t, z_*, a_t) - (r_t + \gamma Q^{t+1,*}_{min}))^2\right] \quad (3)$$

$$L_{Q_g} = \mathbb{E}_D\left[(Q_g(z_t, a_t) - (r_t + \gamma Q_g(z_{t+1}, \pi_g(z_{t+1}))))^2\right] \quad (4)$$
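The sketch below assembles this objective for one batch. The module interfaces and batch layout are assumptions; target networks are omitted and a plain MSE stands in for the contrastive dynamics loss of [53].

```python
import torch
import torch.nn.functional as F

def model_loss(q_l_heads, pi_l, q_g, pi_g, h, batch,
               c0=1.0, c1=1.0, gamma=0.99):
    """Joint objective of Eqs. (2)-(4), sketched for one latent batch.

    batch: latent transitions (z, a, r, z_next) together with
           hindsight-relabeled goals and rewards (z_goal, r_goal)
    """
    z, a, r, z_next, z_goal, r_goal = batch
    # Eq. (3): local TD losses on the relabeled data, averaged over heads.
    with torch.no_grad():
        a_next = pi_l(z_next, z_goal)
        q_next = torch.stack([q(z_next, z_goal, a_next) for q in q_l_heads])
        target_l = r_goal + gamma * q_next.min(dim=0).values  # conservative min
    loss_l = torch.stack(
        [F.mse_loss(q(z, z_goal, a), target_l) for q in q_l_heads]).mean()
    # Eq. (4): global TD loss on the original data.
    with torch.no_grad():
        target_g = r + gamma * q_g(z_next, pi_g(z_next))
    loss_g = F.mse_loss(q_g(z, a), target_g)
    # Dynamics consistency term (placeholder for L_h of [53]).
    loss_h = F.mse_loss(h(z, a), z_next.detach())
    return loss_l + c0 * loss_g + c1 * loss_h
```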
In accordance with the standard TD3-BC training objective, we simultaneously optimize the corresponding policies $\pi_l$ and $\pi_g$ via $L_{\pi_l}$ and $L_{\pi_g}$ (Eq. 5). Note that this step is done by alternating between optimizing $L_{model}$ and policy improvement, while the encoder parameters are kept fixed during the policy update. (We optimize the state representation during the critic update instead of the policy improvement step, as motivated by the empirical analysis in [52].)

$$L_{\pi_l} = \mathbb{E}_{D'}[-Q^{t,*}_{min}] + c_2 \cdot \mathbb{E}_{D'}[(\pi_l(z_t, z_*) - a_t)^2]$$
$$L_{\pi_g} = \mathbb{E}_D[-Q_g(z_t, \pi_g(z_t))] + c_3 \cdot \mathbb{E}_D[(\pi_g(z_t) - a_t)^2] \quad (5)$$

To provide data for training the local policy and value functions $\pi_l$ and $Q_{l_k}$, we synthesize a dataset of state-reaching experiences $D'$ by relabeling the transitions in $D$. More specifically, we achieve this using hindsight goal relabeling [46, 47] to sample goals $s_* \in \mathcal{S}$ and use a binary reward to indicate success (see App. B.4). For training the dynamics model, we use the contrastive loss presented in [53]. In practice, we found this approach to work better at maintaining accurate long-term predictions compared to a standard mean-squared error objective (App. E.3).

Tree expansion: Our aim is to solve the RL decision-making problem by searching the latent state space for the shortest connection towards valid goal states. Similar to [15], we follow the concept of EST planners [40], which iteratively expand the current set of nodes through action sampling. The tree $T = (\mathcal{V}, \mathcal{E})$ can be seen as a growing memory of latent nodes $\mathcal{V} \subset \mathcal{Z}$ and transitions $\mathcal{E} \subset \mathcal{Z} \times \mathcal{Z}$. The core mechanism behind our expansion strategy is summarized in Alg. 1.

Algorithm 1: Node sampling and tree expansion
1: Given: $z_{init}$, $n_{iter}$, $n_{sim}$, $g$, $h$, $\pi_g$, $Q_g$, $Q_{l_k}$, $\pi_l$
2: Initialize: $\mathcal{V} \leftarrow \{z_{init}\}$, $\mathcal{E} \leftarrow \emptyset$
3: for $n_{iter}$ steps do
4:   Sample a node $z_{exp}$ from $\mathcal{V}$ according to $P_{node}(\mathcal{V})$
5:   $z_{new} \leftarrow z_{exp}$
6:   Simulate forward using the dynamics for $n_{sim}$ steps:
7:   for $n_{sim}$ steps do
8:     Sample an action $a \sim g(\cdot|z_{new})$ (or $a = \pi_g(z_{new})$)
9:     $z_{new} \leftarrow h(z_{new}, a)$
10:  end for
11:  Reject the node if it is too close to an existing one in the tree
12:  or if the value uncertainty is too high:
13:  if $Q^{exp,new}_{min} > \tau^{low}_{discard}$ and $Q^{exp,new}_{std} < \tau^{std}_{discard}$ then
14:    if $\max\{Q^{i,new}_{min} \mid z_i \in \mathcal{V}\} < \tau^{high}_{discard}$ then
15:      Add the new node to the tree:
16:      $\mathcal{V} \leftarrow \mathcal{V} \cup \{z_{new}\}$; $\mathcal{E} \leftarrow \mathcal{E} \cup \{z_{exp \to new}\}$
17:    end if
18:  end if
19: end for

We first initialize $T = (\mathcal{V} = \{z_{init}\}, \mathcal{E} = \emptyset)$, where $z_{init} \in \mathcal{Z}$ is the latent encoding of the current state $s_{init} \in \mathcal{S}$ obtained from $\phi$. For $n_{iter}$ steps, a node $z_{exp}$ is drawn using a categorical distribution $P_{node}$ defined over $\mathcal{V}$. Starting from $z_{exp}$, the dynamics $h$ rolls out a short $n_{sim}$-step state sequence given actions drawn from our generative model $g$ (or $\pi_g$). Since $Q_{l_k}$ estimates the return for reaching a particular node under sparse binary rewards, a temporal distance proxy is given by $\log_\gamma Q_{l_k}$. To account for value approximation errors [17], we use the minimum value among the ensemble predictions to compute a conservative distance estimate. After every $n_{sim}$-step expansion with $h$, we determine whether the transition from $z_{exp}$ to $z_{new}$ is feasible by checking if $Q^{exp,new}_{min}$ is above a threshold $\tau^{low}_{discard}$. If it lies below this threshold, we discard $z_{new}$. Secondly, we also reject it if the corresponding value of $Q^{exp,new}_{std}$ is above a threshold $\tau^{std}_{discard}$. The purpose of this second rejection step is to filter states in which the epistemic uncertainty, i.e. model uncertainty, is high, and thereby avoid the evaluation of high-uncertainty areas, for example outside the support of the latent data distribution. Lastly, we determine whether the newly generated node is sufficiently novel given the existing ones in $T$ and discard it otherwise. This sparsification step avoids redundant tree nodes and is important to keep the computations at a moderate level. More specifically, we discard $z_{new}$ if $\max\{Q^{i,new}_{min} \mid z_i \in \mathcal{V}\}$ is above a threshold $\tau^{high}_{discard}$. In other words, we find the closest neighbor $z_{neigh}$ in the tree and reject $z_{new}$ if there already exists a node which can transition to it within a few steps. If $z_{new}$ passes the previous stages, it is added to $T$, i.e. $\mathcal{V} \leftarrow \mathcal{V} \cup \{z_{new}\}$ and $\mathcal{E} \leftarrow \mathcal{E} \cup \{z_{exp \to new}\}$. For more details on the expansion step, see App. B.6.
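A compact version of this expansion loop, mirroring Alg. 1, is sketched below; the callables wrap the learned models and the threshold values are placeholders, not tuned hyperparameters.

```python
def expand_tree(z_init, n_iter, n_sim, sample_action, dynamics,
                q_min, q_std, sample_node,
                tau_low=0.1, tau_std=0.2, tau_high=0.9):
    """EST-style latent tree expansion following Alg. 1.

    sample_action: z -> a, wrapping g (or pi_g)
    dynamics:      (z, a) -> z', wrapping h
    q_min, q_std:  (z_i, z_j) -> ensemble min / std of Q_l
    sample_node:   node list -> index, wrapping P_node
    """
    nodes, edges = [z_init], []
    for _ in range(n_iter):
        i_exp = sample_node(nodes)            # draw an expansion node
        z_new = nodes[i_exp]
        for _ in range(n_sim):                # roll out the latent dynamics
            z_new = dynamics(z_new, sample_action(z_new))
        # Reachability and epistemic-uncertainty rejection.
        if q_min(nodes[i_exp], z_new) <= tau_low:
            continue
        if q_std(nodes[i_exp], z_new) >= tau_std:
            continue
        # Sparsification: reject if an existing node is already "close".
        if max(q_min(z, z_new) for z in nodes) >= tau_high:
            continue
        edges.append((i_exp, len(nodes)))
        nodes.append(z_new)
    return nodes, edges
```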
Node sampling: So far we have not defined the node sampling distribution $P_{node}$. To achieve fast and task-oriented exploration, we combine two sampling heuristics based on (a) the inverse number of neighbors around each node and (b) the state-action value $Q_g$. Heuristic (a) leads to quick exploration of undiscovered latent states, while (b) drives the planner towards high-valued areas. For both parts, we use exponential weighting, as shown in Eq. 6 and Eq. 7. Here, $n^{neigh}_i$ denotes the number of incoming neighbors of a node ($\mathcal{V}_{neigh \to i}$). We compose $P_{node}$ by sampling according to $P_{sparse}$ with probability $p_{sparse}$, from $P_{value}$ with probability $p_{value}$, and uniformly at random otherwise.

$$P_{sparse}(z_i) = \frac{e^{-n^{neigh}_i / T_{sparsity}}}{\sum_{z_j \in \mathcal{V}} e^{-n^{neigh}_j / T_{sparsity}}} \quad (6)$$

$$P_{value}(z_i) = \frac{e^{Q^g_i / T_{value}}}{\sum_{z_j \in \mathcal{V}} e^{Q^g_j / T_{value}}} \quad (7)$$
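A sketch of the resulting mixture distribution is given below; the temperatures and mixture probabilities are placeholder values, not the hyperparameters used in the paper.

```python
import numpy as np

def sample_node_index(n_neigh, q_values, T_sparsity=1.0, T_value=1.0,
                      p_sparse=0.4, p_value=0.4, rng=np.random):
    """Draw an expansion node from the mixture P_node of Eqs. (6)-(7).

    n_neigh:  (|V|,) number of incoming neighbors per node
    q_values: (|V|,) global value estimate Q_g per node
    """
    def softmax(x):
        x = x - x.max()                        # numerical stability
        e = np.exp(x)
        return e / e.sum()

    u = rng.random()
    if u < p_sparse:
        probs = softmax(-np.asarray(n_neigh) / T_sparsity)   # Eq. (6)
    elif u < p_sparse + p_value:
        probs = softmax(np.asarray(q_values) / T_value)      # Eq. (7)
    else:
        probs = np.full(len(n_neigh), 1.0 / len(n_neigh))    # uniform
    return rng.choice(len(n_neigh), p=probs)
```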
IRIS [37], an offline RL method particularly designed for sparse reward settings; it uses a hierarchical decomposition of the policy in which a manager predicts feasible subgoals, sampled from a generative model (cVAE) over future candidate states (n-step horizon), which a worker policy must achieve. We also examine IRIS (multi-step), where the set of subgoals is generated by shooting a future state sequence using the cVAE. To establish a fair comparison and disentangle the effects of the representation and planner, we use the same representations and dynamics models across all methods. Further details are provided in App. D.

Figure 2: Evaluation environments adapted from the meta-world robotics benchmark: (a) SpiralMaze, (b) ObstacleMaze, (c) WindowClose, (d) FaucetClose, (e) ButtonWall, (f) DrawerButton.

Table 1: Success rates and std. deviations (%) on test cases with unseen object variations (except SpiralMaze).

Method        BC     BC (D*)  TD3-BC  IQL    MPPI    MBOP    IRIS    IRIS (multi-step)  VELAP
SpiralMaze    0±0    0±0      0±0     0±0    0±0     0±0     0±0     15±31              94±3
ObstacleMaze  0±0    15±6     35±22   6±3    83±11   40±25   50±25   62±14              97±2
Window        0±0    34±11    16±8    9±4    70±7    23±4    69±3    43±20              78±4
Faucet        0±0    36±6     13±7    8±4    41±7    33±2    10±2    3±1                51±12
ButtonWall    0±0    0±0      2±2     0±0    9±10    0±0     35±5    8±8                76±9
DrawerButton  0±0    0±0      0±0     1±0    0±0     0±0     5±3     0±0                11±3

Tasks — We consider the simulated visuomotor control tasks depicted in Fig. 2. In SpiralMaze, the velocity of a block robot is controlled in order to navigate from the outermost point of a maze to the inner region. This task was designed to require farsighted planning, as the temporal distance to the goal is approximately 150 steps. In the ObstacleMaze environment, the block robot must travel to the opposite wall of the room while two randomly positioned obstacles appear in the center of the workspace. Additionally, we evaluate the WindowClose and FaucetClose environments from the meta-world [18] benchmark. As done in [6], we use sparse binary rewards and render images from a static camera. Moreover, we propose two new settings, ButtonWall and DrawerButton. In ButtonWall, the robot must first navigate around a wall (varying position) and then press a button. To solve the task DrawerButton, the agent must first close a drawer and then press a button (both randomly initialized). Our training data D consists of random trajectory data and a small number of noisy expert demonstrations. We use a latent space of size 32 and RGB images of resolution 64×64. For details on the environments, data collection, and hyperparameters see App. B.

Results — An overview of the numerical benchmark evaluation is given in Table 1. VELAP consistently outperforms the baselines across all environments in terms of average episode success rate. The improvements are particularly visible in tasks which require farsighted planning, such as SpiralMaze and ButtonWall. These results support that our tree-based exploration strategy is indeed effective at planning for sparse-reward offline RL. Fig. 3 illustrates a 2D embedding of a latent path for the SpiralMaze task computed with VELAP. It suggests that our method explores global solutions and identifies one which reaches through the entire maze. Fig. 3 demonstrates similar capabilities for the ObstacleMaze task. Further, it shows that our representation accommodates the different locations of obstacles, creating latent spaces that mirror the topology of the underlying state space. Ablation results concerning the embedding and dynamics model are provided in App. E.
5.2 Physical robot experiments

To test our method under realistic conditions, we designed two manipulation tasks using a low-cost robot arm (Fig. 4). In the first, the robot must push a sponge into a marked region. In the second task, the robot holds a rope which it must unravel from a cylindrical object; it then needs to position the held end of the rope precisely onto a designated colored area. The robot is controlled by providing desired end effector position and wrist orientation displacements, resulting in a 4-dimensional action space. Both tasks are challenging in terms of perception and control, as the initial configurations of the objects are randomized. The results of a comparison with BC, BC (D*) and IRIS are presented in Table 11 (more details in App. E.1). Our method surpasses the performance of its baselines, achieving 14/20 successful episodes on the sponge manipulation and 8/20 on the rope manipulation task. Video demonstrations can be found at https://sites.google.com/view/velap-corl/home.

Figure 3: Supporting visualizations for the SpiralMaze (top row) and ObstacleMaze (bottom row) tasks. (a) x-y robot positions for a uniformly sampled set of states. (b) 2D Isomap embeddings of the learned latent space (color encodes correspondence to the robot positions in (a)). (c) Approximated global Q-values for x-y robot positions. (d) Approximated global Q-values for the Isomap embedding of the learned latent state space. (e) Example image input frames. (f) Planned latent paths (red) computed with VELAP and projected to the Isomap embedding.

Figure 4: Physical manipulation of a sponge (left) and rope (right).

6 Limitations and Future Directions

Our method lays the groundwork for future enhancements. While VELAP was designed for offline RL, it can be adapted to the online setting by interleaving data collection and learning [3, 6]. It could also enhance sample efficiency by improving policy and critic updates through planning [56, 4, 31]. Currently, VELAP uses deterministic dynamics and encoder models, limiting it to fully-observable MDPs. By incorporating probabilistic transition models and state filtering approaches (similar to [3, 57]), it can be extended to partially observable and stochastic settings. While our method outperforms existing baselines, it still struggles with the most complex tasks, which we attribute to the remaining difficulty of estimating accurate latent dynamics for long-horizon planning. Presently, our planner selects paths by minimizing the distance to high-valued states. To further minimize the effects of model inaccuracies, the planning strategy could incorporate uncertainty propagation and assessment in the tree branches. Notably, we discovered that in more complex tasks, such as DrawerButton and the rope manipulation scenario, failures often occurred when the agent moved to the final goal region without completing the required subtask, like first closing the drawer or maneuvering the rope around the pole. We believe that improved handling of uncertainties, along with risk-aware measures, could lessen the planner's greedy exploitation of model errors. In this context, planning in belief spaces [58] provides another potential improvement avenue.

7 Conclusion

We introduced VELAP, an agent designed for model-based planning in sparse-reward offline RL. Diverging from the usual model-based RL planners, our approach employs a tree-based exploration algorithm inspired by the sampling-based planners commonly used in robot motion planning.
Through empirical evaluation on visual control tasks, we showcased substantial improvements achieved by our method compared to existing baselines. We hope that these results will inspire further exploration into the fusion of sampling-based planning, representation learning, and model-based RL.

Acknowledgments

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

References

[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html.
[2] T. M. Moerland, J. Broekens, and C. M. Jonker. Model-based reinforcement learning: A survey. arXiv preprint arXiv:2006.16712, 2020.
[3] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2555–2565. PMLR, 2019.
[4] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=S1lOTC4tDS.
[5] K. Liu, T. Kurutach, C. Tung, P. Abbeel, and A. Tamar. Hallucinative topological memory for zero-shot visual planning. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6259–6270. PMLR, 2020. URL http://proceedings.mlr.press/v119/liu20h.html.
[6] O. Rybkin, C. Zhu, A. Nagabandi, K. Daniilidis, I. Mordatch, and S. Levine. Model-based reinforcement learning via latent-space collocation. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 9190–9201. PMLR, 2021. URL https://proceedings.mlr.press/v139/rybkin21b.html.
[7] N. Hansen, X. Wang, and H. Su. Temporal difference learning for model predictive control. In ICML, 2022.
[8] Z. I. Botev, D. P. Kroese, R. Y. Rubinstein, and P. L'Ecuyer. Chapter 3 – The cross-entropy method for optimization. In Handbook of Statistics, volume 31, pages 35–59. Elsevier, 2013. doi:10.1016/B978-0-444-53859-8.00003-5.
[9] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with model predictive path integral control. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1433–1440, 2016. doi:10.1109/ICRA.2016.7487277.
[10] A. Nagabandi, K. Konoglie, S. Levine, and V. Kumar. Deep dynamics models for learning dexterous manipulation. In Conference on Robot Learning (CoRL), 2019.
[11] H. Sikchi, W. Zhou, and D. Held. Learning off-policy with online planning. In Conference on Robot Learning, pages 1622–1633. PMLR, 2022.
[12] A. Srinivas, A. Jabri, P. Abbeel, S. Levine, and C. Finn. Universal planning networks: Learning generalizable representations for visuomotor control. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4732–4741. PMLR, 2018.
URL http://proceedings.mlr.press/v80/srinivas18b.html.
[13] S. M. LaValle. Planning Algorithms. Cambridge University Press, Cambridge, U.K., 2006. Available at http://planning.cs.uiuc.edu/.
[14] B. Ichter, P. Sermanet, and C. Lynch. Broadly-exploring, local-policy trees for long-horizon task planning. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=yhy25u-DrjR.
[15] R. Gieselmann and F. T. Pokorny. Latent planning via expansive tree search. In Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=zSdz5scsnzU.
[16] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
[17] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052–2062, 2019.
[18] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning (CoRL), 2019. URL https://arxiv.org/abs/1910.10897.
[19] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[20] C. Finn, I. Goodfellow, and S. Levine. Unsupervised learning for physical interaction through video prediction. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/d9d4f495e875a2e075a1a4a6e1b9770f-Paper.pdf.
[21] F. Ebert, C. Finn, S. Dasari, A. Xie, A. Lee, and S. Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
[22] L. Yen-Chen, M. Bauza, and P. Isola. Experience-embedded visual foresight. In Conference on Robot Learning, 2019.
[23] S. Nair and C. Finn. Hierarchical foresight: Self-supervised learning of long-horizon tasks via visual subgoal generation. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1gzR2VKDH.
[24] S. Tian, S. Nair, F. Ebert, S. Dasari, B. Eysenbach, C. Finn, and S. Levine. Model-based visual planning with self-supervised functional distances. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=UcoXdfrORC.
[25] N. Savinov, A. Dosovitskiy, and V. Koltun. Semi-parametric topological memory for navigation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SygwwGbRW.
[26] B. Eysenbach, R. Salakhutdinov, and S. Levine. Search on the replay buffer: Bridging planning and reinforcement learning. In Advances in Neural Information Processing Systems, 2019.
[27] M. Laskin, S. Emmons, A. Jain, T. Kurutach, P. Abbeel, and D. Pathak. Sparse graphical memory for robust planning. In Advances in Neural Information Processing Systems, 2020.
[28] T. Yu, G. Shevchuk, D. Sadigh, and C. Finn. Unsupervised visuomotor control through distributional planning networks. In Proceedings of Robotics: Science and Systems, Freiburg im Breisgau, Germany, June 2019. doi:10.15607/RSS.2019.XV.020.
[29] S. Nasiriany, V. Pong, S. Lin, and S. Levine. Planning with goal-conditioned policies. In Advances in Neural Information Processing Systems, volume 32.
Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/c8cc6e90ccbff44c9cee23611711cdc4-Paper.pdf.
[30] D. P. Kingma and M. Welling. Auto-encoding variational Bayes, 2014.
[31] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, T. Lillicrap, and D. Silver. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020. doi:10.1038/s41586-020-03051-4. URL https://doi.org/10.1038%2Fs41586-020-03051-4.
[32] J. Schrittwieser, T. K. Hubert, A. Mandhane, M. Barekatain, I. Antonoglou, and D. Silver. Online and offline reinforcement learning by planning with a learned model. In Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=HKtsGW-lNbw.
[33] W. Ye, S. Liu, T. Kurutach, P. Abbeel, and Y. Gao. Mastering Atari games with limited data. In NeurIPS, 2021.
[34] R. Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In CG'06, pages 72–83, Berlin, Heidelberg, 2006. Springer-Verlag. ISBN 3540755373.
[35] T. Hubert, J. Schrittwieser, I. Antonoglou, M. Barekatain, S. Schmitt, and D. Silver. Learning and planning in complex action spaces. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 4476–4486. PMLR, 2021. URL https://proceedings.mlr.press/v139/hubert21a.html.
[36] A. Argenson and G. Dulac-Arnold. Model-based offline planning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=OMNB1G5xzd4.
[37] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox. IRIS: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 4414–4420. IEEE, 2020.
[38] B. Ichter, J. Harrison, and M. Pavone. Learning sampling distributions for robot motion planning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7087–7094, 2018. doi:10.1109/ICRA.2018.8460730.
[39] S. M. LaValle. Rapidly-exploring random trees: A new tool for path planning. Technical report, 1998.
[40] D. Hsu, J.-C. Latombe, and R. Motwani. Path planning in expansive configuration spaces. In Proceedings of International Conference on Robotics and Automation, volume 3, pages 2719–2726, 1997. doi:10.1109/ROBOT.1997.619371.
[41] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in Neural Information Processing Systems, 30, 2017.
[42] B. Eysenbach, X. Geng, S. Levine, and R. R. Salakhutdinov. Rewriting history with inverse RL: Hindsight inference for policy improvement. Advances in Neural Information Processing Systems, 33:14783–14795, 2020.
[43] A. Levy, R. Platt, and K. Saenko. Hierarchical reinforcement learning with hindsight. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryzECoAcY7.
[44] T. Davchev, O. O. Sushkov, J.-B. Regli, S. Schaal, Y. Aytar, M. Wulfmeier, and J. Scholz. Wish you were here: Hindsight goal selection for long-horizon dexterous manipulation. In International Conference on Learning Representations, 2022.
URL https://openreview.net/forum?id=FKp8-pIRo3y.
[45] A. Li, L. Pinto, and P. Abbeel. Generalized hindsight for reinforcement learning. Advances in Neural Information Processing Systems, 33:7754–7767, 2020.
[46] Y. Chebotar, K. Hausman, Y. Lu, T. Xiao, D. Kalashnikov, J. Varley, A. Irpan, B. Eysenbach, R. C. Julian, C. Finn, and S. Levine. Actionable models: Unsupervised offline reinforcement learning of robotic skills. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 1518–1528. PMLR, 2021. URL https://proceedings.mlr.press/v139/chebotar21a.html.
[47] J. Li, C. Tang, M. Tomizuka, and W. Zhan. Hierarchical planning through goal-conditioned offline reinforcement learning. IEEE Robotics and Automation Letters, 7(4):10216–10223, 2022. doi:10.1109/LRA.2022.3190100.
[48] B. Eysenbach, T. Zhang, S. Levine, and R. Salakhutdinov. Contrastive learning as goal-conditioned reinforcement learning. In Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=vGQiU5sqUe3.
[49] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
[50] G. An, S. Moon, J.-H. Kim, and H. O. Song. Uncertainty-based offline reinforcement learning with diversified Q-ensemble. In Neural Information Processing Systems, 2021.
[51] N. Lambert, B. Amos, O. Yadan, and R. Calandra. Objective mismatch in model-based reinforcement learning. arXiv preprint arXiv:2002.04523, 2020.
[52] D. Yarats, A. Zhang, I. Kostrikov, B. Amos, J. Pineau, and R. Fergus. Improving sample efficiency in model-free reinforcement learning from images. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 10674–10681, 2021.
[53] A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[54] S. Fujimoto, H. Hoof, and D. Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pages 1587–1596. PMLR, 2018.
[55] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit Q-learning. arXiv preprint arXiv:2110.06169, 2021.
[56] R. S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. SIGART Bull., 2(4):160–163, 1991. doi:10.1145/122344.122377.
[57] N. Srivastava, W. Talbott, M. B. Lopez, S. Zhai, and J. Susskind. Robust robotic control from pixels using contrastive recurrent state-space models. arXiv preprint arXiv:2112.01163, 2021.
[58] V. Indelman, L. Carlone, and F. Dellaert. Towards planning in generalized belief space. In International Symposium on Robotics Research (ISRR), December 2013.
[59] J. Bialkowski, S. Karaman, and E. Frazzoli. Massively parallelizing the RRT and the RRT*. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3513–3518, 2011. doi:10.1109/IROS.2011.6095053.
[60] B. Ichter. Massive Parallelism and Sampling Strategies for Robust and Real-Time Robotic Motion Planning. PhD thesis, Stanford University, 2018.
[61] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000. doi:10.1126/science.290.5500.2319.
URL https://www.science.org/doi/abs/10.1126/science.290.5500.2319.
[62] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012. doi:10.1109/IROS.2012.6386109.
[63] M. Raisi, A. Noohian, L. McCutcheon, and S. Fallah. Value summation: A novel scoring function for MPC-based model-based reinforcement learning. arXiv, abs/2209.08169, 2022.

A Additional material

Code examples and additional material will be uploaded at https://sites.google.com/view/velap-corl/home. All models were implemented in Python using the PyTorch library. The total training time including all models and baselines amounts to approximately 150 hours (wall clock) on a single GPU.

B Hyperparameters and algorithm details

Here we present a description of the hyperparameters of our trained models and planning module.

B.1 Model architectures

Table 2: Hyperparameters of the encoder φ
Parameter     Value
Batch-norm.   yes
Filters       [32, 32, 64, 64]
Kernels       [4, 4, 4, 4]
Strides       [2, 2, 2, 2]
Activation    LeakyReLU
Dense layers  [256, 128, 32]

Table 3: Hyperparameters of the dynamics model h
Parameter     Value
Batch-norm.   yes
Activation    LeakyReLU
Dense layers  [128, 128, 128, 128, 32]

Table 4: Hyperparameters of the action sampler g (β-VAE)
Parameter             Value
Batch-norm.           yes
Activation            LeakyReLU
Latent dimension      16
β (KL weight)         0.01
Encoder dense layers  [128, 128, 128, 2*16]
Decoder dense layers  [128, 128, 128, d_action]

Table 5: Hyperparameters of the policy networks π_l and π_g
Parameter     Value
Batch-norm.   yes
Activation    LeakyReLU
Dense layers  [128, 128, 128, d_action]

Table 6: Hyperparameters of the critic networks Q_l^k and Q^g
Parameter     Value
Batch-norm.   yes
Activation    LeakyReLU
Dense layers  [128, 128, 128, 1]

B.2 Training hyperparameters

Table 7: Model training hyperparameters
Parameter              Value
batch size             64
learning rate          0.0003
c_0                    0.2
c_1 (SpiralMaze)       0.001
c_1 (ObstacleMaze)     0.01
c_1 (metaworld tasks)  0.01
c_2                    0.001
c_3                    0.001
c_3 (expert)           0.5
γ (discount factor)    0.96
d_Z                    32
T (temperature)        1.0
n_ens                  3

B.3 Planner and controller hyperparameters

Table 8: Hyperparameters of the planner
Name            Description                                                                                   Value
n_iter          Number of planner iterations                                                                  250 (500 in ButtonWall)
n_sim           Number of simulation steps during tree expansion                                              5 (10 in SpiralMaze, ButtonWall and DrawerButton)
τ^high_discard  Q-value threshold for discarding a node if too close to existing nodes in the tree            γ²
τ^low_discard   Q-value threshold for discarding a node if too far from the expansion node                    γ^{n_sim}
τ^std_discard   Q-value threshold for discarding a node if the ensemble prediction's std. deviation is high   1.0 − γ
τ_neigh         Q-value threshold to determine neighboring nodes                                              γ³
τ_goal          Q-value threshold to determine goal nodes                                                     γ⁵
d_neigh         Euclidean distance threshold to determine candidate neighbors                                 3 × upper 5-percentile of Eucl. distances between encodings of subsequent states

Table 9: Hyperparameters of the controller
Parameter  Description                                           Value
n_replan   Planning frequency                                    15 (25 in SpiralMaze, ButtonWall)
τ_stop     Q-threshold to stop planning when close to the goal   γ⁵
τ_wp       Q-threshold for switching to the next waypoint        γ³

B.4 Training of policy and value functions

We use TD3-BC [49] as the base offline RL algorithm to train our local and goal policies π_l and π_g, as well as the state-action value functions Q_l^k and Q^g. Within our planning framework, Q_l^k takes an important role as it provides us with a distance proxy. To further improve the accuracy of Q_l^k, we use n_ens Q-networks (instead of the 2 usually used in TD3); a minimal sketch of the resulting ensemble target computation is given below.
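As an illustration of this ensemble-based target, here is a minimal PyTorch-style sketch; the network interfaces, reward convention, and variable names are assumptions for exposition, not the original code.

import torch

def ensemble_q_target(q_ensemble, policy, reward, gamma, z_next, z_goal):
    """Conservative TD target using the minimum over an ensemble of Q-networks.

    q_ensemble: list of n_ens goal-conditioned critics Q(z, a, z_goal).
    reward: sparse binary reward; gamma is set to 0 on terminal (goal) transitions,
    which also truncates bootstrapping (see the relabeling in App. B.4).
    """
    with torch.no_grad():
        a_next = policy(z_next, z_goal)
        q_values = torch.stack([q(z_next, a_next, z_goal) for q in q_ensemble])
        q_min = q_values.min(dim=0).values   # ensemble minimum, as in [50]
        q_std = q_values.std(dim=0)          # reused by the planner's uncertainty test
    return reward + gamma * q_min, q_std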
During the training update of the Q-networks, we determine the Q-target by taking the minimum value among the predictions given by the ensemble of Q-networks (similar to [50]). The ensemble further allows us to filter out unlikely or out-of-distribution transitions generated during the tree expansion. This is done by assessing the minimum predicted ensemble Q-value and the standard deviation among the predicted values (Sec. 4).

Our models π_l and Q_l^k describe a goal-reaching policy and state-action value functions, which require a set of goal-conditioned reaching experiences for training. Since our original dataset D might not describe this particular setting, we can augment it using hindsight goal relabeling. In particular, we create a new dataset D' consisting of transitions (z_t, a_t, r_t, z_{t+1}, z*, γ) ∈ D' by relabeling the values of r_t and γ (γ also indicates the terminal condition, i.e., γ = 0) and adding a goal state z*. We apply a combination of three different relabeling strategies: (a) set the goal z* to be the next state in the relabeled transition and set γ = 0, r_t = 1; (b) sample z* from the set of future states within the same trajectory and set r_t = 0; (c) sample z* from another trajectory in the data and set r_t = 0.

B.5 Training of dynamics model

Our dynamics model h is trained using the InfoNCE contrastive loss introduced in [53]. Given an initial state z_t and a sequence of actions a_{t:t+k−1}, we use h to generate predictions \tilde{z}_{t+i} for i = 1..k. We compute the NCE loss at each step k with positive pairs (\tilde{z}_{t+k}, z_{t+k}) and take negative examples z_j randomly from the batch. We use f = e^{-||z_i - z_j||^2 / T} to compute the similarity between latent encodings. Our overall training loss for h (Eq. 9) takes the average over the contrastive loss terms computed at all k steps. For all experiments, we use k = 3.

\mathcal{L}_h = -\frac{1}{k} \sum_k \mathbb{E}_{\mathcal{D}} \Big[ \log \frac{f(\tilde{z}_{t+k}, z_{t+k})}{\sum_j f(\tilde{z}_{t+k}, z_j)} \Big]    (9)

B.6 Additional details about planning method

Neighbor computation — To determine if a newly sampled node z_new is novel, we check its similarity to existing nodes in the tree by evaluating the state-action value function. Computing the goal-conditioned values with respect to all nodes results in an enormous computational overhead. Yet, we can significantly reduce the amount of computation by first determining a set of candidate neighbors around z_new using the Euclidean metric and a distance threshold d_neigh. In practice, we found it useful to define d_neigh based on the statistics of Euclidean distances between subsequent states in the dataset (see App. B.3).

Batch processing — The method in Alg. 1 describes an iterative schema in which at every expansion step one new node is generated and evaluated. Yet, some steps can be computed in parallel on a GPU in order to speed up planning. For a practical implementation, we therefore suggest parallelizing the tree expansion by sampling multiple expansion nodes at once and generating new nodes by passing batches through the neural network dynamics model. Similarly, we can compute state-action values in batches instead of assessing one new node at a time. For discussions of highly-parallelized implementations of classical RRT-like planners, we refer to [59, 60].

B.7 Additional details about MPC controller

Alg. 2 below outlines the pseudocode of our MPC controller. The function update_waypoint(z_curr, g*) is responsible for determining the subsequent waypoint z_wp that our local policy aims to attain. Specifically, we estimate the value between the current state and the waypoint and switch to the next element in g* if the predicted value surpasses a threshold τ_wp, i.e., Q^{curr,wp}_min > τ_wp (a minimal sketch of this waypoint-switching rule is given below).
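To make the waypoint-switching rule concrete, here is a minimal sketch of update_waypoint; the list-based path representation and argument names are illustrative assumptions.

def update_waypoint(z_curr, path, q_min, tau_wp, wp_index=0):
    """Advance along the planned path g* = [z_curr, z_1, ..., z_n].

    q_min(z_a, z_b): conservative (ensemble-minimum) value for reaching z_b
    from z_a; larger values indicate the states are temporally closer.
    """
    # Skip past every waypoint the agent is already close enough to.
    while wp_index < len(path) - 1 and q_min(z_curr, path[wp_index]) > tau_wp:
        wp_index += 1
    return path[wp_index], wp_index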
As we approach the final goal, indicated by the proximity of our current state z_curr, we stop planning and compute actions based on the policy π_g. To ascertain our proximity to the goal, we compare the predicted value of the global value function Q^g against a predefined threshold τ_goal.

Algorithm 2 MPC controller
Given: s_init, n_replan, n_max_steps, φ, π_l, π_g
z_curr ← φ(s_init)   ▷ Map state to latent encoding
i ← 0
while goal not achieved and i < n_max_steps do
  if not i mod n_replan then   ▷ Replan every n_replan steps
    Build tree T rooted at z_curr for n_iter steps (Alg. 1)
    Determine g* = {z_curr, z_1, .., z_n} given T
  end if
  z_wp ← update_waypoint(z_curr, g*)   ▷ Update waypoint if close enough
  a ← π_l(z_curr, z_wp) (or use π_g within proximity of the overall goal)   ▷ Compute next action
  Execute a and observe the updated state s_curr
  z_curr ← φ(s_curr)
  i ← i + 1
end while

C Evaluation Environments

C.1 Description of block environments

Similar to the evaluation environments in [15], we implement two long-horizon navigation tasks characterized by a comparatively low-dimensional underlying state space. This aspect facilitates a visual examination of the learned latent embeddings via dimensionality reduction methods such as Isomap [61]. In both scenarios, a block-shaped robot's motion is controlled through velocity commands, while its movement remains confined to a two-dimensional plane.

C.1.1 SpiralMaze

In this task, the block agent has to maneuver from the outer edge of a spiral-shaped corridor to the inner area colored in red (Fig. 2a). The episode's duration is capped at 300 steps. During data generation for training, the agent is initialized at collision-free positions within the workspace. Subsequently, random action sequences are executed by sequentially adding Gaussian noise to an initially sampled uniformly random action at the start of each episode. For testing purposes, the agent's position is uniformly sampled from a small area in proximity to the outermost point of the spiral-shaped corridor.

C.1.2 ObstacleMaze

In this setting, the agent is required to move towards the upper wall of the workspace, highlighted in red (Fig. 2b). To successfully accomplish this objective, the agent needs to execute actions that navigate around two obstacles, positioned randomly near the workspace center at the beginning of each new episode. The maximum permissible number of steps in the environment is 100. During testing, the agent's initial configuration is randomly set in close proximity to the wall opposite the goal. The same data collection policy as for the SpiralMaze task was utilized.

C.2 Description of manipulation environments

We have customized several environments of the meta-world robot benchmark proposed by [18]. The underlying physics simulation engine is MuJoCo, as introduced by [62]. To enable visual manipulation, similar to the problems studied in [6], we enable background rendering of RGB images from a static viewpoint. The robot is controlled by commanding desired end effector and gripper opening displacements, resulting in a 4-dimensional action space. While WindowClose and FaucetClose were adapted with small modifications from [18], we evaluate two new environments, ButtonWall and DrawerButton. These new scenarios were purposely designed to investigate tasks with long horizons under sparse rewards.
Importantly, they require the integration ("stitching") of trajectory data originating from different regions of the workspace to determine viable solutions. In our data collection process, we employ a suboptimal policy that predominantly applies random actions (using additive Gaussian noise). Infrequently, this policy selects actions from a pre-defined expert policy. Table 10 provides insight into the number of samples and trajectories in the training data and presents the portion of successful transitions (reward = 1). Across all manipulation tasks, we set the maximum permitted number of environment steps to 150, with the exception of the ButtonWall scenario, where we allow up to 250 steps during the evaluation phase.

C.2.1 WindowClose

In order to accomplish this task, the robotic arm must close a window by shifting a specific handle sideways. We implement environmental variability by randomly determining the x-y location of the window object in each episode. During the data collection stage, we randomly set the position of the end effector above the surface of the table. The sampling of expert actions is restricted to areas close to the goal region (window handle). This approach is intended to guarantee that the strategy employed necessitates "stitching" different trajectories together to reach the objective and complete the task when starting from states that are farther away. To ensure challenging planning situations during testing, we initialize the robot at a significant distance from the target.

C.2.2 FaucetClose

This task is similar to the WindowClose task, but it requires the agent to use its end effector to close a faucet instead. We employ analogous strategies for data gathering and scenario creation as those used in the WindowClose environment.

C.2.3 ButtonWall

In this scenario, the robot's end effector is required to navigate around a wall structure before pressing a button. The location of the wall is randomly set at the beginning of each episode. Furthermore, a height limitation is imposed on the end effector to ensure that the agent takes a more extended path around the wall, as opposed to simply elevating the end effector. The dataset was produced by placing the agent either in front of the wall, near the button, or far behind the wall. However, expert samples in the dataset only exist for scenarios starting closer to the goal. For testing purposes, the end effector is sampled within a restricted region behind the wall.

C.2.4 DrawerButton

In this scenario, the agent is tasked to first close a drawer using its end effector and subsequently press a button. To train the agent, we develop a dataset by separately collecting trajectories for each subtask. This approach necessitates the use of a method capable of combining different trajectories in the data to devise a solution that achieves the overall task goal.

C.3 Composition of training dataset

The table below presents the composition of our training datasets. Each context refers to a new environment initialization (excl. agent), such as the position of obstacles.
Table 10: Composition of training datasets for each environment
Environment   Num. contexts  Traj. per context  Max. traj. length  Successful transitions
SpiralMaze    1              1000               20                 0.12 %
ObstacleMaze  250            20                 20                 0.11 %
WindowClose   200            10                 50                 0.48 %
ButtonWall    200            10                 50                 0.16 %
FaucetClose   200            10                 50                 0.31 %
DrawerButton  150            20                 50                 0.16 %

D Baseline methods

To enable a fair comparison between different methods, we use the same underlying representation/encoder φ and dynamics model h in the evaluation of all baselines. For an assessment of the quality and impact of our representation learner, please see the experimental ablation study in App. E.2.

D.1 BC and BC (D*)

Simple behavioral cloning baselines for which we use the same network architecture as our policy networks (see Table 5), trained using a mean-squared error objective on the predicted actions. For D* we train only on the subset of successful episodes in the dataset. For each method, we train for 3·10⁵ iterations using a learning rate of 3·10⁻⁴ and batches of size 128.

D.2 TD3-BC [49]

This baseline resembles the underlying global policy π_g used in VELAP. It allows us to assess how well pure offline RL performs without any additional planning.

D.3 IQL [55]

This method presents a state-of-the-art model-free offline RL baseline which utilizes expectile regression to estimate state-conditional expectiles of the target values, in order to avoid querying values of out-of-distribution actions during training. We train IQL for 3·10⁵ iterations with a learning rate of 3·10⁻⁴, batches of size 256, and hyperparameters τ = 0.7 and β = 3.0.

D.4 MPPI

We implement a trajectory optimization baseline similar to the model-based planning algorithm introduced in [7]. The method in [7] presents an adaptation of MPPI specifically for the online reinforcement learning setting which optimizes the expected return of sampled trajectories. To estimate the return, a learned model is used to predict the reward at each trajectory node, while a learned Q-function predicts the future return beyond the specified planning horizon. Since rewards in our evaluation environments are sparse, the predicted rewards carry little guidance for the trajectory optimization, as most states have 0 reward. Therefore, we adapt the objective in [7] and instead use the accumulated sum of state-action values as the optimization criterion. This type of scoring function in model-based RL has recently been discussed in [63]. To implement this baseline, we utilize the Q-function of TD3-BC. For all environments, we use 1000 samples per iteration, a planning horizon of 50, elite size 64, and 5 iterations. Replanning is done every 5 environment steps.

D.5 MBOP [36]

MBOP presents an adaptation of MPPI which was particularly designed for the offline RL setting. It generates new candidate trajectories by adding a small amount of Gaussian noise to the actions predicted by a behavioral-cloned policy. To evaluate the quality of the rollouts, it uses a truncated value function trained on the offline data. Due to the sparse nature of rewards in our experiments, we found that both the behavioral-cloned policies and the truncated value function were insufficient to generate farsighted behaviors that solve our tasks. To accommodate the long planning horizons, we instead sample actions from our TD3-BC policy and use the corresponding Q-values to assess candidate trajectories during the optimization (similar to our MPPI baseline); a minimal sketch of this Q-value-based trajectory scoring is given below.
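The following sketch shows the Q-value-based trajectory scoring that replaces the (uninformative) sparse reward predictions when ranking sampled action sequences; the callables and array shapes are assumptions for illustration, not the original implementation.

import numpy as np

def score_trajectories(z0, action_seqs, dynamics, q_value):
    """Score sampled action sequences by the accumulated state-action values.

    action_seqs: array of shape (num_samples, horizon, action_dim).
    dynamics(z, a): learned latent transition model h.
    q_value(z, a): Q-function of the TD3-BC policy.
    Returns one score per sampled trajectory (higher is better).
    """
    scores = np.zeros(len(action_seqs))
    for n, actions in enumerate(action_seqs):
        z = z0
        for a in actions:
            scores[n] += q_value(z, a)   # sum of Q-values instead of predicted rewards
            z = dynamics(z, a)
    return scores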
For all environments, we use 1000 samples per iteration, a planning horizon of 50, elite size 64, 5 iterations, and β = 0.7. Replanning is done every 5 environment steps.

D.6 IRIS [37]

D.7 IRIS (multi-step)

We evaluate an extension of IRIS in which we use the state prediction model (conditional VAE) to generate multi-step rollouts of suitable subgoals. This strategy increases the exploration horizon and allows choosing the best subgoal from a larger and potentially more diverse set of states. This planning strategy can also be seen as random shooting of coarse subgoal sequences. In all experiments, we generate 256 different trajectories using rollouts of length 5 and a conditional generative model to predict states for a horizon of 5. In our evaluation, we found that this method sometimes performs worse than IRIS. We attribute this to the fact that the global policy does not align with the capabilities of the local one, which occasionally results in the selection of subgoal states that might not be attainable.

E Supplementary Experiments and Analysis

E.1 Physical hardware experiments

The physical evaluation was performed using the WidowX 200 low-cost robot platform. For the real-world validation of our method, we collected 200 episodes of data for the sponge (∼15000 samples) and 150 episodes of data for the rope manipulation (∼15000 samples) tasks. Training data was generated by operating the robot through a gamepad and took less than 1 hour per task. The collected dataset consists largely of suboptimal and entirely random trajectories. Successful transitions (positive reward + episode termination) were labeled manually during the data collection. Similar to our simulated datasets, we collect trajectories in such a way that successful episodes always start within the vicinity of the goal. Conversely, during testing, we deliberately position the agent distant from the goal region. Consequently, this configuration emphasizes the need for an approach capable of internally assessing the connectivity between distinct trajectory segments within the data. To construct states, we combine three sequential images captured by a stationary camera. The results of a comparison with BC, BC (D*) and IRIS are presented in Table 11.

Table 11: Results of physical robot experiments (successful episodes)
Environment  BC    BC (D*)  IRIS  VELAP
Sponge       5/20  6/20     6/20  14/20
Rope         0/20  0/20     2/20  8/20

Sponge task — In this setting, the robot needs to push a sponge object onto a marked goal region (Fig. 4, left). To increase the difficulty of this task, we initialize the robot end effector between the goal region and the sponge. Consequently, the robot must first maneuver itself behind the sponge before it can proceed to push the sponge towards the goal. During both training and testing, the initial poses of the end effector and sponge object were randomized.

Rope task — In this setting, the robot's end effector grasps a green rope, requiring the robot to maneuver the rope around a central pole in the workspace before ultimately placing the held end within a designated goal area (as shown in Fig. 4, right). To elevate the complexity of this scenario, the agent is required to first unwind the rope from the pole before moving towards the goal region.
To introduce a challenging long-horizon aspect, we initialize the end effector in close proximity to the goal region while the rope is partially wrapped around the pole.

E.2 Ablating the impact of the learned representation

Figure 5: Impact of the type of representation on the performance of our method. (Bar chart of success rates (%) on SpiralMaze and ObstacleMaze for our representation, CPC, and VAE+Dyn.)

E.3 Influence of the dynamics loss

Figure 6: Impact of the type of dynamics loss on the performance of our method. (Bar chart of success rates (%) on SpiralMaze and ObstacleMaze for the contrastive and mean-squared error dynamics losses.)

E.4 Computation time

Here, we provide an assessment of the computation time needed and the resulting success rates achieved by our approach in comparison to MPPI. Both algorithms were tested on hardware featuring an NVIDIA GeForce RTX 3090 graphics card. Despite utilizing GPU computation and implementing both methods in PyTorch, we did not specifically tune either for computational speed. It is worth mentioning that MPPI recalculates its plans every 5 steps within the environment, whereas our method follows a 25-step interval in SpiralMaze and a 15-step interval in ObstacleMaze.

Figure 7: Relationship between average episode success rates and single planning query runtime (test scenarios) for different planning hyperparameters on the SpiralMaze environment (VELAP with n_iter ∈ {50, 100, 250, 500}). For MPPI, we report results for varying (iterations, horizon, number of samples): (5, 50, 1000), (10, 50, 1000), (10, 100, 2000), (25, 100, 5000).

Figure 8: Relationship between average episode success rates and single planning query runtime (test scenarios) for different planning hyperparameters on the ObstacleMaze environment (VELAP with n_iter ∈ {50, 100, 250, 500}). For MPPI, we report results for varying (iterations, horizon, number of samples): (5, 50, 1000), (10, 50, 1000), (10, 100, 2000), (25, 100, 5000).
nNsZxc2cmO | FindThis: Language-Driven Object Disambiguation in Indoor Environments

Arjun Majumdar²* Fei Xia¹ Brian Ichter¹ Dhruv Batra² Leonidas Guibas¹,³
¹Google DeepMind ²Georgia Institute of Technology ³Stanford University
*Work done while at Robotics at Google.

Abstract: Natural language is naturally ambiguous. In this work, we consider interactions between a user and a mobile service robot tasked with locating a desired object, specified by a language utterance. We present a task, FindThis, which addresses the problem of how to disambiguate and locate a particular object instance through a dialog with the user. To approach this problem we propose an algorithm, GoFind, which exploits visual attributes of the object that may be intrinsic (e.g., color, shape) or extrinsic (e.g., location, relationships to other entities), expressed in an open vocabulary. GoFind leverages the visual common sense learned by large language models to enable fine-grained object localization and attribute differentiation in a zero-shot manner. We also provide a new visio-linguistic dataset, 3D Objects in Context (3DOC), for evaluating agents on this task, consisting of Google Scanned Objects placed in Habitat-Matterport 3D scenes. Finally, we validate our approach on a real robot operating in an unstructured physical office environment using complex fine-grained language instructions.

Figure 1: We propose FindThis: an object navigation task in which an agent engages in multiple rounds of open-vocabulary, natural language interaction to locate a specific object in a complex environment. We present GoFind, an approach to parse the instruction and robustly find objects by identifying attributes and removing distractors. In this example, though the robot initially finds a laptop, the user is able to provide additional attributes ('stickers on it') to identify the correct object. (The figure shows a two-round dialog: Round 1, "Find my laptop please", parsed as foreground 'laptop' with distractors ['tablet', 'phone', 'computer', 'book', 'paper', 'folder', 'notebook', 'binder', 'desk', 'table', 'counter'] and no attribute; Round 2, "No, mine has a sticker on it.", parsed as foreground 'laptop' with distractors ['tablet', 'phone', 'computer', 'book', 'paper', 'folder'] and attribute 'sticker'.)

1 Introduction

A long-standing goal in robotics is to develop agents that can follow natural language instructions to find objects in the real world. From the onset, researchers have envisioned systems capable of identifying specific objects through a free-flowing dialog – allowing users to specify and clarify instructions (e.g., SHRDLU [1]). Recently, progress has been made on sub-tasks within this grand challenge. For example, one line of research has centered on grounding referential language (e.g., 'Find the lamp next to the sofa.') in fully-observable 2D images [2] or 3D scenes [3, 4, 5]. Complementary work has focused on developing agents that can visually navigate (i.e., take actions) through partially-observable 3D environments by following step-by-step navigation instructions (e.g., 'Exit the bedroom. Walk down the hall. Stop next to the bathroom sink.') [6, 7, 8]. Furthermore, researchers have developed agents that explore 3D environments to find objects given concise instructions that specify objects by category labels (e.g., 'Find a table', 'Find a bed', 'Find a chair', etc.)
[9, 10].

Given the tremendous progress in each of these domains, we believe it is time to ask: what type of user interaction should ultimately be supported by robots tasked with finding objects? Towards answering this question we identify three key desiderata:

1) Specificity. Users should be able to request a specific instance of an object. For example, when asking for a 'laptop' we often would like our 'laptop' and not one belonging to someone else. Thus, as illustrated in Figure 1, users might specify distinguishing features such as 'the laptop with stickers on it' or 'the laptop on the bed'. This requires systems to differentiate intrinsic attributes (e.g., color, patterns, shape) and/or extrinsic properties (e.g., spatial location, relationship to other entities).

2) Open-vocabulary. The set of objects should not be restricted – users should be able to request any object they require. Thus, systems must support an open vocabulary [11, 12, 13].

3) Interactive. When providing instructions, users may make ambiguous or even mistaken requests. For example, a user might request an 'orange mug' when multiple mugs of that color exist. In such cases, the request is underspecified and further interaction is required to provide additional details or corrections.

To evaluate agents under these real-world conditions, we introduce a task and dataset. As shown in Figure 1, our task (FindThis) requires following natural language instructions (e.g., 'Find my laptop.') to localize a specific object in an indoor 3D environment such as a home or office. If the agent fails by finding an incorrect object (as in Figure 1), the task continues with the user providing further details about the desired object (e.g., 'No, mine has a sticker on it.'). In other words, FindThis agents must engage with users in a multi-round interaction, interpreting open-vocabulary, natural language instructions to localize a desired object.

In addition, we present the 3D Objects in Context (3DOC) dataset, which is designed for large-scale evaluation of FindThis agents in a zero-shot manner – i.e., the 3DOC dataset does not include a training split. The dataset consists of language instructions generated from dialog templates, paired with scenes constructed using a diverse set of 3D objects from the Google Scanned Objects (GSO) dataset [14] placed in 3D scanned indoor environments from the Habitat-Matterport 3D (HM3D) dataset [15].

To address the FindThis task, we propose a novel zero-shot approach (GoFind) to handle the challenges of multi-round interaction for fine-grained object localization. In our approach, we build an open-vocabulary 3D scene representation using vision-and-language models (e.g., ViLD [16], OWL-ViT [17], CLIP [18]). Then, we query the representation with the help of a large language model (e.g., GPT-3 [19] or PaLM [20]), which identifies object attributes mentioned in the dialog and uses visual common sense (i.e., knowledge about visual similarity) to propose background objects that the embodied agent (i.e., robot) should ignore.

Finally, we validate our approach both on 3DOC and on a real robot operating in a real-world office environment using complex fine-grained instructions such as 'Find a bowl full of cereal' or 'Find an upside down mug'. We find that our proposed approach generalizes well to the real-world setting, performing as well as or better than in simulation.

2 Related Work

2D and 3D Referring Expression Tasks.
Aligning referential language (e.g., 'the mug on the table') with image or spatial regions has been studied in both 2D [2] and 3D [3, 4, 5] settings. A common framing of this task is a game between two players in which both players fully observe a scene, then one describes an object that the second player must identify [2]. The 3D versions of this game (e.g., ReferIt3D [3], ScanRefer [4], Refer360 [5], REVERIE [21], SOON [22]) require fine-grained disambiguation of objects based on intrinsic and extrinsic attributes. Our proposed task (FindThis) expands on this prior work by lifting this problem into a real-world setting in which target objects are described through a multi-round dialog with a user, and an embodied agent must explore 3D environments, observing objects at different distances and viewing angles and ignoring distractor objects that share some but not all properties with the target.

Language-Conditioned Visual Navigation Tasks. Language-conditioned navigation in 3D environments is often studied through instruction following [23, 24, 8, 25, 26, 27] or the object-goal navigation (ObjectNav) task [9, 10]. This work falls under ObjectNav, where prior work has focused on a setting in which objects are only described by a category label (e.g., 'sink', 'table', etc.) drawn from a closed set of categories, and finding any instance of an object with that label is defined as success [10, 28, 29, 30, 31, 32, 33]. By contrast, we study an alternative setting in which the user prefers a specific instance of an object (e.g., 'the orange mug') drawn from an open set of categories that are not predetermined. Recent work has studied a similar open-vocabulary setting [11, 12, 13]. Closely related, [12] propose an open-vocabulary object navigation task coined language-driven zero-shot object navigation (L-ZSON). The primary difference between L-ZSON and the task proposed in this work (FindThis) is that we consider a multi-round dialog, which is often required for disambiguating similar object instances.

Figure 2: Examples from the 3DOC dataset. 3DOC is designed to benchmark the ability of agents to parse multi-round open-vocabulary dialog to identify objects in a 3D scene. Compared with other object-in-scene datasets, 3DOC is designed to include more challenging within-class and out-of-class distractors and includes more diversity in objects and scenes. (Panels show an example object collection with a target, within-class distractors, and out-of-class distractors; multi-round dialogs for intrinsic attributes, e.g., "R1: Find the striped towel. R2: No, my towel is also red.", and extrinsic attributes, e.g., "R1: Find the towel on the chair. R2: No, my towel is striped."; and object placements across different environments.)

3D Semantic Mapping. Numerous methods have been proposed for building semantic maps of 3D environments. One line of work adds semantic features to traditional SLAM methods. Alternatively, several works propose using object detectors or semantic segmentation techniques to build semantic maps. A key limitation of these methods is that the detectors and segmentors are pre-trained on a fixed set of object categories.
This makes such methods inapplicable to the open-vocabulary setting studied in this work. Recent methods have addressed this challenge using recently proposed open-vocabulary vision-and-language models (e.g., CLIP [18], ALIGN [34], BASIC [35]) and object detectors and segmentors (e.g., ViLD [16], MDETR [36], OWL-ViT [17], LSeg [37]). Specifically, NLMap [38] uses a ViLD detector and CLIP features to represent 3D environments. VLMaps [39] leverages LSeg [37], a model that projects 2D images into the CLIP representation space at the pixel level, for mapping. And the best version of CoW [12] uses the OWL-ViT detector. In all of these methods, objects are found using queries generated by a pre-trained text encoder. By contrast, in this work we consider more complex query mechanisms required to address the unique challenges in the FindThis task. Furthermore, we leverage a large language model (LLM) to provide common sense visual knowledge and decompose language instructions, improving object localization performance.

3 The FindThis Task and the 3DOC Dataset

Our goal is to design a task in which a user interacts with an embodied agent using natural language with an open vocabulary to describe a specific object in a 3D scene for the agent to find. In this section, we define such a task (Section 3.1) and evaluation metrics (Section 3.2), and present a dataset (Section 3.3) designed for large-scale evaluations of embodied agents on this task in a zero-shot manner (Figure 2).

3.1 Task Definition

In FindThis, for each episode i, an embodied agent (a mobile robot) must localize a target object T_i (e.g., 'a mug') in a 3D environment E_i through a multi-round dialog D_i in which a user describes the object. Specifically, the agent is initialized at a starting location s_0 in the environment E_i, and must explore to find the target object T_i. In other words, given descriptions of an object, the agent must 'go find this'.

By nature, language instructions can be ambiguous and underspecify the task. For example, a user might ask for their specific 'mug' when multiple mugs are present in the scene. In such cases, users must provide additional information to resolve ambiguities. Specifically, the additional information may describe intrinsic attributes of the object (e.g., 'Find my orange mug') or extrinsic attributes such as the spatial relationship with another object in the scene (e.g., 'Find the mug next to the sink'). A FindThis dialog may include both types of disambiguating information.

As illustrated in Figure 1, we consider multimodal dialog in which users provide natural language instructions such as 'Find my laptop please' or 'No, mine has a sticker on it', and the agent responds with an image of a candidate object C_j (e.g., an image of a laptop). Formally, a sequence of these user instructions I_j and agent responses C_j compose an M-round dialog D_i = [I_1, C_1, ..., I_M, C_M]. In this work, we consider M ∈ {1, 2}. An episode is considered successful if the candidate object matches the target (i.e., T_i is in C_j).

3.2 Evaluation Metrics

We use two variations of success rate (SR) to measure performance: Top-1 SR and Top-5 SR. Top-1 SR (a standard variant) corresponds to the percentage of episodes in which the agent presents the user with the correct candidate object (i.e., T_i is in C_j) within M rounds of dialog. Top-5 SR considers a setting in which the agent can present five candidates in each round. If the target object is within the set of candidates, the episode is considered a success in terms of Top-5 SR.
3.3 Evaluation Dataset

This section describes the 3D Objects in Context (3DOC) dataset, designed for evaluating agents on the FindThis task. 3DOC is designed for zero-shot evaluation, so training episodes are not provided. We instantiate the task in the Habitat [40, 41] simulator by placing 3D scanned objects from the Google Scanned Objects (GSO) dataset [14] into photorealistic 3D environments from the Habitat-Matterport 3D (HM3D) dataset [15]. Examples are shown in Figure 2 and statistics are summarized in Table 2 in Appendix B.

Scene Construction. We create scenes by procedurally dropping the 3D objects on surfaces within the HM3D [15] environments. Specifically, for each object we annotate whether the object typically appears on elevated surfaces (e.g., a table, desk, counter, etc.), on the floor, or on both surface types. Then, we place the object above an appropriate surface location, drop the object using the simulator's physics engine, and verify that the object landed stably in close proximity (i.e., within 0.05m) of the desired surface location. This procedure ensures that objects are placed in semantically reasonable locations that might represent a messy house (e.g., with toys randomly placed on the floor). A similar approach for determining reasonable object placements was used to construct an evaluation dataset for an open-vocabulary object navigation task in [42].

A key challenge in FindThis is disambiguating distractor objects that are similar to the target object in some way. To include such scenarios in 3DOC, we annotate the object category and intrinsic attributes for 72 objects from the GSO dataset [14]. We use these annotations to curate 50 object collections composed of two different types of distractors. Specifically, we sample up to 4 objects from a target object category with different intrinsic attributes, and up to 4 objects from other categories that may share intrinsic attributes with the target objects. Thus, for a given target object, agents must disambiguate both within-class and out-of-class distractors to solve a FindThis episode. An example object collection is shown in Figure 2. Finally, we place objects from each of the 50 object collections into 10 different HM3D [15] validation environments, which results in 500 unique scene layouts in 100 different HM3D environments.

Intrinsic Attribute Instructions. Given a target object from an object collection, we create an initial intrinsic attribute instruction using the template: Find the <attr_1> <object>, where <attr_1> corresponds to the most prominent intrinsic attribute of an object (e.g., its color or pattern) and <object> is a name for the object (e.g., 'mug' or 'coffee mug'). In subsequent rounds of interaction, a user response is synthesized using secondary intrinsic attributes (<attr_2>) with the template: No, my <object> is also <attr_2>. In total, we generate 1,713 unique episodes for 10 different object categories and 53 unique attribute mentions to form the intrinsic attribute split of the 3DOC dataset.

Extrinsic Attribute Instructions. Initial extrinsic attribute instructions are generated with the template: Find the <object> <relation> <ref>, where <object> is the target, <ref> is an object in proximity of the target, and <relation> is drawn from the set {on the, in the, next to the, near the}. An example extrinsic attribute instruction is shown in Figure 2. In multi-round dialog for extrinsic attribute episodes, subsequent rounds use intrinsic attributes for further differentiation with the template: No, my <object> is <attr>, where <attr> is an attribute of the target. In total, the extrinsic attribute split of 3DOC contains 50 unique episodes. Episode synthesis from these templates is illustrated in the sketch below.
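As a concrete illustration, instruction synthesis from the two template families could look like the following sketch (the helper names and data layout are hypothetical, not the released dataset code):

RELATIONS = ["on the", "in the", "next to the", "near the"]

def intrinsic_episode(obj, attr1, attr2):
    # Intrinsic first round; a secondary attribute in the follow-up round.
    return [f"Find the {attr1} {obj}.",
            f"No, my {obj} is also {attr2}."]

def extrinsic_episode(obj, relation, ref, attr):
    # Extrinsic first round; an intrinsic attribute in the follow-up round.
    assert relation in RELATIONS
    return [f"Find the {obj} {relation} {ref}.",
            f"No, my {obj} is {attr}."]

print(intrinsic_episode("towel", "striped", "red"))
# ['Find the striped towel.', 'No, my towel is also red.']
print(extrinsic_episode("towel", "on the", "chair", "striped"))
# ['Find the towel on the chair.', 'No, my towel is striped.']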
Note, instruction templates are only used for generating a large-scale evaluation dataset, while our approach can handle open-vocabulary user interactions, as demonstrated in real-world experiments (Section 5.3).

Figure 3: Overview of the GoFind algorithm. First, we use frontier exploration and class-agnostic object detection to obtain RoIs. CLIP [18] and ViLD [16] features are extracted from each RoI to establish an open-vocabulary 3D scene representation. Given a natural language query, we use an LLM to parse the target ('mug'), propose distractors (e.g., 'plate', 'bowl', etc.), and call functions from the localization API (operating on the open-vocabulary 3D scene representation) to locate the object.

4 Approach

This section describes our approach (Figure 3) for multi-round, fine-grained object localization. In our approach, an agent first explores new environments to build a 3D semantic representation of the scene (Section 4.1). Then, an LLM with access to a fine-grained object localization API (Section 4.2) parses user interactions to propose objects to the user in a multi-round dialog (Section 4.3).

4.1 Open-Vocabulary 3D Scene Representation

As illustrated in Figure 3, our agent first explores the environment using frontier exploration [43]. At each timestep t, RGB observations are processed with an open-vocabulary object detector (ViLD [16] or OWL-ViT [17]) to produce regions-of-interest (RoIs). For each RoI, we calculate a 3D bounding box by projecting pixel locations corresponding to the object's predicted 2D segmentation mask (ViLD) or 2D bounding box (OWL-ViT) into 3D space using the depth observation and camera matrices R. Then, we use the minimum and maximum projected 3D locations to define an RoI's 3D bounding box. For each RoI, we compute a CLIP representation using the CLIP visual encoder CLIP_v, which is used in addition to the RoI features produced by the object detector. As a result, the 3D environment is represented as a list of RoIs, where each RoI is represented by a 3D bounding box and two types of semantic features (CLIP plus ViLD or OWL-ViT). Note that because the object detector is run independently at each timestep, multiple RoIs might correspond to different views of the same object. A minimal sketch of the projection step follows.
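As a rough illustration of the projection step (a simplified sketch assuming a pinhole camera with intrinsics K and a camera-to-world extrinsic matrix; the function and variable names are ours):

import numpy as np

def roi_bbox_3d(mask, depth, K, cam_to_world):
    # Axis-aligned 3D bounding box for one RoI.
    # mask: HxW boolean array (predicted 2D mask, or the box region).
    # depth: HxW depth image in meters.
    # K: 3x3 camera intrinsics; cam_to_world: 4x4 extrinsics.
    v, u = np.nonzero(mask)                 # pixel rows/cols inside the RoI
    z = depth[v, u]
    valid = z > 0                           # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    # Deproject pixels to camera-frame 3D points.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)   # 4xN
    pts_world = (cam_to_world @ pts_cam)[:3].T               # Nx3
    # The min/max corners define the axis-aligned box.
    return pts_world.min(axis=0), pts_world.max(axis=0)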
4.2 Fine-Grained Object Localization API

To perform fine-grained object localization on a 3D semantic scene representation, we need a mechanism to answer three basic questions: (1) Does object X exist? (2) Does object X have the intrinsic attribute Y? and (3) Does object X have an extrinsic relationship with another object Z? In this work, we design simple API calls that operate on the open-vocabulary scene representation (Section 4.1) to answer each question. Specifically, we frame the questions as a ranking problem where the goal is to sort the list of RoIs. Then, we design algorithms to answer each question that assign each RoI a score between 0 and 1. In Section 4.3, we demonstrate how an LLM can compose these API calls in response to multi-round FindThis user queries. A sketch of the full API is given after this list.

1. find_objects(obj, distractors): Given an object name obj (e.g., 'mug'), our goal is to filter out RoIs that do not contain an instance of the requested object, and then sort the remaining RoIs based on a confidence score. Open-vocabulary object detectors (e.g., ViLD [16] and OWL-ViT [17]) score RoIs by encoding object names obj with a text encoder (CLIP_t) and calculating a similarity score with the visual features for each RoI. These scores can be used to sort the RoIs and filter out ones with scores below a threshold t. In this work, we dynamically calculate t for each RoI based on the similarity score with a set of background categories: if an RoI is more similar to a background category than to the desired object obj, it is discarded. We generate background categories on-the-fly by asking an LLM to answer the question: what objects look similar to the desired foreground object obj?

2. sort_objects(attribute): When an intrinsic attribute attr is provided, the list of RoIs must be sorted according to both the object category obj and the attribute attr to return an object that meets both criteria. To facilitate this, we propose using attribute delta vectors to generate scores for intrinsic attribute queries. Specifically, we define the delta vector as the difference in the CLIP text embedding space between the attribute-conditioned object category (attr + obj) and the generic object category (obj):

Δ(attr, obj) = CLIP_t(attr + obj) − CLIP_t(obj).    (1)

Then, we use the magnitude of the projection of the visual features for each RoI onto the delta vector as an attribute score.
Finally, we add the attribute score to the original object score produced by the find_objects() method to calculate a combined score for each RoI.

3. is_close(obj1, obj2): Given a language query expressing an extrinsic relationship between objects (e.g., 'Find my mug next to the sink'), a minimum requirement is that the two objects ('mug' and 'sink') are in close proximity. We operationalize this requirement by comparing the minimum distance between a pair of 3D bounding boxes with a fixed threshold (0.5m in our experiments). Future work may explore additional API calls that capture more refined spatial relationships such as 'left of' or 'behind'.
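As a minimal sketch of these three calls (assuming each RoI carries a precomputed, L2-normalized visual feature and a world-frame 3D box from Section 4.1, and that text embeddings for the target and LLM-proposed background categories are computed upstream; the RoI container and argument layout are our own illustrative choices, not the released code):

import numpy as np
from dataclasses import dataclass

@dataclass
class RoI:
    feat: np.ndarray      # L2-normalized visual feature (detector or CLIP)
    box_min: np.ndarray   # min corner of the 3D bounding box (world frame)
    box_max: np.ndarray   # max corner of the 3D bounding box (world frame)
    score: float = 0.0    # object similarity score, set by find_objects

def find_objects(rois, obj_emb, background_embs):
    # Keep RoIs that are more similar to the target text embedding than
    # to any background category, sorted by similarity score.
    kept = []
    for roi in rois:
        roi.score = float(roi.feat @ obj_emb)
        if roi.score > max(float(roi.feat @ b) for b in background_embs):
            kept.append(roi)
    return sorted(kept, key=lambda r: -r.score)

def sort_objects(rois, delta):
    # Re-rank by the object score plus the projection of each RoI's
    # visual feature onto the attribute delta vector of Eq. (1).
    return sorted(rois, key=lambda r: -(r.score + float(r.feat @ delta)))

def is_close(rois_a, rois_b, thresh=0.5):
    # Keep RoIs from the first list whose box lies within `thresh` meters
    # (minimum box-to-box distance) of some RoI in the second list.
    def box_dist(r1, r2):
        gap = np.maximum(0.0, np.maximum(r1.box_min - r2.box_max,
                                         r2.box_min - r1.box_max))
        return float(np.linalg.norm(gap))
    return [a for a in rois_a if any(box_dist(a, b) <= thresh for b in rois_b)]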
4.3 Instruction Parsing

A key challenge in the FindThis task is parsing the multi-round dialog with a user. While 3DOC uses templated language, a FindThis system should be able to handle arbitrary inputs, reflecting the open-vocabulary motivation for the task. As shown in Figure 3, in this work we use an LLM with few-shot prompting to translate multi-round user interactions into a go_find_it() function that uses methods from the localization API (Section 4.2) to return a candidate object C_j to the user. Specifically, we use a small number of example dialog-to-function translations in a few-shot prompt to enable this capability. Further details, including the prompt used in our experiments, are presented in Appendix A.

5 Experiments

In this section, we evaluate our approach both in simulation using our 3DOC dataset (Section 5.2) and on a robot in a real-world kitchen in an office environment (Section 5.3).

5.1 Experimental Setup

Occupancy Map. Our experiments assume that agents are provided with a top-down 2D occupancy map that indicates navigable regions in the environment (Figure 3). In simulation, the occupancy map is directly generated by the Habitat simulator. However, in our real-world experiments only a coarse occupancy map (similar to an empty floor plan) is provided. Specifically, the real-world occupancy map only delineates walls and other immovable structures (e.g., a kitchen island) but does not include movable objects such as tables or chairs. The robot uses RGB-D and lidar sensors to update the occupancy map in real time, which allows navigating to waypoints without collisions.

Exploration Strategy. For all experiments, all agents (including baselines) use the occupancy map to pre-explore the environment using frontier exploration [43]. Specifically, at each timestep, visible regions within a fixed radius of the agent are marked as 'explored.' Then, the agent selects the nearest waypoint along the boundary (or frontier) of the explored area to navigate to next. The agent updates the explored regions along the way and stops exploring when all of the navigable regions have been marked 'explored.'

Baselines. We implement variations of our method to compare with two state-of-the-art methods from prior work: NLMap [38] and CoW [12]. Both methods use frontier exploration [43], an open-vocabulary object detector (e.g., ViLD [16] or OWL-ViT [17]) to build a scene representation, and a frozen text encoder to query the representation.

• NLMap [38] represents the scene as a list of RoIs produced by ViLD's class-agnostic region proposal network. Each RoI is represented with positional information (i.e., object location and size) and visual features from ViLD [16] and CLIP [18]. A frozen CLIP text encoder is used to query the representation.

• CoW [12] uses the OWL-ViT [17] detector to process images at each timestep during exploration. If the similarity score between a detected region and the query text exceeds a threshold, the CoW agent navigates to the 3D location corresponding to that RoI.

These methods, as well as other prior works (e.g., VLMaps [39]), use a simple mechanism to query scene representations: cosine similarity between image and text features from a vision-and-language model. Specifically, language queries (e.g., 'Find the orange mug.') are fed directly to a CLIP text encoder, potentially after some basic parsing such as removing the phrase 'Find the'. By contrast, our approach uses an LLM to parse instructions into parts (e.g., 'orange' and 'mug') that are used to query for objects. To compare with these works, we implement two variations of our method: 1) no parsing, where the full multi-round dialog is used as a text query, and 2) basic parsing, where the dialog is converted into a concise attribute-object phrase (e.g., 'orange mug') or referring expression (e.g., 'mug next to the sink'). Furthermore, we experiment with creating scene representations using three different feature sets: ViLD [16] features, ViLD [16] and CLIP [18] features (à la NLMap [38]), and OWL-ViT features (à la CoW [12]).

Implementation Details. We conduct experiments in simulation on the 1,763 total (1,713 intrinsic and 50 extrinsic) evaluation episodes in the 3DOC dataset. We use the code-davinci-002 version of GPT-3 [19] for experiments in simulation and a 540B-parameter PaLM model [20] for real-world experiments.

Agent Embodiment. We simulate an agent with an embodiment similar to a mobile manipulation robot produced by Everyday Robots (shown in Figure 1). Specifically, the agent has a height of 1.25m, with an RGB-D camera with a 90° horizontal field-of-view (FOV) and a sensor resolution of 640×480. The camera is placed at the top of the agent. We assume the agent has access to an occupancy map indicating navigable areas in the environment. In real-world settings, such occupancy maps can be pre-collected and then updated on-the-fly using depth and/or lidar sensors. We simulate a discrete action space in which the agent can move forward 0.25m and turn left or right by 10°, where the turn actions are in-place rotations of the agent's base.

5.2 Results on 3DOC

                                   M=1 rounds of interaction           M=2 rounds of interaction
                                intrinsic  extrinsic  average     intr.+intr. extr.+intr.  average
#  method                       top1 top5  top1 top5  top1 top5   top1 top5   top1 top5   top1 top5
1  ViLD (no parsing) [38]       16.7 28.8  10.5 26.3  13.6 27.6   23.9 37.6   28.9 44.7   26.4 41.2
2  ViLD + basic parsing         23.8 37.3  18.4 26.3  21.1 31.8   29.6 43.9   23.7 44.7   26.6 44.3
3  ViLD + ours                  27.8 40.7  21.1 31.6  24.5 36.2   35.0 47.2   31.6 39.5   33.3 43.4
4  ViLD+CLIP (no parsing) [38]  33.4 44.3   7.9 21.1  20.6 32.7   41.2 50.4   36.8 52.6   39.0 51.5
5  ViLD+CLIP + basic parsing    38.7 48.5  13.2 28.9  26.0 38.7   45.0 53.3   31.6 42.1   38.3 47.7
6  ViLD+CLIP + ours             40.3 48.6  47.4 55.3  43.8 52.0   46.4 52.7   71.1 73.7   58.8 63.2
7  OWL-ViT (no parsing) [12]    27.6 38.3  21.1 34.2  24.4 36.3   32.2 44.2   26.3 44.7   29.2 44.5
8  OWL-ViT + basic parsing      29.2 41.3  18.4 28.9  23.8 35.1   35.0 46.2   36.8 60.5   35.9 53.4
9  OWL-ViT + ours               30.2 42.2  36.8 44.7  33.5 43.5   34.8 46.4   57.9 63.2   46.4 54.8

Table 1: Main Results. We show results on the 3DOC benchmark by applying our proposed approach on top of three different semantic scene representations: ViLD, ViLD+CLIP, and OWL-ViT. We find our method with ViLD+CLIP performs the best; both parsing and the delta vector contribute to the improvement in performance. (Rows 7-9 use our implementation of CoW [12] in a pre-explored setting instead of object navigation in a novel environment.)

Table 1 shows results on the 3DOC benchmark by applying our proposed approach on top of three different semantic scene representations: ViLD, ViLD+CLIP, and OWL-ViT. We find that in all cases our approach (rows 3, 6, and 9) improves top-1 success rate (SR) for both a single round (M=1) and two rounds of interaction (M=2).
Specifically, with ViLD+CLIP our approach improves average top-1 success rate (SR) by +17.8 points (26.0→43.8) for M=1 over basic parsing (row 5 vs. 6). With two rounds of interaction (M=2), we observe similar gains in average top-1 SR of +20.5 points (38.3→58.8, row 5 vs. 6).

For intrinsic attribute differentiation, we find that basic instruction parsing (e.g., converting the multi-round dialog 'Find my wicker basket. No, my basket is also white.' to 'white and wicker basket') consistently leads to improvements (row 1 vs. 2, 4 vs. 5, and 7 vs. 8). For example, with the ViLD+CLIP representation, basic parsing improves intrinsic top-1 SR by +5.3 points (33.4→38.7) for M=1 and +3.8 points (41.2→45.0) for M=2 (row 4 vs. 5). These results indicate that these scene representations are sensitive to spurious words unrelated to attributes or objects, and highlight the need for instruction parsing for multi-round interaction.

Furthermore, we find that our approach of using CLIP delta vectors for intrinsic attribute differentiation consistently improves top-1 SR (row 2 vs. 3, 5 vs. 6, and 8 vs. 9). For example, with the ViLD+CLIP representation, intrinsic top-1 SR improves by +1.6 points (38.7→40.3) for M=1 and +1.4 points (45.0→46.4) for M=2 (row 5 vs. 6). With ViLD representations, we observe larger improvements in top-1 SR of +4.0 points (23.8→27.8) for M=1 and +5.4 points (29.6→35.0) for M=2 (row 2 vs. 3). We emphasize that these scene representations are built on CLIP [18], which was trained to differentiate intrinsic attribute mentions that are likely contained in the WIT dataset [18]. Despite such training, our proposed approach substantially improves over this strong baseline.

For extrinsic differentiation, we find that our proposed approach of separating instructions (e.g., 'Find the mug on the table.') into their component parts and enforcing spatial constraints (i.e., proximity) significantly improves performance. For instance, with ViLD+CLIP representations, extrinsic top-1 SR improves by +34.2 points (13.2→47.4) for M=1 (row 5 vs. 6). We observe further gains for multi-round dialogs (M=2) when an intrinsic attribute is provided as an additional clue (e.g., 'Find the mug on the table. No, my mug is white and yellow.') of +39.5 points (31.6→71.1). These results highlight the benefits of composing multiple capabilities (intrinsic and extrinsic) through the API introduced in Section 4.2, which builds on top of open-vocabulary scene representations for fine-grained object localization.

5.3 Qualitative Results on a Real Robot

As a last step, we deployed our method on a mobile robot to qualitatively show generalization to a real-world setting. As seen in Figure 4, the robot is a mobile manipulator, deployed in an office kitchen environment, with a 640×512 RGB camera image observation. In each of our runs, the robot successfully identifies a 'black upside-down mug' and a 'bowl full of cereal' through two rounds of interaction. Specifically, in the first round the robot finds a different 'mug' or 'bowl', but correctly identifies the target after a second round of interaction. Owing to the scale and generality of the CLIP-based vision backbone, we find this real-world application performs well with no adaptations to the algorithm. See additional details in Appendix C.

Figure 4: Real robot qualitative results. We implemented our approach on a mobile robot and show qualitative results in which our algorithm guides the robot through the environment to find objects specified by complex fine-grained language, e.g., 'an upside down mug' or 'a bowl full of cereal'.

6 Discussion

We proposed a novel algorithm called GoFind for fine-grained object localization in complex indoor environments using natural language queries and visual attributes.
We addressed the challenge of disambiguating and locating the particular object instance desired through a dialog with the user, and exploited visual attributes of the object, which may be intrinsic or extrinsic and are expressed in an open vocabulary. We demonstrated that the visual common sense learned by large language models enables fine-grained object localization and attribute differentiation in a zero-shot manner. We also provided a new visio-linguistic dataset, 3D Objects in Context (3DOC), for evaluating agents on the FindThis task. Finally, we validated our approach on a real robot operating in an unstructured physical office environment using complex fine-grained language instructions.

Limitations. Our proposed method showed improvements in performance over existing methods on benchmark datasets and demonstrated qualitative results on a mobile robot. However, there are still limitations to our approach. A key limitation is that we use a pipeline of region proposal detection and feature extraction, which might miss some objects, causing irreversible misdetections. Another limitation is that we use frontier exploration to do exhaustive search in all our experiments, while dynamic stopping algorithms could potentially provide a shorter path length. We hope that our work provides a solid baseline and a dataset for further research in this area and leads to the development of more robust and efficient algorithms for natural language object disambiguation in complex indoor environments.

Acknowledgments

The Georgia Tech effort was supported in part by ONR YIP and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.

References

[1] T. Winograd. Procedures as a representation for data in a computer program for understanding natural language. Technical report, Massachusetts Institute of Technology, Project MAC, 1971.

[2] S. Kazemzadeh, V. Ordonez, M. Matten, and T. Berg. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787–798, 2014.

[3] P. Achlioptas, A. Abdelreheem, F. Xia, M. Elhoseiny, and L. J. Guibas. ReferIt3D: Neural listeners for fine-grained 3D object identification in real-world scenes. In 16th European Conference on Computer Vision (ECCV), 2020.

[4] D. Z. Chen, A. X. Chang, and M. Nießner. ScanRefer: 3D object localization in RGB-D scans using natural language. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XX, pages 202–221. Springer, 2020.

[5] V. Cirik, T. Berg-Kirkpatrick, and L.-P. Morency. Refer360°: A referring expression recognition dataset in 360° images. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7189–7202, Online, July 2020. Association for Computational Linguistics. doi:10.18653/v1/2020.acl-main.644. URL https://aclanthology.org/2020.acl-main.644.

[6] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. Reid, S. Gould, and A. van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

[7] A. Ku, P. Anderson, R. Patel, E. Ie, and J. Baldridge.
Room-Across-Room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. arXiv preprint arXiv:2010.07954, 2020.

[8] H. Chen, A. Suhr, D. Misra, N. Snavely, and Y. Artzi. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12538–12547, 2019.

[9] P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, et al. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757, 2018.

[10] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans. ObjectNav revisited: On evaluation of embodied agents navigating to objects. arXiv preprint arXiv:2006.13171, 2020.

[11] Z. Al-Halah, S. K. Ramakrishnan, and K. Grauman. Zero experience required: Plug & play modular transfer learning for semantic visual navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17031–17041, 2022.

[12] S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. CoWs on Pasture: Baselines and benchmarks for language-driven zero-shot object navigation. arXiv preprint arXiv:2203.10421, 2022.

[13] A. Majumdar, G. Aggarwal, B. S. Devnani, J. Hoffman, and D. Batra. ZSON: Zero-shot object-goal navigation using multimodal goal embeddings. In Advances in Neural Information Processing Systems, 2022.

[14] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh, and V. Vanhoucke. Google Scanned Objects: A high-quality dataset of 3D scanned household items. In ICRA, 2022.

[15] S. K. Ramakrishnan, A. Gokaslan, E. Wijmans, O. Maksymets, A. Clegg, J. M. Turner, E. Undersander, W. Galuba, A. Westbury, A. X. Chang, M. Savva, Y. Zhao, and D. Batra. Habitat-Matterport 3D dataset (HM3D): 1000 large-scale 3D environments for embodied AI. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=-v4OuqNs5P.

[16] X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Open-vocabulary object detection via vision and language knowledge distillation. arXiv preprint arXiv:2104.13921, 2021.

[17] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detection with vision transformers. arXiv preprint arXiv:2205.06230, 2022.

[18] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

[19] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

[20] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N.
Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

[21] Y. Qi, Q. Wu, P. Anderson, X. Wang, W. Y. Wang, C. Shen, and A. v. d. Hengel. REVERIE: Remote embodied visual referring expression in real indoor environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9982–9991, 2020.

[22] F. Zhu, X. Liang, Y. Zhu, Q. Yu, X. Chang, and X. Liang. SOON: Scenario oriented object navigation with graph-based exploration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12689–12699, 2021.

[23] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. Reid, S. Gould, and A. Van Den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3674–3683, 2018.

[24] J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer. Vision-and-dialog navigation. In arXiv, 2019.

[25] A. Majumdar, A. Shrivastava, S. Lee, P. Anderson, D. Parikh, and D. Batra. Improving vision-and-language navigation with image-text pairs from the web. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part VI, pages 259–274. Springer, 2020.

[26] J. Krantz, E. Wijmans, A. Majumdar, D. Batra, and S. Lee. Beyond the nav-graph: Vision-and-language navigation in continuous environments. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVIII, pages 104–120. Springer, 2020.

[27] D. Shah, B. Osinski, B. Ichter, and S. Levine. LM-Nav: Robotic navigation with large pre-trained models of language, vision, and action. arXiv preprint arXiv:2207.04429, 2022.

[28] D. S. Chaplot, D. P. Gandhi, A. Gupta, and R. R. Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. Advances in Neural Information Processing Systems, 33:4247–4258, 2020.

[29] A. Wahid, A. Stone, K. Chen, B. Ichter, and A. Toshev. Learning object-conditioned exploration using distributed soft actor critic. In Conference on Robot Learning, pages 1684–1695. PMLR, 2021.

[30] J. Ye, D. Batra, A. Das, and E. Wijmans. Auxiliary tasks and exploration enable ObjectGoal navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16117–16126, 2021.

[31] B. Mayo, T. Hazan, and A. Tal. Visual navigation with spatial attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16898–16907, 2021.

[32] R. Ramrakhya, E. Undersander, D. Batra, and A. Das. Habitat-Web: Learning embodied object-search strategies from human demonstrations at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5173–5183, 2022.

[33] S. K. Ramakrishnan, D. S. Chaplot, Z. Al-Halah, J. Malik, and K. Grauman.
PONI: Potential functions for ObjectGoal navigation with interaction-free learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18890–18900, 2022.

[34] C. Jia, Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. Le, Y.-H. Sung, Z. Li, and T. Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.

[35] H. Pham, Z. Dai, G. Ghiasi, K. Kawaguchi, H. Liu, A. W. Yu, J. Yu, Y.-T. Chen, M.-T. Luong, Y. Wu, et al. Combined scaling for open-vocabulary image classification. arXiv e-prints, pages arXiv–2111, 2021.

[36] A. Kamath, M. Singh, Y. LeCun, G. Synnaeve, I. Misra, and N. Carion. MDETR: Modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780–1790, 2021.

[37] B. Li, K. Q. Weinberger, S. Belongie, V. Koltun, and R. Ranftl. Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546, 2022.

[38] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler. Open-vocabulary queryable scene representations for real world planning. arXiv preprint arXiv:2209.09874, 2022.

[39] C. Huang, O. Mees, A. Zeng, and W. Burgard. Visual language maps for robot navigation. arXiv preprint arXiv:2210.05714, 2022.

[40] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra. Habitat: A platform for embodied AI research. In ICCV, 2019.

[41] A. Szot, A. Clegg, E. Undersander, E. Wijmans, Y. Zhao, J. Turner, N. Maestre, M. Mukadam, D. Chaplot, O. Maksymets, A. Gokaslan, V. Vondrus, S. Dharur, F. Meier, W. Galuba, A. Chang, Z. Kira, V. Koltun, J. Malik, M. Savva, and D. Batra. Habitat 2.0: Training home assistants to rearrange their habitat. In NeurIPS, 2021.

[42] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3D objects. arXiv preprint arXiv:2212.08051, 2022.

[43] B. Yamauchi. A frontier-based approach for autonomous exploration. In Proceedings 1997 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA'97): 'Towards New Computational Principles for Robotics and Automation', pages 146–151. IEEE, 1997.

A Few-Shot Prompt

We use the few-shot prompt shown in Listing 1 for the 3DOC experiments in Section 5.2. The prompt uses Python syntax and begins with an import statement that indicates the functions available from the fine-grained object localization API (Section 4.2). User queries are converted to lower case, the ending punctuation is removed, and they are prepended with a # (which represents a comment in Python). Then, the preprocessed query is added to the few-shot prompt after an empty line. The modified prompt is then processed with an LLM. For multi-round dialog, additional interactions or corrections are added after the response from the LLM without an empty-line separator (as demonstrated in the second and third few-shot examples).
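As a rough sketch of this prompting loop (not the released implementation; llm_complete stands in for an arbitrary text-completion endpoint, and the prompt file name is hypothetical):

FEW_SHOT_PROMPT = open('few_shot_prompt.py').read()  # contents of Listing 1

def preprocess(query):
    # Lower-case the query, strip the ending punctuation, and prepend '# '
    # so it reads as a Python comment, matching the few-shot examples.
    return '# ' + query.lower().rstrip('.?!')

def parse_dialog(rounds, llm_complete):
    # Translate a multi-round dialog into the source of a go_find_it()
    # function composed of localization-API calls (Section 4.2).
    prompt = FEW_SHOT_PROMPT
    completion = ''
    for i, utterance in enumerate(rounds):
        # The first round starts a new example after an empty line; later
        # corrections are appended directly, with no empty-line separator.
        prompt += ('\n\n' if i == 0 else '\n') + preprocess(utterance) + '\n'
        completion = llm_complete(prompt)
        prompt += completion
    return completion  # Python source of the final go_find_it()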
from utils import find_objects, sort_objects, is_close

# find the wooden chair
def go_find_it():
    chairs = find_objects('chair', distractors=['sofa', 'bench', 'stool', 'desk'])
    chairs = sort_objects(chairs, attributes=['wooden'])
    return chairs[0]

# find a backpack with white polka dots on it
def go_find_it():
    backpacks = find_objects('backpack', distractors=['pillow', 'jacket', 'shirt'])
    backpacks = sort_objects(backpacks, attributes=['white polka dots'])
    return backpacks[0]

# no, my backpack is also yellow
def go_find_it():
    backpacks = find_objects('backpack', distractors=['pillow', 'jacket', 'shirt'])
    backpacks = sort_objects(backpacks, attributes=['white polka dots', 'yellow'])
    return backpacks[0]

# find the pants on the dresser
def go_find_it():
    pants = find_objects('pants', distractors=['shirt', 'socks', 'shoes', 'dress'])
    dressers = find_objects('dresser', distractors=['bookshelf', 'bed', 'desk', 'chair'])
    pants = is_close(pants, dressers)
    return pants[0]

# no, my pants are also red
def go_find_it():
    pants = find_objects('pants', distractors=['shirt', 'socks', 'shoes', 'dress'])
    pants = sort_objects(pants, attributes=['red'])
    dressers = find_objects('dresser', distractors=['bookshelf', 'bed', 'desk', 'chair'])
    pants = is_close(pants, dressers)
    return pants[0]

# find the apple next to the microwave
def go_find_it():
    apples = find_objects('apple', distractors=['pear', 'tomato', 'orange', 'bowl'])
    microwaves = find_objects('microwave', distractors=['dishwasher', 'sink', 'refrigerator'])
    apples = is_close(apples, microwaves)
    return apples[0]

Listing 1: Few-shot prompt used for experiments in simulation.

B 3DOC Statistics

Table 2 provides additional details on the composition of the 3DOC dataset.

Table 2: 3DOC Statistics
Num of Object Categories             10
Num of Unique Objects                72
Num of Unique Intrinsic Attributes   53
Num of Unique Scenes                100
Num of Unique Scene Layouts         500
Num of Extrinsic Attribute Episodes  50
Num of Intrinsic Attribute Episodes  1,713

C Qualitative Examples

A video presenting qualitative examples of our GoFind agent executing the FindThis task in both simulation and the real world is provided in the supplemental material.

D Dataset and Code Release

The 3DOC dataset and source code to reproduce the results presented in Section 5.2 will be publicly released.
Stabilize to Act: Learning to Coordinate for Bimanual Manipulation

Jennifer Grannen, Yilin Wu, Brandon Vu, Dorsa Sadigh
Stanford University, Stanford, CA
jgrannen@stanford.edu

Abstract: Key to rich, dexterous manipulation in the real world is the ability to coordinate control across two hands. However, while the promise afforded by bimanual robotic systems is immense, constructing control policies for dual-arm autonomous systems brings inherent difficulties. One such difficulty is the high dimensionality of the bimanual action space, which adds complexity to both model-based and data-driven methods. We counteract this challenge by drawing inspiration from humans to propose a novel role assignment framework: a stabilizing arm holds an object in place to simplify the environment while an acting arm executes the task. We instantiate this framework with BimanUal Dexterity from Stabilization (BUDS), which uses a learned restabilizing classifier to alternate between updating a learned stabilization position to keep the environment unchanged, and accomplishing the task with an acting policy learned from demonstrations. We evaluate BUDS on four bimanual tasks of varying complexities on real-world robots, such as zipping jackets and cutting vegetables. Given only 20 demonstrations, BUDS achieves 76.9% task success across our task suite, and generalizes to out-of-distribution objects within a class with a 52.7% success rate. BUDS is 56.0% more successful than an unstructured baseline that instead learns a BC stabilizing policy, due to the precision required of these complex tasks. Supplementary material and videos can be found at https://tinyurl.com/stabilizetoact.

Keywords: Bimanual Manipulation, Learning from Demonstrations, Deformable Object Manipulation

1 Introduction

Bimanual coordination is pervasive, spanning household activities such as cutting food, surgical skills such as suturing a wound, or industrial tasks such as connecting two cables. In robotics, the addition of a second arm opens the door to a higher level of task complexity, but comes with a number of control challenges. With a second arm, we have to reason about how to produce coordinated behavior in a higher-dimensional action space, resulting in more computationally challenging learning, planning, and optimization problems. The addition of a second arm also complicates data collection: it requires teleoperating a robot with more degrees of freedom, which hinders our ability to rely on methods that require expert bimanual demonstrations. To combat these challenges, we can draw inspiration from how humans tackle bimanual tasks: specifically, alternating between using one arm to stabilize parts of the environment, then using the other arm to act conditioned on the stabilized state of the world.

Alternating stabilizing and acting offers a significant gain over both model-based and data-driven prior approaches for bimanual manipulation. Previous model-based techniques have proposed planning algorithms for bimanual tasks such as collaborative transport or scooping [1, 2, 3], but require hand-designed specialized primitives or follow predefined trajectories, limiting their abilities to learn new skills or adapt. On another extreme, we turn to reinforcement learning (RL) techniques that do not need costly primitives. However, RL methods are notoriously data hungry, and a high-dimensional bimanual action space further exacerbates this problem.
While simulation-to-real transfer techniques offer an appealing alternative [4, 5, 6, 7], a key component of bimanual tasks is closed-chain contacts and high-force interactions (consider cutting or connecting cables), which are hard to simulate and widen the gap with reality [8, 9, 10]. A more promising data-driven approach is learning from demonstration. However, collecting high-dimensional bimanual demonstrations is difficult, as simultaneously controlling two high-degree-of-freedom arms often requires specialized hardware or multiple operators [11, 12, 8, 13, 14]. The increased dimensionality of the action space also necessitates significantly more data, especially for more precise or dexterous tasks [15].

Figure 1: BUDS: BimanUal Dexterity from Stabilization. BUDS is a bimanual manipulation framework that uses a novel stabilizing and acting role assignment to efficiently learn to coordinate. For the stabilizing role, BUDS learns a Stabilizing Position Model (1) to predict a point to hold stationary using a noncompliant controller (2). In this simplified environment, BUDS learns to act from single-arm demonstrations (3). Combined, these two actions comprise a bimanual policy (4). Finally, at every timestep, BUDS's Restabilizing Classifier (5) predicts whether the stabilizing position is still effective or needs to be updated.

Our insight about how humans iterate between stabilizing and acting presents a way to overcome these challenges. In tasks such as plugging in a phone or cutting a steak, a stabilizing arm holds an object (e.g., the phone or steak) stationary to simplify the environment, making it easier for the acting arm to complete the task with high precision. Factoring control across stabilizing and acting additionally offers benefits for data collection: the role-specific policy can be learned independently for each arm, bypassing the need for bimanual demonstrations. Adjusting a stabilizing position iteratively as the acting arm progresses enables even more expressive and generalizable behavior. For example, a fork should skewer a steak at different points depending on where the knife cuts.

Thus, the key insight driving this work is that to enable sample-efficient, generalizable bimanual manipulation, we need two roles: a stabilizing arm stabilizes an object to simplify the environment for an acting arm to perform the task.

We propose BimanUal Dexterity from Stabilization (BUDS), a method that realizes this coordination paradigm by decomposing the bimanual problem into two single-arm problems: learning to stabilize and learning to act. The stabilizing policy decides where to stabilize in the scene and when to adjust, while the acting arm learns to perform the task in this simpler environment. For example, when cutting a steak, our stabilizing policy learns where to hold the steak and when to adjust so the steak remains stationary while the acting policy makes the cut.

To learn where to stabilize, we use a vision-based system that takes an environment image as input and outputs a stabilization keypoint position. We then learn a restabilizing classifier that determines from images when the stabilizing keypoint is no longer effective and needs to be updated. We deploy this stabilizing policy while collecting single-arm trajectory demonstrations for an acting policy, to sidestep the need for a precise and expensive bimanual demonstration collection interface.
Using these demonstrations, the acting arm learns a policy via imitation learning to accomplish the task in this simplified, stationary environment. We demonstrate the efficacy of this paradigm on four diverse, dexterous manipulation tasks on a physical UR16e dual-arm platform. BUDS achieves 76.9% success, and outperforms an unstructured baseline fully learned from expert trajectory demonstrations by 56.0%. Additionally, BUDS achieves 52.7% success when generalizing to unseen objects of similar morphology (e.g., transferring a cutting policy trained on jalapeños to cutting zucchini and celery).

Our contributions are: (1) A paradigm for learning bimanual policies with role assignments, where the stabilizing arm stabilizes the environment and reduces the non-stationarity present while an acting arm learns to perform a task, allowing a simpler learning setting, (2) A framework for collecting bimanual demonstrations that bypasses the need for a dual-arm interface by collecting demonstrations for the stabilizing and acting roles independently, and (3) A system, BUDS, that instantiates this paradigm to learn a centralized bimanual policy to perform a broad range of tasks.

2 Related Work

In this section, we describe the current data-driven and model-based methods available for bimanual manipulation tasks, along with prior work using ideas of stabilizing for manipulation.

Learning-based Bimanual Manipulation. A recurring challenge throughout bimanual manipulation is the high dimensionality of the action space. This appears in both reinforcement learning (RL) and imitation learning (IL) works [16, 11, 17, 18, 19, 20, 21]. Prior multi-agent coordination works have considered shrinking the high dimensionality of the problem by having a second agent stabilize a latent intent [22]; however, learning a stabilizing policy and a latent intent mapping both require a significant amount of data, which is not realistic for physical robot manipulation tasks.

RL methods for learning high-frequency bimanual control policies can require a large number of samples and many hours of robot time, which makes simulation-to-real policy transfer an appealing approach [21, 7, 6]. However, sim-to-real approaches are limited to settings where the sim-to-real gap is small, which precludes many contact-rich bimanual tasks such as zipping a zipper or cutting food [8, 9, 10]. Instead, works in both RL and IL settings have proposed using parameterized movement primitives to reduce the action space, and have achieved reasonable success on tasks such as opening a bottle and lifting a ball [17, 19, 20, 23, 24, 25, 26, 27, 28, 29, 15, 30]. However, these movement primitives greatly limit the tasks achievable by the method, as they often require costly demonstrations or labor-intensive hard-coded motions for each task-specific primitive. Additionally, learning from demonstrations in bimanual settings is difficult, as teleoperating two high-degree-of-freedom robots or collecting kinesthetic demonstrations on both arms simultaneously is challenging and sometimes impossible for a single human, and may require specialized hardware [16, 11, 31, 12, 8, 13]. Recent works have demonstrated more effective interfaces for data collection in a bimanual setting, but these interfaces are limited to specific hardware instantiations and would still require large amounts of expert data to learn a high-dimensional policy [14].
To avoid the need for expert bimanual demonstrations, we use a novel stabilizing paradigm to decouple the arms' policies and learn a role-specific policy for each arm from single-arm demonstrations. This added structure also brings down the dimensionality of the large action space in a task-agnostic way.

Model-based Bimanual Manipulation. The majority of model-based bimanual manipulation methods are limited to using planning and constraint-solving methods to jointly move or hold a large object [32, 33, 34, 35, 12, 2, 1, 36]. Bersch et al. [37] and Grannen et al. [3] present systems using a sequence of hard-coded actions for folding a shirt and scooping food, respectively. However, as tasks become more complex, the primitives required also become more unintuitive and costly to hand-design. We instead learn a control policy from single-arm demonstrations, avoiding the need for labor-intensive hand-coded primitives while performing dexterous bimanual manipulation tasks.

Stabilizing for Manipulation. Stabilizing and fixturing can yield large benefits in a manipulation context by providing additional steadiness for high-precision tasks and unwieldy object interactions. Early works in industrial robotics have proposed planners for autonomous fixture placement that reason about friction forces [38] or use CAD model designs [39] to add structure to the environment. More recent works have used additional fixture arms or vises to bootstrap sample efficiency [40] or avoid robot force and torque limits [41]. Similarly, Chen et al. [42] consider a collaborative setting in which an assistive robot arm reasons about forces to hold a board steady for a human to cut or drill into. The addition of an assistive stabilizing role naturally points towards a bimanual setting, and indeed many bimanual manipulation works implicitly use a stabilizing role in their designs [23, 11, 3, 21]. Holding food in place while cutting is, perhaps, an obvious application of stabilizing, and this assistance is critical for overcoming the highly variable geometries and dynamics of food [43, 44, 45]. While prior stabilizing works are limited to task-specific systems, we propose a general bimanual system that learns from demonstrations how to stabilize and act for a variety of tasks.

3 Stabilizing for Bimanual Tasks

Given a set of expert demonstrations, we aim to produce a bimanual policy for executing a variety of manipulation tasks, such as capping a marker or zipping up a jacket. We formulate each bimanual task as a sequential decision-making problem defined by components (O, A). Each observation o_t comprises an RGB image frame f_t ∈ R^{H×W×3} and the proprioceptive state of each arm p_t ∈ R^{14}. A is the action space for the two robot arms, containing 14-degree-of-freedom (DoF) joint actions a_t. We define a_t = (a^s_t, a^a_t), where a^s_t, a^a_t ∈ R^7 are the stabilizing and acting arm actions respectively. We are in a model-free setting and make no assumptions on the unknown transition dynamics.

To perform these bimanual tasks, we use a bimanual manipulator operating in a workspace that is reachable by both arms, along with a standard (x, y, z) coordinate frame in this workspace. We use depth cameras with known intrinsics and extrinsics, which allows us to obtain a mapping from a pixel (f_x, f_y) in image space to a coordinate (x, y, z) in the workspace, which we later refer to as a keypoint.
To learn our bimanual policies, we first assume access to a set of expert bimanual demonstrations D, and later relax this assumption to two sets of expert unimanual demonstrations D^a and D^s to avoid the challenges of collecting bimanual demonstrations. Each demonstration is a sequence of observation-action pairs that constitute an expert trajectory. First, we consider bimanual demonstrations [(o_1, a^s_1, a^a_1), (o_2, a^s_2, a^a_2), ...] ∈ D to discuss the challenges of learning a monolithic policy. Next, we pivot to decoupling the bimanual policy with two unimanual datasets: [(o_1, a^a_1), (o_2, a^a_2), ...] ∈ D^a and [(o_1, a^s_1), (o_2, a^s_2), ...] ∈ D^s.

3.1 Monolithic 14-DoF Policy

Let us first consider learning a monolithic 14-DoF policy π_θ(a^s_t, a^a_t | o_t), parameterized by θ, via behavioral cloning, which takes an observation o_t as input and outputs a bimanual action (a^s_t, a^a_t). We aim to find a policy that matches the expert demonstrations in D by minimizing this supervised loss:

L(θ) = −E_{(o, a^s, a^a) ∼ D} log π_θ(a^s, a^a | o).    (1)

While this is feasible in theory, in practice learning policies in this way is highly dependent on clean and consistent demonstration data for both arms acting in concert. However, as mentioned in Section 2, collecting such data is challenging, and these difficulties are further exacerbated for precise and dexterous tasks. Motivated by stabilizing structures across many bimanual tasks, we sidestep these challenges by utilizing a task-agnostic role assignment while learning bimanual policies.

3.2 Stabilizing for Reducing Control Dimensionality

We observe that a wide variety of human bimanual tasks leverage a similar paradigm: one arm stabilizes objects in the scene to simplify the environment while the other arm acts to accomplish the task. We translate this observation into a generalizable robotics insight: assign either a stabilizing or an acting role to each arm to specify a coordination framework. Thus, we decompose our bimanual policy ⟨a^s_t, a^a_t⟩ ∼ π(·|o_t) into two role-specific policies: a stabilizing policy a^s_t ∼ π^s_{θs}(·|o_t, a^a_t) and an acting policy a^a_t ∼ π^a_{θa}(·|o_t, a^s_t). These policies are co-dependent; we aim to disentangle them.

Given these roles, we make a crucial insight: for a given acting policy subtrajectory (a^a_i, ..., a^a_{i+k}), there exists a single stabilizing action ā^s that works as a "fixture" for holding constant a task-specific part of the environment. For example, consider the role of a fork pinning a steak to a plate to facilitate cutting with the knife. These stabilizing fixtures act to reduce the dimensionality of the control problem for the other arm, as the environment is less susceptible to drastic changes. We characterize this constant task-specific region with a learned task-relevant representation φ: O ↦ R^j for some j, and we later instantiate a stabilizing fixture ā^s with a keypoint representation in Section 4.1 and execute non-compliant motions at this keypoint. Finally, we isolate our stabilizing policy π^s_{θs}(a^s_t | φ(o_{t−1}), φ(o_t)) from the acting policy with a loss that penalizes the expected change in a task-relevant region of the environment:

L(θ_s) = Σ_{t=0}^{k} E_{a^a_t ∼ π^a_{θa}(·|o_t, a^s_t)} ||φ(o_t) − φ(o_{t−1})||.    (2)

Given the stabilizing action ā^s_t ∼ π^s_{θs}(φ(o_{t−1}), φ(o_t)), we obtain an acting action a^a_t ∼ π^a_{θa}(·|o_t, ā^s_t). This stabilizing action is valid for k timesteps, after which the stabilizing fixture must be updated. To obtain this variable length k, we threshold the change in φ(o_{i+n}) from the initial observation φ(o_i) to indicate when a stabilizing fixture is no longer effective:

k = inf{ n : n ≥ i and ||φ(o_{i+n}) − φ(o_i)|| > ε }.    (3)

In practice, we instantiate the task-relevant representation to be stabilized, φ(o_t), as a keypoint model learned from expert demonstrations (using a learned mapping from an image to a keypoint, f^k: i ↦ w^s). We do not solve Eq. (2) directly, but instead utilize a noncompliant controller to hold this point stationary over time (see Section 4.1). Given a stabilizing fixture that is effective for acting actions a^a_{[i, i+k]}, we additionally learn a restabilizing classifier f^r(o_t) ∈ {0, 1} that determines when k has been surpassed and a new learned stabilizing action should be predicted. We describe this implementation further in Section 4 and show in our experiments in Section 5 that this approximation holds.
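A minimal sketch of the thresholding rule in Eq. (3), assuming a feature extractor phi over observations (the function and variable names are illustrative):

import numpy as np

def fixture_horizon(observations, phi, eps):
    # Return k, the number of steps the current fixture stays valid: the
    # first step whose task-relevant features drift more than eps from
    # the features recorded when the fixture was placed (Eq. 3).
    ref = phi(observations[0])
    for n, obs in enumerate(observations[1:], start=1):
        if np.linalg.norm(phi(obs) - ref) > eps:
            return n                      # fixture is no longer effective
    return len(observations)              # fixture held for the whole window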
4 BUDS: BimanUal Dexterity from Stabilization

We describe BimanUal Dexterity from Stabilization (BUDS), which instantiates the stabilizing and acting role assignments of Section 3. As shown in Fig. 1, we learn a model for each role: f^k_θ for stabilizing and π^a_φ for acting, parameterized by weights θ and φ. We also learn a restabilizing classifier f^r_ψ, parameterized by weights ψ. All models are learned from human-annotated images or single-arm teleoperated robot demonstrations, avoiding the difficulties of collecting bimanual demonstrations. All labels and demonstrations are consistent across both arms for any given image.

4.1 Learning a Stabilizing Policy

Algorithm 1: Stabilizing with BUDS
1: while Task Incomplete do
2:   ŵ^s_t = f^k_θ(o_t)
3:   while f^r_ψ(o_t) = 0 do
4:     a^s_t = π^s(ŵ^s_{t−1}, ŵ^s_t)   ▷ {a^s_t : ŵ^s_t ≃ ŵ^s_{t−1}}
5:     a^a_t ∼ π^a_φ(o_t, a^s_t)
6:     Execute a^s_t, a^a_t. Observe o_{t+1}, f^r_ψ(o_{t+1}).
7:   end while
8: end while

From Section 3, we aim to find a stabilizing policy π^s(s(o_{t−1}), s(o_t)) = ā^s_t. Specifically, we aim to learn a task-specific representation s to be stabilized. We observe that when humans stabilize in bimanual tasks, they hold a point stationary over time. Thus, D^s contains two action types: stationary or zero-actions that hold a point in place, and transient actions that move between stabilizing positions. Additionally, this observation implies s can be instantiated as a mapping from an observation o_t to a stabilization position w^s_t. We decompose the stabilizing role into two parts: (1) selecting a stabilization position w^s to hold stationary, and (2) sensing when to update the stabilization position (as in Eq. (3)).

We parameterize w^s as a keypoint on an overhead image of the workspace. We use a ResNet-34 [46] backbone to learn a mapping f^k_θ: R^{640×480×3} → R^{640×480}, which takes as input an overhead image and outputs a Gaussian heatmap centered around the predicted stabilizing keypoint ŵ^s. This mapping is learned from the stationary actions in the demonstration data D^s, which indicate that the arm is at a stabilizing position in the demonstration. In practice, we bypass the need for full trajectory demonstrations and provide supervision in the form of keypoint annotations. Given ŵ^s and a depth value from the overhead camera, a non-compliant controller grasps this 3D point and holds it stationary. Thus, we approximate the stabilizing action a^s_t with the action that keeps the keypoint stationary, i.e., ŵ^s_t ≈ ŵ^s_{t−1}. We can then write π^s(s(o_{t−1}), s(o_t)) as π^s(ŵ^s_{t−1}, ŵ^s_t), a function of two consecutive keypoints learned from demonstrations: ŵ^s_t = f^k_θ(o_t).
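Rendered as a rough Python sketch (the model objects, environment interface, and controller command are assumed interfaces, not the released code), the control loop of Algorithm 1 might look like:

def hold_stationary(w_s):
    # Placeholder for the noncompliant controller command that keeps the
    # grasped keypoint w_s fixed (i.e., a zero-motion stabilizing action).
    return {"type": "hold", "keypoint": w_s}

def buds_control_loop(env, keypoint_model, restabilize_clf, acting_policy):
    # keypoint_model: f_k, image -> stabilizing keypoint w_s.
    # restabilize_clf: f_r, image -> 1 if the fixture must be updated.
    # acting_policy: pi_a, (obs, stabilizing action) -> acting action.
    obs = env.observe()
    while not env.task_complete():
        w_s = keypoint_model(obs.image)       # (re)place the fixture
        while not env.task_complete() and restabilize_clf(obs.image) == 0:
            a_s = hold_stationary(w_s)        # stabilizing arm holds steady
            a_a = acting_policy(obs, a_s)     # acting arm performs the task
            env.execute(a_s, a_a)             # step both arms together
            obs = env.observe()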
The learned keypoint mapping $f^k_\theta$ is trained with a hand-labelled dataset of 30 image and keypoint pairs, where the keypoint is annotated as the stabilizing keypoint $w^s_t$ for the image. We fit a Gaussian heatmap centered at the annotation with a standard deviation of 8 px. This dataset is augmented 10X with a series of label-preserving image transformations [47] (see Appendix A). From this dataset, $f^k_\theta$ learns to predict the keypoint $\hat{w}^s$ for the stabilizing policy to hold stationary.

To determine when to update $w^s$, we close the feedback loop by learning a restabilizing classifier $f^r_\psi: \mathbb{R}^{640 \times 480 \times 3} \to \{0, 1\}$ that maps input workspace images to a binary output indicating whether or not to update $w^s$. This mapping is learned from the transient actions in the demonstration data $\mathcal{D}^s$, which indicate that the stabilizing positions at these states need to be updated. In practice, we forgo using full trajectory demonstrations in favor of supervision in the form of binary expert annotations. We instantiate $f^r_\psi$ with a ResNet-34 [46] backbone and train this classifier with an expert-labelled dataset of 2000 images. For each rollout, an expert marks when in the rollout a new stabilizing position $w^s$ is needed; the preceding images are labelled 0 while the following images are labelled 1. This dataset is augmented 2X with affine image transformations (see Appendix A for details). $f^r_\psi$ learns to predict a binary classification of when the stabilizing point is no longer effective and needs to be updated with $f^k_\theta$. Together, $f^k_\theta$ and $f^r_\psi$ define a stabilizing policy $\pi^s$ as outlined in Algorithm 1.

4.2 Learning an Acting Policy

Given a stabilization policy $\pi^s(\hat{w}^s_{t-1}, \hat{w}^s_t)$, an acting policy $\pi^a_\phi$ learns to accomplish the task in a simpler, stationary environment. We instantiate $\pi^a_\phi$ with a BC-RNN architecture that is trained on 20 single-arm demonstrations. An expert teleoperates the acting arm using a SpaceMouse [48], a 3D joystick interface. During data collection, the stabilizing arm is assumed to be in the expert-labelled stabilizing position $w^s$ and the environment is in a simplified state. $\pi^a_\phi$ optimizes the standard imitation learning loss as defined in Eq. (1), and we refer the reader to Appendix A for more details.

To further increase sample efficiency, we assume that our expert acting demonstrations start from a pre-grasped initial position. To achieve this pre-grasped position, we train an optional grasping keypoint model $f^g$ for the acting policy that maps input workspace images $i_t \in \mathbb{R}^{640 \times 480 \times 3}$ to a Gaussian heatmap centered around the grasp point. This grasping model is instantiated with the same ResNet-34 [46] and dataset parameters as used for the stabilizing keypoint model $f^k_\theta$. The acting arm moves to the keypoint position in a fixed orientation, and grasps to begin the task.
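As a concrete illustration of the heatmap supervision used by both $f^k_\theta$ and $f^g$, here is one way to build the Gaussian targets; the 8 px standard deviation is from the paper, while the unit-peak normalization and (H, W) array layout are our assumptions.

```python
import numpy as np

def heatmap_target(keypoint, height=480, width=640, sigma=8.0):
    """Gaussian heatmap label centered on an annotated keypoint.

    `keypoint` is the annotated stabilizing pixel (u, v).
    """
    u, v = keypoint
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
    return heat.astype(np.float32)  # peak value 1.0 at the annotation
```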
5 Experiments

We validate BUDS on four diverse bimanual tasks. We use two UR16e arms, each with a Robotiq 2F-85 gripper, mounted at a 45° angle off a vertical mount, 0.3 m apart. We use an RTDE-based impedance controller [49] and associated IK solver operating at 10 Hz on an Intel NUC. End effectors move along a linear trajectory between positions. All grasps use a grasping force of 225 N and a fixed orientation. We use three Intel RealSense cameras: two 435 cameras mounted at a side view and on the robot wrist, and one 405 camera mounted overhead. For additional details, see Appendix B.

Bimanual Tasks. We consider four bimanual tasks, as shown in Fig. 3, and test the generalization of BUDS to unseen objects (Fig. 2). Each task requires both a high-precision acting policy and a dynamic stabilizing policy that restabilizes multiple times during task execution. We emphasize the complexity of the coordination required for these dexterous tasks. Together, these four tasks represent a wide range of real-world bimanual manipulation tasks, which highlights the prevalence of the stabilizing and acting role assignments. For all tasks, we vary the initial position of all objects over each trial. For more details and videos, see Appendix B and our website.

• Pepper Grinder. We grind pepper on three plates in order of color—yellow, pink, then blue—as shown in Fig. 3. This task requires restabilizing the pepper grinder over each plate in succession.
• Jacket Zip. We zip a jacket by pinning down the jacket's bottom and pulling the zipper to the top. Due to the jacket's deformability, the robot must pin the jacket as close as possible to the zipper. We train all models with a red jacket, and the keypoint models on two more jackets: dark grey and blue. We aim to generalize to light grey and black jackets with different materials and zippers.
• Marker Cap. We cap three markers in sequence from the bottom to the top of the workspace. This task requires restabilizing after each marker is capped. We train all policies on red, green, and blue Crayola markers and test generalization with Expo and Redimark markers.
• Cut Vegetable. We cut a vegetable half (7-9 cm) into four 1-4 cm pieces with three cuts. This task requires restabilizing the grasp on the vegetable as each cut is made, as the stabilizing arm should hold the vegetable as close as possible to the cut to prevent tearing and twisting. We train on a jalapeño and test generalization with zucchini halves (15-18 cm) and celery sticks (8-10 cm).

Figure 3: Experiment Rollouts: We visualize BUDS experiment rollouts. All tasks alternate between updating a stabilizing position $w^s$ while the acting arm is paused and executing an acting policy while the stabilizing arm holds steady. We visualize both $w^s$ and the acting actions.

Figure 2: Task Generalization: We present the Seen and Unseen objects in the Jacket Zip, Marker Cap, and Cut Vegetable tasks. We classify these OOD objects into two classes, Easy and Hard, based on their visual similarity to the objects seen during training.

Baselines. BC-Stabilizer illustrates the need for a low-dimensional stabilizing representation by replacing the stabilizing keypoint model $f^k_\theta$ with a policy learned from trajectory demonstrations. This policy is instantiated with the same BC-RNN architecture and training procedure as BUDS's acting policy. An oracle classifier determines when BC-Stabilizer has reached a valid stabilizing position, where a non-compliant controller then holds the point stationary as in BUDS while the pre-grasped acting policy from BUDS accomplishes the task. When the restabilizing classifier $f^r_\psi$ from BUDS is triggered, the process repeats. No-Restable ablates BUDS's restabilizing classifier and only senses a single stabilizing point at the beginning of each task. We evaluate No-Restable only on Jacket Zip and Cut Vegetable because the other tasks require an updated stabilizing position to reach complete success.
We do not compare to a Monolithic baseline (as in Section 3.1), as it achieves zero success for all tasks.

Task                  | BC-Stabilizer | No-Restable | BUDS     | BUDS Failures ($\hat{w}^s$ / $\pi^a$ / $f^r_\psi$ / G)
Pepper Grinder        | 39.9 ±21      | –           | 100 ±0   | 0 / 0 / 0 / 0
Jacket Zip (Clean)    | 28.2 ±24      | 58.8 ±39    | 72.1 ±18 | 0 / 3 / 3 / 1
Jacket Zip (Occluded) | 21.6 ±17      | 51.1 ±37    | 55.7 ±37 | 1 / 2 / 1 / 2
Marker Cap            | 0.0 ±0        | –           | 90.1 ±16 | 1 / 2 / 0 / 0
Cut Vegetable         | 15.0 ±17      | 46.6 ±28    | 66.8 ±24 | 2 / 4 / 0 / 3

Table 1: Physical Results: We report average percent success and standard deviation across 10 trials of 4 bimanual tasks with randomly initialized object positions. For Jacket Zip, we classify initial configurations as Clean or Occluded, where none or up to 30% of the zipper is occluded, respectively. We report 4 failure modes: $\hat{w}^s$ stabilizing keypoint, $\pi^a$ acting policy, $f^r_\psi$ restabilizing, and (G) poor grasps. We compare to two baselines: BC-Stabilizer, where a single-arm IL policy replaces the stabilizing keypoint model, and No-Restable, an ablation of BUDS that disregards restabilizing.

We evaluate BUDS on four bimanual tasks that require dynamic restabilizing. Task success is measured as the proportion of the task completed over the total amount to be completed, for example zipped length over total zipper length. As shown in Table 1, BUDS achieves 76.9% success across the four tasks, visualized in Fig. 3. We report four failure modes: (1) an incorrect predicted stabilizing position $w^s$, (2) an acting policy failure $\pi^a$, (3) a restabilizing error $f^r_\psi$ that does not detect when a stabilizing point needs updating, and (4) a failed grasp. The acting policy failure is the most common, due to the low amount of data used to train the acting policy and the high precision required. The stabilizing failures ($w^s$ and $f^r_\psi$) are mostly due to large visual differences from the training data, including occlusions, and cause the environment to quickly move out of distribution from the stable, simplified states seen in the acting policy training data. Across all tasks, BUDS outperforms the unstructured BC-Stabilizer baseline due to the high precision required of a stabilizing role. Where BUDS and BC-Stabilizer both learn a relevant point from a visual input, BC-Stabilizer must also learn the policy to reach this position. Thus, the BC-Stabilizer policy's primary failure mode is selecting a poor stabilizing position—it struggles to learn a stabilizing policy robust across many task configurations, as indicated by its 20.9% success rate. BUDS also outperforms No-Restable in Jacket Zip (Clean) and Cut Vegetable, highlighting the need for closed-loop restabilizing. BUDS and No-Restable achieve similar success on Jacket Zip (Occluded) because the biggest challenge in this task is the jacket's deformability and occlusions, which restabilizing alone cannot solve.

Task          | BUDS OOD Easy | BUDS OOD Hard | 40-Demo Hard | BUDS Failures ($w^s$ / $\pi^a$ / $f^r_\psi$ / G)
Jacket Zip    | 62.3 ±40      | 28.8 ±27      | 23.1 ±25     | 10 / 3 / 0 / 2
Marker Cap    | 60.0 ±14      | 53.3 ±39      | 56.7 ±39     | 17 / 1 / 0 / 0
Cut Vegetable | 85.0 ±13      | 26.6 ±26      | 30.0 ±33     | 4 / 6 / 0 / 6

Table 2: Generalizability Results: We test BUDS's robustness to OOD objects of similar morphology. The Easy and Hard OOD objects are respectively more and less similar in visual appearance and dynamics to training objects (Fig. 2). We report average and standard deviation of success over ten trials per object, along with failure modes over 20 trials.
We compare to 40-Demo, whose acting policy is trained on 40 demonstrations, but do not observe a performance difference on Hard objects.

We test BUDS's generalizability to out-of-distribution (OOD) objects classified into two classes based on visual similarity to training objects (Fig. 2). We run 10 trials per object, and find BUDS achieves an average success rate of 52.7% (Table 2). In two of the three tasks, we observe a slight performance drop compared to in-distribution settings (Table 1), with a worsening difference for Hard objects. With this expected performance drop, we observe more stabilizing failures ($w^s$ and $f^r_\psi$) due to the stabilizing policy's high visual dependence, which struggles with novel object appearances. For Jacket Zip, we attempt to improve performance by training the stabilizing keypoint model $f^k_\theta$ on three jackets, but the policy still falls short on the vastly different Hard black jacket. 40-Demo aims to improve robustness by training the acting policy on double the data, but again does not significantly improve performance due to the Hard objects' large visual and dynamic differences compared to the training objects, which cannot be remedied with more in-distribution data. We note an exception: the Easy zucchini in Cut Vegetable has a higher success rate than that of the in-distribution jalapeño. The hollow jalapeño twists and tears, which is unforgiving of slight acting policy errors, while the solid zucchini can withstand shear forces from noisy policies, yielding more success.

6 Conclusion

We present BUDS, a system for dexterous bimanual manipulation that leverages a novel role-assignment paradigm: a stabilizing arm holds a point stationary for the acting arm to act in a simplified environment. BUDS uses a learned keypoint as the stabilizing point and learns an acting policy from unimanual trajectory demonstrations. BUDS also learns a restabilization classifier to detect when a stabilizing point should be updated during rollouts. BUDS achieves 76.9% and 52.7% success on four bimanual tasks with objects seen and unseen during training, respectively.

Limitations and Future Work. Because BUDS uses only visual inputs, it struggles with visually different novel objects unseen during training—BUDS can zip many jackets but struggles with dresses. Thus, BUDS also falls short when tactile feedback is critical, such as plugging in a USB. BUDS assumes fixed roles in each task, which would not hold for tasks where the arms must alternate. In future work, we will explore policies for role assignment, which could be planned to avoid collisions or learned to enable more nuanced tradeoffs. We will incorporate tactile sensing for more sensitive stabilizing, towards tasks like buttoning a shirt.

Acknowledgments

This project was sponsored by NSF Awards 2006388, 2125511, and 2132847, the Office of Naval Research (ONR), an Air Force Office of Scientific Research YIP award, and the Toyota Research Institute. Jennifer Grannen is further grateful to be supported by an NSF GRFP. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. We additionally thank our colleagues who provided helpful feedback and suggestions, especially Suneel Belkhale and Sidd Karamcheti.

References

[1] D. P. Losey, M. Li, J. Bohg, and D. Sadigh. Learning from My Partner's Actions: Roles in Decentralized Robot Teams. In Conf. on Robot Learning (CoRL), 2019.

[2] E. Ng, Z. Liu, and M. Kennedy III.
It Takes Two: Learning to Plan for Human-Robot Cooperative Carrying. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2023.

[3] J. Grannen, Y. Wu, S. Belkhale, and D. Sadigh. Learning Bimanual Scooping Policies for Food Acquisition. In Conf. on Robot Learning (CoRL), 2022.

[4] S. Höfer, K. Bekris, A. Handa, J. C. Gamboa, M. Mozifian, F. Golemo, C. Atkeson, D. Fox, K. Goldberg, J. Leonard, C. Karen Liu, J. Peters, S. Song, P. Welinder, and M. White. Sim2Real in Robotics and Automation: Applications and Challenges. IEEE Transactions on Automation Science and Engineering, 18(2):398–400, 2021. doi:10.1109/TASE.2021.3064065.

[5] Z. Fu, X. Cheng, and D. Pathak. Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion. CoRL, 2022.

[6] Y. Chen, Y. Yang, T. Wu, S. Wang, X. Feng, J. Jiang, S. M. McAleer, H. Dong, Z. Lu, and S.-C. Zhu. Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning, 2022.

[7] S. Kataoka, S. K. S. Ghasemipour, D. Freeman, and I. Mordatch. Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning, 2022.

[8] S. Stepputtis, M. Bandari, S. Schaal, and H. B. Amor. A system for imitation learning of contact-rich bimanual manipulation policies. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.

[9] Íñigo Elguea-Aguinaco, A. Serrano-Muñoz, D. Chrysostomou, I. Inziarte-Hidalgo, S. Bøgh, and N. Arana-Arexolaleiba. A review on reinforcement learning for contact-rich robotic manipulation tasks. Robotics and Computer-Integrated Manufacturing, 81:102517, 2023. ISSN 0736-5845.

[10] O. Kroemer, S. Niekum, and G. Konidaris. A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms. Journal of Machine Learning Research, 22(30):1–82, January 2021.

[11] L. P. Ureche and A. Billard. Constraints extraction from asymmetrical bimanual tasks and their use in coordinated behavior. Robotics and Autonomous Systems, 103:222–235, 2018. ISSN 0921-8890. doi:10.1016/j.robot.2017.12.011.

[12] C. Smith, Y. Karayiannidis, L. Nalpantidis, X. Gratal, P. Qi, D. V. Dimarogonas, and D. Kragic. Dual arm manipulation—A survey. Robotics and Autonomous Systems, 60(10):1340–1353, 2012. ISSN 0921-8890. doi:10.1016/j.robot.2012.07.005.

[13] R. Lioutikov, O. Kroemer, G. Maeda, and J. Peters. Learning manipulation by sequencing motor primitives with a two-armed robot. 302:1601–1611, 01 2016. doi:10.1007/978-3-319-08338-4_115.

[14] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware, 2023.

[15] F. Xie, A. Chowdhury, M. C. D. P. Kaluza, L. Zhao, L. L. S. Wong, and R. Yu. Deep imitation learning for bimanual robotic manipulation. 2020.

[16] N. Figueroa and A. Billard. Learning Complex Manipulation Tasks from Heterogeneous and Unstructured Demonstrations. In Proceedings of Workshop on Synergies between Learning and Interaction, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.

[17] A. Batinica, B. Nemec, A. Ude, M. Raković, and A. Gams. Compliant movement primitives in a bimanual setting. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages 365–371, 2017. doi:10.1109/HUMANOIDS.2017.8246899.

[18] A. Colomé and C. Torras. Dimensionality Reduction for Dynamic Movement Primitives and Application to Bimanual Manipulation of Clothes. IEEE Transactions on Robotics, 34(3):602–615, 2018. doi:10.1109/TRO.2018.2808924.

[19] A. Colomé and C. Torras.
Reinforcement Learning of Bimanual Robot Skills. Springer Cham, 2020.

[20] G. Franzese, L. de Souza Rosa, T. Verburg, L. Peternel, and J. Kober. Interactive imitation learning of bimanual movement primitives, 2022.

[21] R. Chitnis, S. Tulsiani, S. Gupta, and A. Gupta. Efficient Bimanual Manipulation Using Learned Task Schemas. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2020.

[22] W. Z. Wang, A. Shih, A. Xie, and D. Sadigh. Influencing towards stable multi-agent interactions. In Proceedings of the 5th Conference on Robot Learning (CoRL), 2021.

[23] J. Grannen, P. Sundaresan, B. Thananjeyan, J. Ichnowski, A. Balakrishna, M. Hwang, V. Viswanath, M. Laskey, J. E. Gonzalez, and K. Goldberg. Untangling Dense Knots by Learning Task-Relevant Keypoints. In Conf. on Robot Learning (CoRL), 2020.

[24] Y. Avigal, L. Berscheid, T. Asfour, T. Kröger, and K. Goldberg. SpeedFolding: Learning Efficient Bimanual Folding of Garments. In Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2022.

[25] A. Ganapathi, P. Sundaresan, B. Thananjeyan, A. Balakrishna, D. Seita, J. Grannen, M. Hwang, R. Hoque, J. E. Gonzalez, N. Jamali, et al. Learning Dense Visual Correspondences in Simulation to Smooth and Fold Real Fabrics. Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2021.

[26] F. Amadio, A. Colomé, and C. Torras. Exploiting Symmetries in Reinforcement Learning of Bimanual Robotic Tasks. IEEE Robotics and Automation Letters, 4(2):1838–1845, 2019. doi:10.1109/LRA.2019.2898330.

[27] X. Yin and Q. Chen. Learning nonlinear dynamical system for movement primitives. In 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 3761–3766, 2014. doi:10.1109/SMC.2014.6974516.

[28] L. Fu, H. Huang, L. Berscheid, H. Li, K. Goldberg, and S. Chitta. Safely Learning Visuo-Tactile Feedback Policies in Real For Industrial Insertion, 2022.

[29] R. Chitnis, S. Tulsiani, S. Gupta, and A. Gupta. Intrinsic Motivation for Encouraging Synergistic Behavior. In International Conference on Learning Representations, 2020.

[30] H. Ha and S. Song. FlingBot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding. In Conf. on Robot Learning (CoRL), 2021.

[31] R. Zöllner, T. Asfour, and R. Dillmann. Programming by Demonstration: Dual-Arm Manipulation Tasks for Humanoid Robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 479–484, 2004.

[32] P. Hsu. Coordinated control of multiple manipulator systems. IEEE Transactions on Robotics and Automation, 9(4):400–410, 1993. doi:10.1109/70.246051.

[33] S. S. Mirrazavi Salehian, N. Figueroa, and A. Billard. Dynamical System-Based Motion Planning for Multi-Arm Systems: Reaching for Moving Objects. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pages 4914–4918, 2017. doi:10.24963/ijcai.2017/693.

[34] P. Lertkultanon and Q.-C. Pham. A certified-complete bimanual manipulation planner. In IEEE Transactions on Automation Science and Engineering, pages 1355–1368, 2018. doi:10.1109/TASE.2018.2791478.

[35] J. Gudiño Lau. Dynamic model and simulation of cooperative robots: A case study. Robotica, 23:615–624, 09 2005. doi:10.1017/S0263574704001213.

[36] N. Xi, T.-J. Tarn, and A. Bejczy. Intelligent planning and control for multirobot coordination: An event-based approach. IEEE Transactions on Robotics and Automation, 12(3):439–452, 1996. doi:10.1109/70.499825.

[37] C. Bersch, B. Pitzer, and S. Kammel.
Bimanual robotic cloth manipulation for laundry folding. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1413–1419, 2011. doi:10.1109/IROS.2011.6095109.

[38] S. H. Lee and M. R. Cutkosky. Fixture Planning With Friction. Journal of Engineering for Industry, 113(3):320–327, 08 1991. ISSN 0022-0817. doi:10.1115/1.2899703. URL https://doi.org/10.1115/1.2899703.

[39] H. Asada and A. By. Kinematic analysis of workpart fixturing for flexible assembly with automatically reconfigurable fixtures. IEEE Journal on Robotics and Automation, 1(2):86–94, 1985. doi:10.1109/JRA.1985.1087007.

[40] L. Shao, T. Migimatsu, and J. Bohg. Learning to Scaffold the Development of Robotic Manipulation Skills. In Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2020.

[41] R. Holladay, T. Lozano-Pérez, and A. Rodriguez. Robust planning for multi-stage forceful manipulation. In Int. Journal of Robotics Research (IJRR), 2022.

[42] L. Chen, L. F. C. Figueredo, and M. Dogar. Manipulation Planning under Changing External Forces. In Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2018.

[43] Y. Watanabe, K. Nagahama, K. Yamazaki, K. Okada, and M. Inaba. Cooking behavior with handling general cooking tools based on a system integration for a life-sized humanoid robot. Paladyn, Journal of Behavioral Robotics, 4(2):63–72, 2013. doi:10.2478/pjbr-2013-0013. URL https://doi.org/10.2478/pjbr-2013-0013.

[44] K. Zhang, M. Sharma, M. Veloso, and O. Kroemer. Leveraging Multimodal Haptic Sensory Data for Robust Cutting. In IEEE-RAS International Conference on Humanoid Robots, 2019.

[45] K. Yamazaki, Y. Watanabe, K. Nagahama, K. Okada, and M. Inaba. Recognition and manipulation integration for a daily assistive robot working on kitchen environments. In 2010 IEEE International Conference on Robotics and Biomimetics, pages 196–201, 2010. doi:10.1109/ROBIO.2010.5723326.

[46] K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, European Conference on Computer Vision, pages 630–645. Springer International Publishing, 2016.

[47] A. B. Jung, K. Wada, J. Crall, S. Tanaka, J. Graving, C. Reinders, S. Yadav, J. Banerjee, G. Vecsei, A. Kraft, Z. Rui, J. Borovec, C. Vallentin, S. Zhydenko, K. Pfeiffer, B. Cook, I. Fernández, F.-M. De Rainville, C.-H. Weng, A. Ayala-Acevedo, R. Meudec, M. Laporte, et al. imgaug. https://github.com/aleju/imgaug, 2020. Online; accessed 01-Feb-2020.

[48] SpaceMouse Compact, Dec 2022. URL https://3dconnexion.com/us/product/spacemouse-compact/.

[49] ur_rtde, 2023. URL https://sdurobotics.gitlab.io/ur_rtde/.

[50] D. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 12 2014.

[51] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y. Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation. In Conf.
on Robot Learning (CoRL), 2021.

Stabilize to Act: Learning to Coordinate for Bimanual Manipulation — Supplementary Material

A Training Details

We provide details for training each of the models for BUDS: $f^k_\theta$ and $f^r_\psi$ for the stabilizing policy, and $\pi^a_\phi$ and $f^g$ for the acting policy.

Augmentation           Parameters
LinearContrast         (0.95, 1.05)
Add                    (−10, 10)
GammaContrast          (0.95, 1.05)
GaussianBlur           (0.0, 0.6)
MultiplySaturation     (0.95, 1.05)
AdditiveGaussianNoise  (0, 3.1875)
Scale                  (1.0, 1.2)
Translate Percent      (−0.08, 0.08)
Rotate                 (−15°, 15°)
Shear                  (−8°, 8°)
Cval                   (0, 20)
Mode                   ['constant', 'edge']

Table 3: Image Data Augmentation Parameters: We report the parameters for the data augmentation techniques used to train the stabilizing policy's stabilizing position and restabilizing classifier models in BUDS. All augmentations are from the imgaug Python library [47].

A.1 Stabilizing Policy Training

The keypoint model $f^k_\theta$ is trained with a hand-labelled dataset of 30 pairs of images and human-annotated keypoints. We augment each image 10X with a series of label-preserving transformations from the imgaug library [47], including rotation, blurring, hue and saturation changes, affine transformations, and added Gaussian noise. The detailed parameters for the transformations are listed in Table 3, and we visualize the image augmentations in Fig. 5. The restabilizing classifier $f^r_\psi$ is trained on a dataset of images from 20 demonstration rollouts with 100 images each. Each image is paired with a binary expert annotation of whether or not restabilizing is needed and augmented by 2X with the same image transformations as above.

Figure 4: Experimental Setup: We present our experimental setup, which uses three cameras due to heavy occlusion during manipulation. One camera is mounted overhead, one is on the wrist of the right arm, and one is facing the front of the workspace at an angle.

Both the keypoint model and the restabilizing classifier are trained against a binary cross-entropy loss with an Adam [50] optimizer. The learning rate is 1.0e−4 and the weight decay is 1.0e−4 during the training process. We train these models for 25 epochs on an NVIDIA GeForce GTX 1070 GPU for 1 hour.

A.2 Acting Policy Training

The acting policy starts from a pre-grasped position, which we achieve using an optional grasping keypoint model. The training procedure of the grasping keypoint model $f^g$ is the same as that of the stabilizing keypoint model $f^k_\theta$. After the robotic gripper grasps the object, we collect 20 acting demonstration rollouts, each with between 50 and 200 steps. The variation across the 20 demonstrations comes from the randomization of the initial object position, differences in object shape and dynamics, and variations in grasps. With these demonstrations, we use one set of hyperparameters for all tasks to train a BC-RNN model similar to prior work [51]. We load batches of size 100 with a history length of 20. We learn policies from input images and use a ResNet-18 [46] encoder which is trained end-to-end. These image encodings are of size 64 and are then concatenated with the proprioceptive input $p_t$ to be passed into the recurrent neural network, which uses a hidden size of 1000. We train against the standard imitation learning loss with a learning rate of 1e−4 and a weight decay of 0. We train for 150k epochs on an NVIDIA GeForce GTX 1070 GPU for 16 hrs.
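For concreteness, here is one plausible composition of the Table 3 parameters with imgaug [47]; the grouping of the geometric parameters into a single Affine op and the op ordering are our assumptions, while the parameter values are taken from the table.

```python
import imgaug.augmenters as iaa

def make_augmenter():
    """Label-preserving augmentation pipeline from the Table 3 parameters."""
    return iaa.Sequential([
        iaa.LinearContrast((0.95, 1.05)),
        iaa.Add((-10, 10)),
        iaa.GammaContrast((0.95, 1.05)),
        iaa.GaussianBlur(sigma=(0.0, 0.6)),
        iaa.MultiplySaturation((0.95, 1.05)),
        iaa.AdditiveGaussianNoise(scale=(0, 3.1875)),
        iaa.Affine(scale=(1.0, 1.2),
                   translate_percent=(-0.08, 0.08),
                   rotate=(-15, 15),
                   shear=(-8, 8),
                   cval=(0, 20),
                   mode=["constant", "edge"]),
    ])

# imgaug transforms keypoint annotations together with the images, which is
# what keeps the Gaussian-heatmap targets consistent after augmentation:
# images_aug, keypoints_aug = make_augmenter()(images=images, keypoints=keypoints)
```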
Figure 5: Data Augmentation for Image Datasets: We visualize images from the augmented dataset used to train the stabilizing position model and restabilizing classifier for the marker capping task's stabilizing policy: $f^k_\theta$ and $f^r_\psi$. For $f^k_\theta$, the dataset of expert-labelled image and keypoint annotations is augmented 10X to construct a final dataset of size 300. For $f^r_\psi$, the dataset is augmented 2X for a final size of 4000 image and binary classification pairs.

B Experiment Details

For all tasks, BUDS's acting policy uses a 3D action space. For the three tasks other than Pepper Grinder, this action space represents the change in end-effector position, (∆x, ∆y, ∆z). For the Pepper Grinder task, this action space instead represents the change in end-effector roll, pitch, and yaw, due to safety concerns involving the closed-chain constraint created by using both arms to grasp the pepper grinder tool.

Task            Cameras
Pepper Grinder  Overhead, Side
Jacket Zip      Overhead, Side
Marker Cap      Overhead, Wrist
Cut Vegetable   Wrist, Side

Table 4: Task-Specific Cameras: We report the cameras used for obtaining images as input for the acting policy and restabilizing classifier by task.

All tasks use the overhead camera for the stabilizing keypoint model and grasping model inputs. Depending on the task and the types of occlusion present during manipulation, we use two of the three cameras for the acting policy and the restabilizing classifier, as outlined in Table 4.

We use the optional grasping model $f^g$ for all tasks except the Pepper Grinder task, to account for variations in the initial positions of the jacket, markers, and vegetables. For the Pepper Grinder task, the acting arm instead moves to the point corresponding to the end-effector position of the stabilizing arm, and grasps at a fixed height above the stabilizing arm corresponding to the height of the pepper grinder. The pepper grinder begins pre-grasped in the stabilizing robot hand, but the plate positions are randomly initialized.

In the BC-Stabilizer baseline, the stabilizing policy learned via imitation learning is trained with the same procedure as the acting policy for BUDS, with the exception of using an output of two Gaussian mixtures to cover the 3D (∆x, ∆y, ∆z) action space.
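One way to realize such a two-mode Gaussian-mixture output head is sketched below; the layer sizes and the use of a diagonal covariance are our assumptions, not the authors' exact implementation.

```python
import torch.nn as nn
import torch.distributions as D

class GMMHead(nn.Module):
    """Two-component Gaussian mixture over a 3D (dx, dy, dz) action."""
    def __init__(self, in_dim, n_modes=2, act_dim=3):
        super().__init__()
        self.n_modes, self.act_dim = n_modes, act_dim
        self.logits = nn.Linear(in_dim, n_modes)
        self.means = nn.Linear(in_dim, n_modes * act_dim)
        self.log_stds = nn.Linear(in_dim, n_modes * act_dim)

    def forward(self, h):
        mix = D.Categorical(logits=self.logits(h))
        means = self.means(h).view(-1, self.n_modes, self.act_dim)
        stds = self.log_stds(h).view(-1, self.n_modes, self.act_dim).exp()
        comp = D.Independent(D.Normal(means, stds), 1)
        # .log_prob(action) on the returned distribution gives the BC loss term
        return D.MixtureSameFamily(mix, comp)
```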
9al6taqfTzr

Open-World Object Manipulation using Pre-Trained Vision-Language Models

Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, Chelsea Finn, Karol Hausman
Robotics at Google

Abstract: For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary, e.g. "can you get me the pink stuffed whale?", to their sensory observations and actions. This brings up a notably difficult challenge for robots: while robot learning approaches allow robots to learn many different behaviors from first-hand experience, it is impractical for robots to have first-hand experiences that span all of this semantic information. We would like a robot's policy to be able to perceive and pick up the pink stuffed whale, even if it has never seen any data interacting with a stuffed whale before. Fortunately, static data on the internet has vast semantic information, and this information is captured in pre-trained vision-language models. In this paper, we study whether we can interface robot policies with these pre-trained models, with the aim of allowing robots to complete instructions involving object categories that the robot has never seen first-hand. We develop a simple approach, which we call Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image, and conditions the robot policy on the current image, the instruction, and the extracted object information. In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments. In addition, we show how MOO generalizes to other, non-language-based input modalities to specify the object of interest, such as finger pointing, and how it can be further extended to enable open-world navigation and manipulation. The project's website and evaluation videos can be found at https://robot-moo.github.io/.

Keywords: Object Manipulation, Vision-Language Models

1 Introduction

For a robot to be able to follow instructions from humans, it must cope with the vast variety of language vocabulary, much of which may refer to objects that the robot has never interacted with first-hand. For example, consider the scenario where a robot has never seen or interacted with a plush animal from its own camera, and it is asked, "can you get me the pink stuffed whale?" How can the robot complete the task? While the robot has never interacted with this object category before, the internet and other data sources cover a much wider set of objects and object attributes than the robot has encountered in its own first-hand experience. In this paper, we study whether robots can tap into the rich semantic knowledge captured in such static datasets, in combination with the robot's own experience, to be able to complete manipulation tasks involving novel object categories.

Computer vision models can capture the rich semantic information contained in static datasets. Indeed, composing modules for perception, planning, and control in robotics pipelines is a long-standing approach [1, 2, 3], allowing robots to perform tasks with a wide set of objects [4]. However, these pipelines are notably brittle, since the success of the latter motor control modules relies on precise object localization.
On the other hand, several prior works have trained neural network policies with pre-trained image representations [5, 6, 7, 8] and pre-trained language instruction embeddings [9, 10, 11, 12]. While this form of vanilla pre-training can improve efficiency and generalization, it does not provide a mechanism for robots to ground and manipulate novel semantic concepts, such as unseen object categories referenced in the language instruction. This leads to a crossroads — some approaches can conceivably generalize to many object categories but rely on fragile pipelines; others are less brittle but cannot generalize to new semantic object categories.

*Indicates equal contribution. Please direct correspondence to tedxiao@google.com.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 1: Overview of MOO. We train a language-conditioned policy conditioned on object locations from a frozen VLM. The policy is trained on demonstrations spanning a set of 106 objects using VLM-based object-centric representations, enabling generalization to novel objects and to object locations produced from new input modalities.

To allow robots to generalize to new semantic concepts, we specifically choose to leverage open-vocabulary pre-trained vision-language models (VLMs), rather than models pre-trained on one modality alone. Such models capture the rich information contained in diverse static datasets, while grounding the linguistic concepts into a perceptual representation that can be connected to the robot's observations. Crucially, rather than using the pre-trained model for precise state estimation in its entirety (akin to pipelined approaches), we only use the VLM to identify the relevant objects in the image by coarsely localizing them, while allowing an end-to-end trained policy to use this information along with the original observation to perform the task. More specifically, our system receives a language instruction and uses a VLM to identify the 2D image coordinates of objects in the instruction. Along with the image and the instruction, the 2D coordinates of the objects are fed into our manipulation policy, allowing it to ground the natural language to objects and know which objects to act upon without seeing any demonstrations with those objects. The VLM is frozen throughout all of our training, and the policy is trained with the real VLM detector in the loop to prevent the brittleness that can plague prior pipelined approaches.

The main contribution of this paper is a flexible approach for open-world object manipulation that interfaces policy learning with pre-trained vision-language models. An overview is given in Fig. 1. The pre-trained models are trained on massive static image and language data that far exceeds the robot's own experience. The robot's policy is trained to perform skills using demonstration data covering a more modest yet still physically diverse set of 106 training objects.
The composition of the pre-trained vision-language model and the control policy leads to an overarching language-conditioned policy that can complete commands that refer to novel semantic categories.

We study the performance of our method across 1,472 evaluations on a real robotic manipulator, where we find that our approach is significantly more successful than recent robot learning methods. Beyond verbal object descriptions, we also find that the trained policy can be easily combined with other means of communicating intent, e.g., pointing at an object and inferring the object description using a VLM, showing a generic image of the object of interest, or using a simple GUI. Finally, our experiments further show that our method can be integrated with an open-vocabulary object navigation model called Clip-on-Wheels (CoW) to complete mobile manipulation tasks involving novel objects. Throughout this paper, we refer to our approach as Manipulation of Open-vocabulary Objects (MOO) and to the integrated mobile manipulation system as CoW-MOO.

2 Related Work

Leveraging Pre-Trained Models in Robotic Learning. Using off-the-shelf vision, speech, or language models is a long-standing approach in robotics [13, 14, 10]. Modern pre-trained vision and language models have improved substantially over older models, and have played an increasing role in robotics research. A large body of prior work has trained policies on top of frozen or fine-tuned visual representations [5, 15, 6, 16, 17, 18, 19, 7, 8, 20, 21], while other works have leveraged pre-trained language models [22, 23, 9, 10, 11, 24, 25, 12]. Unlike these prior works, we aim to leverage vision-language models that ground language in visual observations. Our use of vision-language models enables generalization to novel semantic object categories, which cannot be achieved by using vision models or language models individually.

Generalization in Robotic Learning. A number of recent works have studied how robots can complete novel language instructions [26, 22, 23, 9, 10, 11, 27, 28, 24], typically focusing on instructions with novel combinations of words, i.e. compositional generalization, or instructions with novel ways to describe previously-seen objects and behaviors. Our work focuses on how robots can complete instructions with entirely new words that refer to objects that were not seen in the robot's demonstration dataset. Other research has studied how robot behaviors like grasping and pick-and-place can be applied to unseen objects [29, 30, 31, 32, 33, 34, 35, 36, 37], focusing on generalization to visual or physical attributes. Our experimental settings require visual and physical object generalization but also require semantic object generalization. That is, unlike these prior works, the robot must be able to ground a description of a previously-unseen object category.

Vision-Language Models for Robotic Manipulation. The two closest related works to our approach are CLIPort [38] and PerAct [12], which use the CLIP vision-language model as the backbone of their policy. Both of these approaches have demonstrated an impressive level of generalization to unseen semantic categories and attributes. Inspired by these works, we aim to expand them to more general manipulation settings by: i) removing the need for depth cameras or camera calibration, ii) expanding and demonstrating that the hereby introduced representation works with other modalities such as pointing to the object of interest, and iii) moving beyond 2D manipulation tasks, e.g.,
demonstrating the approach on tasks such as reorienting objects upright, as well as mobile manipulation tasks.

Open-World Object Detection in Computer Vision. Historically, object-detection methods have been restricted to a fixed category map covering a limited set of objects [39, 40, 41, 42]. These methods work well for the object categories on which they are trained, but have no knowledge of objects outside their limited vocabulary. Recently, a new wave of object detectors has emerged that aims to go beyond simple closed-vocabulary tasks by replacing the fixed one-hot category prediction with a shared image-language embedding space that can be used to answer open-vocabulary object queries [43, 44, 45, 46]. Typically these methods rely on internet-scale data in the form of pairs of images and associated descriptive text to learn the grounding of natural language to objects. Various methods have been employed to then extract object localization information in the form of bounding boxes and segmentation masks. In our work, we use the OWL-ViT detector due to its strong performance in the wild and its publicly available implementation [43].

3 Manipulation of Open-World Objects (MOO)

The key goal of MOO is to develop a policy that can leverage the visually-grounded semantic information captured by pre-trained vision-language models for generalization to object types not in the policy training set. More specifically, we aim to use the VLM to localize objects described in a given instruction, while allowing the policy to complete the task using both the instruction and the object localization information from the VLM. MOO accomplishes this in two stages. First, the current observation and the words in the instruction corresponding to object(s) are passed to the VLM to localize the objects. Then, the object localization information and the instruction sans object information are passed to the policy, along with the original observation, to predict actions. The key design choice of MOO lies in how to represent the object information encoded in VLMs and how to feed that information to the instruction-conditioned policy. In the remainder of this section, we first overview the set-up, then describe the design of these crucial aspects of the method, and finally provide an overview of the model architecture and the training procedure, as well as practical implementation details that allow us to deploy MOO on real robots.

3.1 Problem Set-Up

Formally, we assume that the robot, with image observations $o \in \mathcal{O}$ and actions $a \in \mathcal{A}$, is provided with a set of expert demonstrations $\mathcal{D}_{robot}$ collected via teleoperation. Each demonstration corresponds to a sequence of observation-action pairs $\{(o_j, a_j)\}_{j=1}^{T}$ collected over a time horizon $T$, and is annotated with a structured language instruction $\ell$ for the task being performed in the demonstration. To help facilitate object generalization, we assume that these language instructions are structured as a combination of a template and a list of object descriptions within that template. For example, for the instruction $\ell$ = "move yellow banana near cup," the template is "move X near Y," and the object descriptions are X = "yellow banana" and Y = "cup." Inspired by RT-1 [24], in this work we focus on five different types of skills defining the templates: "pick X," "move X near Y," "knock X over," "place X upright," and "place X into Y."
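To illustrate this structured-instruction assumption, here is a small parser sketch covering the five templates above; the regex-based implementation is our own illustration, not the authors' code.

```python
import re

TEMPLATES = [
    (r"pick (?P<X>.+)", "pick"),
    (r"move (?P<X>.+) near (?P<Y>.+)", "move near"),
    (r"knock (?P<X>.+) over", "knock over"),
    (r"place (?P<X>.+) upright", "place upright"),
    (r"place (?P<X>.+) into (?P<Y>.+)", "place into"),
]

def parse_instruction(instruction):
    """Split an instruction into its verb template and object descriptions."""
    text = instruction.strip().rstrip(".")
    for pattern, verb in TEMPLATES:
        match = re.fullmatch(pattern, text)
        if match:
            return verb, match.groupdict()
    raise ValueError(f"no known template matches: {instruction!r}")

# parse_instruction("move yellow banana near cup")
# -> ("move near", {"X": "yellow banana", "Y": "cup"})
```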
All of the objects seen in the demonstrations are drawn from a set $S_{robot}$, and our objective is to complete new structured language instructions with a seen template but novel objects that are not in $S_{robot}$ and that have novel object descriptions. In aiming to complete this goal, our approach will leverage imitation learning and vision-language models, which we briefly review in the Appendix.

Figure 2: MOO architecture: We extract the object location (represented as the center of the bounding box) on the first frame of an episode. The segmentation mask is concatenated channel-wise to the input image for each frame. We remove the language embedding for everything except the task, so that the object-specific information is only provided through the object instance mask.

3.2 Representing Object Information

To utilize the object knowledge encoded in VLMs, we need to pick a representation that can be easily transferred to the text-conditioned control policy. We start by identifying the instruction template (represented by verb $v$) and object $X$ (or list of objects $X, Y, \ldots$) from the instruction $\ell$. Equipped with an object description $X$, we query a VLM to produce a bounding box of the object of interest with the prompt $q$ = "An image of an X", and use the resulting detection (if any) as conditioning for our policy. To reduce the reliance on the exact segmentation of the object dimensions, we select the single pixel at the center of the predicted bounding box as the object representation. In the case of one object description, we use a single-channel object mask with the value set to 1.0 at the pixel of the object's predicted location. In the case of two object descriptions, we set the pixel value of the first to 1.0 and of the second to 0.5.

This design has two main advantages: first, it is a generic representation that works with objects of any size as long as they are visible, and second, it is compatible with a large selection of vision methods such as bounding boxes or segmentation masks, as these can be easily transformed into a single, object-centric pixel location. We ablate other object representation choices in the experiments. Importantly, this approach can handle object descriptions that are not previously seen in the robot's demonstration data, as long as they are sufficiently represented in the static large-scale training data of the VLM. For any unseen object, we simply include a description in the task command, e.g., "pick stuffed toy whale." Once the object description is translated into a pixel location by the VLM, the robot's policy trained on demonstration data only needs to be capable of interpreting the mask location and how to physically manipulate the novel object's shape, rather than needing to also ground the semantic object description.
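A minimal sketch of the mask construction just described follows; the function name and box format are illustrative, while the 1.0/0.5 pixel values and center-of-box convention are from the paper.

```python
import numpy as np

def object_mask(boxes, height, width):
    """Build MOO's single-channel object mask (Sec. 3.2).

    `boxes` is an ordered list of detector bounding boxes
    (x_min, y_min, x_max, y_max), e.g. from OWL-ViT: the first entry is
    the target object (pixel value 1.0), the optional second entry is the
    second object description (0.5).
    """
    mask = np.zeros((height, width, 1), dtype=np.float32)
    for box, value in zip(boxes, (1.0, 0.5)):
        x_min, y_min, x_max, y_max = box
        u = int(round((x_min + x_max) / 2))   # center column of the box
        v = int(round((y_min + y_max) / 2))   # center row of the box
        mask[v, u, 0] = value
    return mask  # concatenated channel-wise with the RGB observation
```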
3.3 Architecture and Training of MOO

We present the model architecture and information flow of MOO in Fig. 2. As described above, we extract the object descriptions from the language instruction and, together with the initial image, feed them into the VLM to output object locations in the image. This information is then represented as an object mask with dots at the center of the objects of interest. Once we obtain the mask, we append it channel-wise to the current image together with the recent image history, which is passed into the RT-1 policy architecture [24]. We use a language model to encode the previously extracted verb $v$ part of the language instruction in the embedding space of an LLM. The images are processed by an EfficientNet [47] conditioned on the text embedding via FiLM [48]. This is followed by a TokenLearner [49] to compute a small set of tokens, and finally a Transformer [50] to attend over these tokens and produce discretized action tokens. We refer the reader to the RT-1 paper for details regarding the latter part of the architecture. The action space corresponds to the 7-DoF delta end-effector pose of the arm (including x, y, z, roll, pitch, yaw, and gripper opening). The entire policy network is trained end-to-end using the imitation learning objective, and we specify the details of the objective in the Appendix (Equation 1). Importantly, the VLM used to detect the objects is frozen during training, so that it does not overfit to the objects in the robot demonstration data. The policy is trained with this frozen VLM in the loop, so that the policy can learn to be robust to errors made by the VLM given other information in the image.

3.4 Practical Implementation

To detect objects in our robot images, we use the OWL-ViT open-vocabulary object detector [43]. In practice, we find that it is capable of detecting most clearly visible objects without any fine-tuning, given a descriptive natural language phrase. The interface to the detector requires a natural language phrase describing what to detect (e.g., "An image of a small blue elephant toy.") along with an image to run the detection on. The output from the model is a score map indicating which locations are most likely to correspond to the natural language description, along with their associated bounding boxes. We select a universal score threshold to filter detections. To detect our objects, we rely on some prompt engineering using descriptive phrases including the color, size, and shape of objects, though most of our prompts worked well on the first attempt. We share the prompts in the Appendix.

To make the inference time practical on real robots, we extract the object information via the VLM only in the first episode frame. By doing so, most of the heavy computation is executed only once at the beginning, and we can perform real-time control for the entire episode. Since the information is appended to the current observation, we rely on the policy to find the corresponding object in the current image if the object has moved since the first timestep.
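The paper uses OWL-ViT with a universal score threshold; a minimal sketch of such a query with the publicly available Hugging Face implementation is below. The specific checkpoint and threshold value are our assumptions.

```python
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

def detect(image, description, threshold=0.1):
    """Query OWL-ViT with a MOO-style prompt and return the best box.

    `image` is a PIL image; the paper phrases prompts as "An image of an X".
    """
    prompt = f"An image of a {description}"
    inputs = processor(text=[[prompt]], images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=target_sizes)[0]
    if len(results["scores"]) == 0:
        return None
    best = results["scores"].argmax()
    return results["boxes"][best].tolist()  # (x_min, y_min, x_max, y_max)
```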
3.5 Training Data

Figure 3: (Left) RT-1 objects, which account for 70% of the training data covering all skills. (Middle) Diverse training objects that appear only in "pick" demonstrations. (Right) Unseen objects used only for evaluation.

We start with the demonstration data used by RT-1 [24], covering 16 unique objects. Despite the use of the VLM for semantic generalization, we expect that the policy will require more physical object diversity to generalize to novel objects. Therefore, we expand the dataset with additional diverse "pick" data across a set of 90 diverse objects, for a total of 106 objects, as shown in Figure 3. We choose to only expand the set of objects for the picking skill, since it is the fastest skill to execute and therefore allows for the greatest amount of diverse data collection within a limited budget of demonstrator time. Our additional set of 90 diverse objects only appears in "pick" episodes. All other tasks, such as "move near" or "place into", must be learned from the original 16 objects in the RT-1 dataset. Detailed statistics are in Appendix Figure 9.

4 Experiments

Our experiments answer the following questions: 1) Does MOO generalize across objects for different skills, including unseen objects? 2) Does MOO generalize beyond new objects: is MOO robust to distractors, backgrounds, and environments? 3) Can the intermediate representation support non-linguistic modalities to specify the task? 4) Does the object generalization performance scale with (a) the number of training episodes, (b) the number of unique objects in the training episodes, and (c) the size of the model? 5) Can MOO be used for open-world navigation and manipulation?

4.1 Experimental Setup

Seen and unseen objects. The training data is collected with teleoperation in table-top environments across a set of 106 different object types. We evaluate performance on a subset of 49 objects "seen" in training and report the performance as "seen". We hold out 47 objects not present in training and report performance on these as "unseen". These 47 held-out objects are comprised of 22 objects of categories seen in training and 25 objects of unseen categories. These objects are listed in Appendix Table 1. Note that previous works often focus on unseen combinations of previously seen commands and objects (e.g. "pick an apple" even though the training data contains "move an apple into a bowl" and "pick a bowl"); we adopt a stricter definition of unseen objects, where our unseen objects were not seen in the robot's training demonstration data at any point for any task, therefore making our unseen performance a zero-shot object generalization problem. Furthermore, we report results across different environments that introduce novel textures, backgrounds, locations, and additional open-world objects not present in the training data.

Figure 4: Main Results. While baseline methods perform competitively on in-distribution combinations of objects and skills seen during training, they fail to generalize to novel objects. MOO substantially improves generalization to novel objects, especially those in unseen categories and for the "pick" skill.

Evaluation details. We evaluate on a set of tabletop tasks involving manipulating a diverse set of objects. We use mobile manipulators with a 7-degree-of-freedom arm and a two-fingered gripper (Appendix Figure 10). Our experiments evaluate the percentage of successfully completed manipulation commands, which include five skills: "pick", "move near", "knock", "place upright", and "place into", across a set of evaluation episodes (definitions and success criteria follow RT-1 [24] and are described in the Appendix). To study object specificity and robustness, for all evaluation episodes we include between two and four distractor objects in the scene which the robot should not interact with. For each evaluation episode, we randomly scatter the evaluation object(s) and the distractor objects onto the table. There is no consistent placement of the target object relative to the distractors. We repeat this process 21 times and report the performance. We present the experimental setup in Appendix Figure 10.

Baselines. We compare to two prior methods: RT-1 [24] and a modified version of VIMA [25], referred to as "VIMA-like". VIMA-like preserves the cross-attention mechanism, but uses the mask image as the prompt token and the current image as the state token. These modifications are necessary because VIMA uses a Transporter-based action space and is not applicable to our task, i.e., our robot arm moves in 6D and has a gripper that can open and close continuously.
These two baselines correspond to common alternatives where the computer vision data is used as a pre-training mechanism (as in RT-1) or object-centric information is fed to the network through cross-attention (as in VIMA-like).

4.2 Experimental Results

Figure 5: MOO is able to generalize to new objects, textures, and environments with greater success than prior methods. Visualizations are shown in Figure 6.

Generalization to Novel Objects. We investigate the question: does MOO generalize across objects for different manipulation skills, including objects never seen at training time? Experiments are presented in Figure 4, and example trajectories are shown in Appendix Figure 12. Relative to the baselines on the pick tasks, MOO exhibits a substantial improvement on seen object performance as well as on unseen objects, which in both cases reaches 50% improvement. MOO can correctly utilize a VLM to find novel objects and incorporate that information more effectively than the VIMA-like baseline. When comparing the performance on seen objects for the skills other than pick, we observe slightly worse performance than for the pick tasks. This is understandable, since the "seen objects" for the non-pick skills have only been seen during the pick episodes, as shown in Appendix Figure 9. This demonstrates MOO's ability to transfer the learned object generalization across skills, so that objects that have only been picked can now also be used for other skills. In addition, MOO exhibits generalization to unseen objects (i.e., objects unseen in any previous task, including pick) that is at the same level as for unseen objects for the pick skill, and 50% better than the baseline.

Figure 6: To study the robustness of MOO, we evaluate on (a) new environments, (b) challenging texture backgrounds which are visually similar to unseen objects in the scene, and (c) additional open-world objects.

Figure 7: We explore using various input modalities to generate the single-pixel object representations used by MOO. (a) shows the standard mask generation process using OWL-ViT with a text instruction. (b) shows using a VLM to generate a text caption, which is then fed to OWL-ViT. (c) shows an uploaded image to prompt OWL-ViT. (d) shows a user providing a ground-truth mask via a GUI.

Robustness Beyond New Objects. To further test the robustness of MOO, we analyze novel evaluation settings with significantly increased difficulty and visual variation, which are shown in Figure 6. To reduce the number of real robot evaluations, we focus this comparison on the picking skill. The results are presented in Figure 5. Across these challenging evaluation scenes, MOO is significantly more robust compared to VIMA-like [25] and RT-1 [24]. This indicates that the use of VLMs in MOO not only improves generalization to new objects that the robot has not interacted with, but also significantly improves generalization to new backgrounds and environments.

Input Modality Experiments. To answer our third question, we perform a number of qualitative experiments testing different input modalities (detailed descriptions are in the Appendix).
We find that MOO is able to generalize to masks generated from a variety of upstream input modalities, even under scenarios outside the training distribution, including scenes with duplicate objects and clutter. As a first qualitative example, Figure 7(b) illustrates that a VLM such as PaLI [51] can infer what object a human is pointing at, allowing OWL-ViT to generate an accurate mask of the object of interest. Secondly, OWL-ViT can also use visual query features instead of textual features to generate a mask, enabling images of target objects to act as conditioning for MOO, as shown in Figure 7(c). This modality is useful in cases where text-based mask generation fails due to ambiguity in natural language, or when target images are found in other scene contexts. We explore both the setting where target images are sourced from similar scenes and where they come from diverse internet images. Finally, we show that MOO can interpret masks directly provided by humans via a GUI, as shown in Figure 7(d). This is useful in cases where both text-based and image-based mask generation is difficult, such as with duplicate or cluttered objects. MOO is robust to how the upstream input masks were generated, and our preliminary results suggest interesting future avenues in the space of human-robot interaction.

Figure 8: We present CoW-MOO, a system that combines open-vocabulary object navigation by CoW [52] with open-world manipulation by MOO. Full videos are shown on the project's website.

MOO Ablations. We conduct a number of ablations to assess the impact of the size and diversity of our dataset and the scale (in terms of number of parameters) of our model. In Appendix Table 3 we vary both the number of unique objects in the training set (reducing it from 106 to 53 to 16 unique training objects) and the number of total training episodes (reducing it by half, from 59,051 training episodes to 29,525) while keeping all objects in the dataset. We aim to vary these two axes independently to determine the impact of the overall size of the dataset vs. its object diversity on the final results. Interestingly, we find that seen object performance is not affected by reducing object diversity, but generalization to unseen objects is very sensitive to object diversity.

Additionally, we investigate the impact of scaling model size. We train two smaller versions of MOO where we scale down the total number of layers and the layer width by a constant factor. The version of MOO that we use in our main experiments has 111M parameters, which, for the purpose of this ablation, we then reduce by an order of magnitude down to 10.2M and then by 5X again down to 2.37M. Comparing different sizes of the model, we find significant drop-offs in both "seen" (from 98% to 54% and 39%, respectively) and "unseen" object performance (from 79% to 50% and 13%; see Appendix Figure 11 for a graph of the results). We also note that we could not make MOO larger than 111M parameters without increasing the latency on the robot to an unacceptable level, but we expect continued performance gains with bigger models if latency requirements can be relaxed.

Open-World Navigation and Manipulation. Finally, we consider how such a system can be integrated with open-vocabulary object-based navigation. Coincidentally, there is an open-vocabulary object navigation algorithm called Clip on Wheels (CoW) [52]; we implement a variant of CoW and combine it with MOO, which we refer to as CoW-MOO.
Figure 8: We present CoW-MOO, a system that combines open-vocabulary object navigation by CoW [52] with open-world manipulation by MOO (CoW: "find the pepsi"; MOO: "pick up the pepsi"). Full videos are shown on the project's website.

MOO Ablations. We conduct a number of ablations to assess the impact of the size and diversity of our dataset and the scale (in terms of number of parameters) of our model. In Appendix Table 3 we vary both the number of unique objects in the training set (reducing it from 106 to 53 to 16 unique training objects) and the number of total training episodes (reducing it by half, from 59,051 training episodes to 29,525) while keeping all objects in the dataset. We aim to vary these two axes independently to determine the impact of the overall size of the dataset vs. its object diversity on the final results. Interestingly, we find that seen object performance is not affected by reducing object diversity, but generalization to unseen objects is very sensitive to object diversity.

Additionally, we investigate the impact of scaling model size. We train two smaller versions of MOO where we scale down the total number of layers and the layer width by a constant factor. The version of MOO that we use in our main experiments has 111M parameters, which, for the purpose of this ablation, we reduce by an order of magnitude down to 10.2M and then by 5X again down to 2.37M. Comparing different sizes of the model, we find significant drop-offs in both "seen" object performance (from 98% to 54% and 39%, respectively) and "unseen" object performance (from 79% to 50% and 13%; see Appendix Figure 11 for a graph of the results). We also note that we could not make MOO larger than 111M parameters without increasing the latency on the robot to an unacceptable level, but we expect continued performance gains with bigger models if latency requirements can be relaxed.

Open-World Navigation and Manipulation. Finally, we consider how such a system can be integrated with open-vocabulary object-based navigation. Coincidentally, there is an open-vocabulary object navigation algorithm called Clip on Wheels (CoW) [52]; we implement a variant of CoW and combine it with MOO, which we refer to as CoW-MOO. CoW handles open-vocabulary navigation to an object of interest, upon which MOO continues with manipulating the target object. This combination enables truly open-world task execution, where the robot is able to first find an object it has never interacted with, and then successfully manipulate it to accomplish the task. We show example qualitative experiments in Figure 8 and in the video of this system on the project's website.2

5 Conclusion and Limitations

In this paper we presented MOO, an approach for leveraging the rich semantic knowledge captured by vision-language models in robotic manipulation policies. We conduct 1,472 real-world evaluations to show that MOO allows robots to generalize to novel instructions involving novel objects, enables greater robustness to visually challenging table textures and new environments, is amenable to multiple input modalities, and can be combined with open-vocabulary semantic navigation.

Despite the promising results, MOO has multiple important limitations. First, the object mask representation used by MOO may struggle in visually ambiguous cases, such as where objects are overlapping or occluded. Second, we expect the generalization of the policy to still be limited by the motion diversity of the training data. For example, we expect that the robot may struggle to grasp novel objects with drastically different shapes or sizes than those seen in the training demonstration data, even with successful object localization. Third, instructions are currently expected to conform to a set of templates from which target objects and verbs can be easily separated. We expect this limitation could be lifted by leveraging an LLM to extract relevant properties from freeform instructions. Finally, MOO cannot currently handle complex object descriptions involving spatial relations, such as "the small object to the left of the plate." Fortunately, we expect performance on tasks such as these to improve significantly as vision-language models continue to advance.

2https://robot-moo.github.io/

References

[1] N. Nilsson. Shakey the robot. SRI International, Technical Note, 323, 1984.
[2] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, et al. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics, 23(9):661–692, 2006.
[3] P. Karkus, X. Ma, D. Hsu, L. P. Kaelbling, W. S. Lee, and T. Lozano-Pérez. Differentiable algorithm networks for composable robot learning. Robotics: Science and Systems (RSS), 2019.
[4] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-Pérez, and C. R. Garrett. Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances. In International Conference on Robotics and Automation (ICRA), pages 1940–1946. IEEE, 2022.
[5] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
[6] A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4238–4245. IEEE, 2018.
[7] S. Parisi, A. Rajeswaran, S. Purushwalkam, and A. Gupta. The unsurprising effectiveness of pre-trained vision models for control. International Conference on Machine Learning (ICML), 2022.
[8] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta.
R3M: A universal visual representation for robot manipulation. Conference on Robot Learning (CoRL), 2022.
[9] S. Nair, E. Mitchell, K. Chen, B. Ichter, S. Savarese, and C. Finn. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning (CoRL), pages 1303–1315, 2021.
[10] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning (CoRL), pages 991–1002, 2021.
[11] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. Conference on Robot Learning (CoRL), 2022.
[12] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. Conference on Robot Learning (CoRL), 2022.
[13] S. Teller, M. R. Walter, M. Antone, A. Correa, R. Davis, L. Fletcher, E. Frazzoli, J. Glass, J. P. How, A. S. Huang, et al. A voice-commandable robotic forklift working alongside humans in minimally-prepared outdoor environments. In 2010 IEEE International Conference on Robotics and Automation, pages 526–533. IEEE, 2010.
[14] Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox. PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes. Robotics: Science and Systems (RSS), 2018.
[15] S. Kumra and C. Kanan. Robotic grasp detection using deep convolutional neural networks. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 769–776. IEEE, 2017.
[16] B. Zhou, P. Krähenbühl, and V. Koltun. Does computer vision matter for action? Science Robotics, 4(30), 2019.
[17] L. Yen-Chen, A. Zeng, S. Song, P. Isola, and T.-Y. Lin. Learning to see before learning to act: Visual pre-training for manipulation. In IEEE International Conference on Robotics and Automation (ICRA), pages 7286–7293. IEEE, 2020.
[18] B. Chen, A. Sax, G. Lewis, I. Armeni, S. Savarese, A. Zamir, J. Malik, and L. Pinto. Robust policies via mid-level visual representations: An experimental study in manipulation and navigation. Conference on Robot Learning (CoRL), 2020.
[19] R. Shah and V. Kumar. RRL: ResNet as representation for reinforcement learning. International Conference on Machine Learning (ICML), 2021.
[20] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robot learning with masked visual pre-training. Conference on Robot Learning (CoRL), 2022.
[21] Y. J. Ma, S. Sodhani, D. Jayaraman, O. Bastani, V. Kumar, and A. Zhang. VIP: Towards universal visual reward and representation via value-implicit pre-training. International Conference on Learning Representations (ICLR), 2023.
[22] F. Hill, S. Mokra, N. Wong, and T. Harley. Human instruction-following with deep reinforcement learning via transfer-learning from text. arXiv preprint arXiv:2005.09382, 2020.
[23] C. Lynch and P. Sermanet. Grounding language in play. Robotics: Science and Systems (RSS), 2021.
[24] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. Robotics: Science and Systems (RSS), 2023.
[25] Y. Jiang, A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan. VIMA: General robot manipulation with multimodal prompts.
International Conference on Machine Learning (ICML), 2023.
[26] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. B. Amor. Language-conditioned imitation learning for robot manipulation tasks. Neural Information Processing Systems (NeurIPS), 2020.
[27] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022.
[28] H. Liu, L. Lee, K. Lee, and P. Abbeel. Instruction-following agents with jointly pre-trained vision-language models. arXiv preprint arXiv:2210.13431, 2022.
[29] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In IEEE International Conference on Robotics and Automation (ICRA), 2016.
[30] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. Robotics: Science and Systems (RSS), 2017.
[31] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5), 2018.
[32] C. Finn and S. Levine. Deep visual foresight for planning robot motion. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
[33] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, et al. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, pages 651–673, 2018.
[34] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. RoboNet: Large-scale multi-robot learning. In Conference on Robot Learning, 2019.
[35] S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto. Visual imitation made easy. In Conference on Robot Learning (CoRL), 2020.
[36] Y. Chebotar, K. Hausman, Y. Lu, T. Xiao, D. Kalashnikov, J. Varley, A. Irpan, B. Eysenbach, R. Julian, C. Finn, et al. Actionable models: Unsupervised offline reinforcement learning of robotic skills. International Conference on Machine Learning (ICML), 2021.
[37] B.-H. Wu, S. Nair, R. Martín-Martín, L. Fei-Fei, and C. Finn. Greedy hierarchical variational autoencoders for large-scale video prediction. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
[38] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, 2022.
[39] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 580–587, 2014.
[40] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[41] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
[42] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 2999–3007, 2017.
[43] M. Minderer, A. A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X.
Zhai, T. Kipf, and N. Houlsby. Simple open-vocabulary object detection with vision transformers. European Conference on Computer Vision (ECCV), 2022.
[44] X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Open-vocabulary object detection via vision and language knowledge distillation. In International Conference on Learning Representations (ICLR), 2022.
[45] A. Kamath, M. Singh, Y. LeCun, G. Synnaeve, I. Misra, and N. Carion. MDETR - modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1780–1790, October 2021.
[46] Y. Zhong, J. Yang, P. Zhang, C. Li, N. Codella, L. H. Li, L. Zhou, X. Dai, L. Yuan, Y. Li, and J. Gao. RegionCLIP: Region-based language-image pretraining. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16793–16803, June 2022.
[47] M. Tan and Q. Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), pages 6105–6114, 2019.
[48] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[49] M. S. Ryoo, A. Piergiovanni, A. Arnab, M. Dehghani, and A. Angelova. TokenLearner: What can 8 learned tokens do for images and videos? Neural Information Processing Systems (NeurIPS), 2021.
[50] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[51] X. Chen, X. Wang, S. Changpinyo, A. Piergiovanni, P. Padlewski, D. Salz, S. Goodman, A. Grycner, B. Mustafa, L. Beyer, et al. PaLI: A jointly-scaled multilingual language-image model. International Conference on Learning Representations (ICLR), 2023.
[52] S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. CoWs on Pasture: Baselines and benchmarks for language-driven zero-shot object navigation. Computer Vision and Pattern Recognition (CVPR), 2023.
[53] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. Advances in Neural Information Processing Systems, 1, 1988.

Appendix

Imitation Learning and RT-1

MOO builds upon a language-conditioned imitation learning setup. The goal of language-conditioned imitation learning is to learn a policy $\pi(a \mid \ell, o)$, where $a$ is a robot action that should be applied given the current observation $o$ and task instruction $\ell$. To learn a language-conditioned policy, we build on top of RT-1 [24], a recent robotics transformer-based model that achieves high levels of performance across a wide variety of manipulation tasks. RT-1 uses behavioral cloning [53], which optimizes $\pi_\theta$ by minimizing the negative log-likelihood of an action $a$ given the image observations seen so far in the trajectory and the language instruction, using a demonstration dataset containing $N$ demonstrations:

$$J(\theta) := \sum_{n=1}^{N} \sum_{t=1}^{T^{(n)}} \log \pi_\theta\!\left(a_t^{(n)} \,\middle|\, \ell^{(n)}, \{o_j^{(n)}\}_{j=1}^{t}\right). \quad (1)$$
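Eq. (1) is the standard token-wise negative log-likelihood used in behavioral cloning. As a reading aid, here is a minimal PyTorch sketch of the objective under the assumption (as in RT-1) that actions are discretized into tokens and the policy emits per-token logits; `policy` and the batch layout are illustrative, not RT-1's actual interface.

```python
import torch.nn.functional as F

def bc_loss(policy, episodes):
    """Negative log-likelihood of demonstrated actions, cf. Eq. (1).

    Each episode provides the instruction embedding l, the observation
    history o_1..o_T, and discretized action tokens a_1..a_T of shape (T, A).
    """
    total = 0.0
    for ep in episodes:
        # The policy conditions on the instruction and on all frames up to
        # step t (e.g., via a causal transformer over the frame sequence).
        logits = policy(ep["instr"], ep["obs"])   # (T, A, vocab)
        total = total + F.cross_entropy(
            logits.flatten(0, 1),                 # (T*A, vocab)
            ep["actions"].flatten(),              # (T*A,)
            reduction="sum",
        )
    return total / len(episodes)  # minimized over theta
```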
Vision-Language Models

In recent years, there has been a growing interest in developing models that can detect objects in images based on natural language queries. These models, known as vision-language models (VLMs), enable detectors to identify a wide range of objects described in natural language. Typically, the text queries are tokenized and embedded in a high-dimensional space by a pre-trained language encoder, and the image is processed by a separate network to extract image features in the same embedding space as the text features. The language and image representations are then combined to predict bounding boxes and segmentation masks. Given a natural language query $q$ and an image observation $o$ on which to run detection, these models aim to produce a set of embeddings for the image, $f_i(o)$, with shape $(\text{height}, \text{width}, \text{feature dim})$, and an embedding of the language query, $f_l(q)$, with shape $(\text{feature dim})$, such that $\text{logits} = f_i(o) \cdot f_l(q)$ gives a logit score map that is maximized at regions in $o$ which correspond to the query $q$. Each image embedding location within $f_i(o)$ is also associated with a predicted bounding box or mask indicating the spatial extent of the corresponding object. In this work, we use the OWL-ViT detector [43], which we discuss further in Sec. 3.4.
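The dot-product scoring above amounts to a single tensor contraction; a schematic sketch (shapes as in the text, names illustrative):

```python
import torch

def score_map(image_feats: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
    """logits = f_i(o) . f_l(q).

    image_feats: (H, W, D) per-location image embeddings f_i(o)
    text_feat:   (D,)      query embedding f_l(q)
    returns:     (H, W)    logit map, maximized where the image matches q
    """
    return torch.einsum("hwd,d->hw", image_feats, text_feat)
```

The box or mask attached to the argmax location of this map is what the detector returns for the query.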
Datasets

We collected a teleoperated demonstration dataset that focuses on increasing object diversity for the skill that is most efficient to collect data for: the picking task. This dataset of 13,239 episodes was collected with a procedure similar to [24], with expert users utilizing Oculus Virtual Reality controllers for teleoperation. Detailed dataset statistics are in Figure 9 and Table 1.

Table 1: List of objects used in training and evaluation. There are 3 types of objects used in evaluation: 49 objects which were seen in training, 22 unseen objects of categories which were seen in training, and 25 unseen objects of unseen categories. (The original presents per-object rows with training/evaluation columns; it is reproduced here as three groups.)

Included in training only: red grapefruit can, coke zero can, pineapple spindrift can, lemon spindrift can, love kombucha, original pepper can, fruit gummies, instant oatmeal pack, brie cheese cup, coffee mixing stick, white sparkling can, diet pepper can, lemon sparkling water can, black pen, orange plastic bottle, blue pen, coffee cup sleeve, regular 7up can, small salmon plate, diet coke can, lemonade plastic bottle, original redbull can, numi tea bag, popcorn chip bag, cereal scoop, blackberry hint water, green cookies bag, watermelon hint water, spoon, coffee cup lid, green pear, coffee cup, iced tea can, ito en green tea, pink lunch box, chocolate caramel candy, small beige plate, large yellow spatula, large hot pink plate, red bowl, green bowl, orange spatula, large blue plate, large baby pink plate, small purple plate, small blue spatula, small green plate, table tennis paddle, green brush, rubiks cube, gray suction toy, toy ball with holes, large tennis ball, gray microfiber cloth, toy boat train, teal and pink toy car, dna chew toy, slinky toy, raspberry baby teether, small purple spatula, milano dark chocolate, badminton shuttlecock, chain link toy, orange cup, head massager, square cheese, boiled egg, blue cup.

Included in training and used for "Seen Object" evaluation (49 objects): chew toy, fish toy, egg separator, blue microfiber cloth, yellow pear, small orange rolling pin, wrist watch, pretzel chip bag, disinfectant wipes, pickle snack, octopus toy, catnip toy, orange, 7up can, apple, coke can, swedish fish bag, large green rolling pin, place green can upright, black sunglasses, blue chip bag, pepsi can, pink shoe, blue plastic bottle, green can, orange can, water bottle, redbull can, green jalapeno chip bag, rxbar chocolate, rxbar blueberry, brown chip bag, green rice chip bag, sponge, chocolate peanut candy, banana, oreo, cheese stick, yellow bowl, large green plate, white coat hanger, green microfiber cloth, small blending bottle, floral shoe, dog rope toy, red cup, fork, disinfectant pump, blue balloon.

Unseen objects used only in evaluation (22 unseen objects of seen categories and 25 unseen objects of unseen categories): bird ornament, red plastic shovel, whisk, baby toy, brown dinosaur toy, pikmi pops confetti toy, white marker holder, white toilet scrub, pink stapler, green dolphin toy, purple eggplant, small green dinosaur toy, green blocks, navy toy gun, gray pouch, small red motorcycle toy, bike pedal, c clamp, burgundy paint brush, transparent hint water bottle, shiny steel coffee grinder holder, transparent plastic cup, shiny steel mug, shiny steel scooper, shiny pink steel bowl, light pink sunglasses, pink marker, cold brew can, ginger lemon kombucha, green cup, green sprite can, large orange plate, pineapple hint water, small blue plate, small hot pink plate, black small duck, small purple bowl, purple toy boat, black chip bag, teal sea salt chip bag, blue sea salt chip bag, sea salt seaweed snack, gray sponge, red velvet snack bar, red bell pepper, blue toy boat train, green tennis ball.

Figure 9: Distribution of training objects across the training dataset. We augmented RT-1 data (on the left) with a large number of diverse pick episodes (in the middle) in order to demonstrate strong generalization to unseen objects (on the right). Blue and yellow bars represent "pick" episodes and red bars represent other tasks like "move near" or "knock." Yellow bars portray the objects randomly selected for "Seen Object" evaluations. Objects for "Unseen Category" and "Unseen Object, Seen Category" evaluations are shown to the right.

Experiments

We show a visualization of our 7-DoF manipulation robot in Figure 10.

Skills. Our experiments evaluate the percentage of successfully completed manipulation commands across five skills: "pick", "move near", "knock", "place upright", and "place into", over a set of evaluation episodes. The definition of the tasks follows RT-1 [24]: For "pick", success is defined as (1) grasping the specified object and (2) lifting the object at least 6 inches from the table top. For "move near", success is defined as (1) grasping the specified object and (2) placing it within 6 inches of the specified target object. For "knock", success is defined as placing the specified object from an "upright" position onto its side. "Place upright" tasks are the inverse of "knock" and involve placing an object from its side into an upright position. Finally, "place into" tasks involve placing one object into another, such as an apple into a bowl.
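For concreteness, the first two criteria could be encoded as simple predicates; this is a hedged sketch rather than the evaluation code used in the paper, with 0.15 m approximating the 6-inch rule and the inputs supplied by the human evaluator:

```python
LIFT_MIN_M = 0.15  # ~6 inches
NEAR_MAX_M = 0.15  # ~6 inches

def pick_success(grasped_target: bool, lift_height_m: float) -> bool:
    # "pick": grasp the specified object, then lift it at least 6 inches.
    return grasped_target and lift_height_m >= LIFT_MIN_M

def move_near_success(grasped_target: bool, dist_to_target_m: float) -> bool:
    # "move near": grasp the object, then place it within 6 inches
    # of the specified target object.
    return grasped_target and dist_to_target_m <= NEAR_MAX_M
```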
Robustness evaluation details. We evaluate the robustness of MOO on a variety of visually challenging scenarios with drastically different furniture and backgrounds, as shown in Figure 6; the results are reported in Figure 5. The first set of these difficult evaluations introduces six evaluations across five additional open-world objects that correspond to various household items that have not been seen at any point during training. The second set of difficult scenes introduces 14 evaluations across two patterned tablecloths; these tablecloth textures are significantly more challenging than the plain gray counter-tops seen in the training demonstration dataset. Finally, the last set of difficult scenes includes 14 evaluations across three new environments in natural kitchen and office spaces that were never present in training. These new scenes simultaneously change the counter-top materials, backgrounds, lighting conditions, and distractor items.

Input modality demonstration details. We explore the ability of MOO to incorporate object-centric mask representations that are generated via different processes than the one used during training. During training, an OWL-ViT generates mask visual representations from textual prompts, as described in Section 3.2. We study whether MOO can successfully accomplish manipulation tasks given (1) a mask generated from a text caption from a generative VLM, (2) a mask generated from an image query instead of a text query, or (3) a mask directly provided by a human via a GUI. For each of these cases, we implement a different procedure for generating the object mask representation, which is then fed to the frozen MOO policy.

Figure 10: Image of our robot hardware and evaluation setting.

Mask Sensitivity Study. We originally noted the qualitative observation that MOO seemed to exhibit robustness to imperfect object-centric mask localization. We therefore ran an additional experiment evaluating a MOO policy with masks that contain artificially added localization noise. We study four cases where we ablate the mask while keeping the starting scene the same: (1) the baseline, where the mask is at the centroid of the object; (2) where the mask is still on the object of interest but not at its centroid; (3) where the mask is off the object entirely but still within roughly 5 cm; and (4) where the mask is more than 5 cm from the object of interest. We add the artificial noise manually, starting with the centroid mask and then perturbing it by hand. We ran a total of 20 trials across 5 tasks involving one seen object (green rolling pin), one unseen object in a seen category (cold brew can), and three unseen object categories (eggplant, shiny sunglasses, transparent bottle). We find that performance degrades as more noise is added: case (1) achieves 5/5 successes, case (2) achieves 4/5 successes, case (3) achieves 3/5 successes, and case (4) achieves 3/5 successes. Qualitatively, we observe that the policy sometimes initially reaches for the inaccurate mask location, and is sometimes able to recover and re-approach the correct target object. In one notable example, the policy faced an off-center mask on the lower part of the clear bottle (a visually challenging object, since no transparent objects were in the training set) and grasped the clear bottle in its upper section, which is the correct strategy for re-orienting the bottle upright. Additionally, a few examples showed that the policy was able to retry after failures caused by inaccurate masks; this suggests that the policy does not just memorize going to the location of the mask, but also pays attention to semantics. We provide the table of quantitative results in Table 2.
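The noise in this study was added by hand; a programmatic equivalent would shift MOO's single-pixel mask by a controlled offset, e.g. (function and offsets are illustrative):

```python
import numpy as np

def perturb_mask(mask: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Shift a single-pixel object mask by (dx, dy) pixels, clipped to bounds."""
    y, x = np.argwhere(mask > 0)[0]
    h, w = mask.shape
    noisy = np.zeros_like(mask)
    noisy[int(np.clip(y + dy, 0, h - 1)), int(np.clip(x + dx, 0, w - 1))] = 1.0
    return noisy

# The "off-object, < 5 cm" condition corresponds to choosing (dx, dy) so the
# shifted pixel leaves the object's segment but stays within the pixel radius
# that ~5 cm subtends at the object's depth.
```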
Training data ablation. We ablate the amount of data used to train MOO and find that both data diversity and data scale are important, as shown in Table 3.

Table 2: We evaluate a MOO policy for 20 trials with masks that contain artificially added localization noise (1 = success, 0 = failure). Notably, we find that MOO is able to recover from misspecified masks which may not be centered on the target object, for both seen objects and novel objects altogether.

Mask ablation                 | Pick: green rolling pin | Pick: eggplant | Pick: shiny glasses | Upright: clear bottle | Knock: cold brew can
baseline centroid mask        | 1 | 1 | 1 | 1 | 1
off-center on-object mask     | 0 | 1 | 1 | 1 | 1
off-object, less than 5 cm    | 1 | 1 | 1 | 0 | 0
off-object, more than 5 cm    | 1 | 0 | 1 | 0 | 1

Table 3: Performance of MOO (percentage of successes) relative to the amount of data used for training. Both data scale and data diversity are important.

Pick objects | Episodes per object | Seen objects | Unseen objects
100%         | 100%                | 98           | 79
50%          | 100%                | 92           | 75
18%          | 100%                | 88           | 19
100%         | 50%                 | 46           | 38
100%         | 10%                 | 23           | 0

Prompts used

We use the following prompts for OWL-ViT to detect our objects. All prompts were prefixed with the phrase "An image of a".

7up can → "white can of soda"; banana → "banana"; black pen → "black pen"; blue chip bag → "blue bag of chips"; blue pen → "blue pen"; brown chip bag → "brown bag of chips"; cereal scoop → "cereal scoop"; chocolate peanut candy → "bag of candy snack"; coffee cup → "coffee cup"; coke can → "red can of soda"; coke zero can → "can of soda"; disinfectant pump → "bottle"; fork → "fork"; green can → "green aluminum can"; green cookies bag → "green snack food bag"; green jalapeno chip bag → "green bag of chips"; green sprite can → "green soda can"; knife → "knife"; orange can → "orange aluminum can"; orange plastic bottle → "orange bottle"; oreo → "cookie snack food bag"; pepsi can → "blue soda can"; popcorn chip bag → "bag of chips"; pretzel chip bag → "bag of chips"; red grapefruit can → "red aluminum can"; redbull can → "skinny silver can of soda"; rxbar blueberry → "small blue rectangular snack food bar"; spoon → "spoon"; swedish fish bag → "bag of candy snack food"; water bottle → "clear plastic waterbottle with white cap"; white sparkling can → "aluminum can"; blue plastic bottle → "clear plastic waterbottle with white cap"; diet pepper can → "can of soda"; disinfectant wipes → "yellow and blue pack"; green rice chip bag → "green bag of chips"; orange → "round orange fruit"; paper bowl → "round bowl"; rxbar chocolate → "small black rectangular snack food bar"; sponge → "scrub sponge"; blackberry hint water → "clear plastic bottle with white cap"; pineapple hint water → "clear plastic bottle with white cap"

Figure 11: Pick success vs. model size. We see continuous improvements on both seen and unseen objects as we increase the number of parameters of our model architecture while keeping the dataset size fixed. In comparison to our main model, we scaled down layer widths and depth by the same constant multiplier. We expect more performance gains at larger model capacity, yet are currently unable to scale further due to real-time inference constraints on our robot.
watermelon hint water → "clear plastic bottle with white cap"; regular 7up can → "can of soda"; lemonade plastic bottle → "clear plastic bottle with white cap"; diet coke can → "silver can of soda"; yellow pear → "yellow pear"; green pear → "green pear"; instant oatmeal pack → "flat brown pack of instant oatmeal"; coffee mixing stick → "small thin flat wooden popsicle stick"; coffee cup lid → "round disposable coffee cup lid"; coffee cup sleeve → "brown disposable coffee cup sleeve"; numi tea bag → "small flat packet of tea"; fruit gummies → "small blue bag of snacks"; chocolate caramel candy → "small navy bag of candy"; original redbull can → "can of energy drink with dark blue label"; cold brew can → "blue and black can"; ginger lemon kombucha → "yellow and tan aluminum can with brown writing"; large orange plate → "circular orange plate"; small blue plate → "circular blue plate"; love kombucha → "white and orange can of soda"; original pepper can → "dark red can of soda"; ito en green tea → "light green can of soda"; iced tea can → "black can of soda"; cheese stick → "yellow cheese stick in wrapper"; brie cheese cup → "small white cheese cup with wrapper"; pineapple spindrift can → "white and cyan can of soda"; lemon spindrift can → "white and brown can of soda"; lemon sparkling water can → "yellow can of soda"; milano dark chocolate → "white pack of snacks"; square cheese → "small orange rectangle packet"; boiled egg → "small white egg in a plastic wrapper"; pickle snack → "small black and green snack bag"

Figure 12: Example images of our policy detecting and grasping objects not seen during training time (tasks shown: move cold brew can near green cup; pick disinfectant wipes; move small blue plate near whisk; pick wrist watch; move bird ornament near whisk; pick small orange rolling pin).
The object detections are colored in correspondence to the text above each image, and the images are ordered left to right across time.

red cup → "plastic red cup"; blue cup → "plastic blue cup"; orange cup → "plastic orange cup"; green cup → "plastic green cup"; head massager → "metal head massager with many wires"; chew toy → "blue and yellow toy with orange polka dots"; wrist watch → "wrist watch"; small orange rolling pin → "small orange rolling pin with wooden handles"; large green rolling pin → "large green rolling pin with wooden handles"; rubiks cube → "rubiks cube"; blue microfiber cloth → "blue cloth"; gray microfiber cloth → "gray cloth"; green microfiber cloth → "green cloth"; small blending bottle → "small turqoise and brown bottle"; large tennis ball → "large tennis ball"; table tennis paddle → "table tennis paddle"; octopus toy → "purple toy octopus"; pink shoe → "pink shoe"; floral shoe → "red and blue shoe"; whisk → "whisk"; orange spatula → "orange spatula"; small blue spatula → "small blue spatula"; large yellow spatula → "large yellow spatula"; egg separator → "large pink cooking spoon"; green brush → "green brush"; small purple spatula → "small purple spatula"; badminton shuttlecock → "shuttlecock"; black sunglasses → "black sunglasses"; toy ball with holes → "toy ball with holes"; red plastic shovel → "red plastic shovel"; bird ornament → "colorful ornament with blue and yellow confetti"; blue balloon → "blue balloon animal"; catnip toy → "small dark blue plastic cross toy"; raspberry baby teether → "red and green baby pacifier"; slinky toy → "gray metallic cylinder slinky"; dna chew toy → "big orange spring"; gray suction toy → "gray suction toy"; teal and pink toy car → "teal and pink toy car"; two pound purple dumbbell → "purple dumbbell"; one pound pink dumbbell → "pink dumbbell"; three pound brown dumbbell → "brown dumbbell"; dog rope toy → "white pink and gray rope with knot"; fish toy → "fish"; chain link toy → "skinny green rectangular toy"; toy boat train → "plastic toy boat"; white coat hanger → "white coat hanger"
b-cto-fetlz | HomeRobot: Open-Vocabulary Mobile Manipulation

Sriram Yenamandra*1, Arun Ramachandran*1, Karmesh Yadav*1,2, Austin Wang1, Mukul Khanna1, Theophile Gervet2,3, Tsung-Yen Yang2, Vidhi Jain3, Alexander William Clegg2, John Turner2, Zsolt Kira1, Manolis Savva4, Angel Chang4, Devendra Singh Chaplot2, Dhruv Batra1,2, Roozbeh Mottaghi2, Yonatan Bisk2,3, Chris Paxton2

1Georgia Tech  2FAIR, Meta AI  3Carnegie Mellon  4Simon Fraser
homerobot-info@googlegroups.com

Abstract: HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks. Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment, and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM. In addition, integration of the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer of the navigation and place skills. Our baselines achieve a 20% success rate in the real world; our experiments identify ways future work can improve performance. See videos on our website: https://ovmm.github.io/.

Keywords: Sim-to-real, benchmarking robot learning, mobile manipulation

1 Introduction

The aspiration to develop household robotic assistants has served as a north star for roboticists since the beginning of the field. The pursuit of this vision has spawned multiple areas of research within robotics, from vision to manipulation, and has led to increasingly complex tasks and benchmarks.

A useful household assistant requires creating a capable mobile manipulator that understands a wide variety of objects, how to interact with the environment, and how to intelligently explore a world with limited sensing. This has separately motivated research in diverse areas like navigation [1, 2], service robotics [3–5], language understanding [6, 7], and task and motion planning [8]. We refer to this guiding problem as Open-Vocabulary Mobile Manipulation (OVMM): a useful robot will be able to find and move arbitrary objects from place to place in an arbitrary home.

Prior work does not tackle mobile manipulation in large, continuous, real-world environments. Instead, it generally simplifies the setting significantly, e.g. by using discrete action spaces, limited object sets, or small, single-room environments that are easily explored. However, recent developments tying language and vision have enabled robots to generalize beyond specific categories [9–13], often through multi-modal models such as CLIP [14].
Further, comparison across methods has remained difficult, and reproduction of results across labs impossible, since many aspects of the settings (environments and robots) have not been standardized. This is especially important now, as a new wave of research projects has begun to show promising results in complex, open-vocabulary navigation [9, 15, 11, 12, 16] and manipulation [17, 10, 18] – again on a wide range of robots and settings, and still limited to single-room environments. Clearly, now is the time when we need a common platform and benchmarks to drive the field forward.

Figure 1: Open-Vocabulary Mobile Manipulation requires agents to search for a previously unseen object at a particular location, and move it to the correct receptacle. (The figure shows simulated and real instances of the task stages: find object on start receptacle, pick object from start receptacle, find goal receptacle, place object on goal receptacle; example commands include "Move toy animal from chair to table" and "Move pitcher from drawer to serving cart".)

In this work, we define Open-Vocabulary Mobile Manipulation as a key task for in-home robotics and provide benchmarks and infrastructure, both in simulation and the real world, to build and evaluate full-stack integrated mobile manipulation systems, in a wide variety of human-centric environments, with open object sets. Our benchmark will further reproducible research in this setting, and the fact that we support arbitrary objects will enable the results to be deployed in a variety of real-world environments.

OVMM: We propose the first reproducible mobile-manipulation benchmark for the real world, with an associated simulation component. In simulation, we use a dataset of 200 human-authored interactive 3D scenes [19] instantiated in the AI Habitat simulator [20, 21] to create a large number of challenging, multi-room OVMM problems with a wide variety of objects curated from a variety of sources. Some of these objects' categories have been seen during training; others have not. In the real world, we create an equivalent benchmark, also with a mix of seen and unseen object categories, in a controlled apartment environment. We use the Hello Robot Stretch [22]: an affordable and compliant platform for household and social robotics that is already in use at over 40 universities and industry research labs. Fig. 1 shows instantiations of our OVMM task in both the real-world benchmark and in simulation. We have a controlled real-world test environment, and plan to run the real-world benchmark yearly to assess progress on this challenging problem. Real-world benchmarking will be run as part of the NeurIPS 2023 HomeRobot OVMM competition [23].

HomeRobot: We also propose HomeRobot,1 a software framework to facilitate extensive benchmarking in both simulated and physical environments. It comprises identical APIs implemented across both settings, enabling researchers to conduct experiments that can be replicated in both simulated and real-world environments. Table 1 compares HomeRobot OVMM to the literature. Notably, HomeRobot provides a robotics stack for the Hello Robot Stretch which supports a range of capabilities in both simulation and the real world, and is not restricted to just the OVMM task. Our library also supports a number of sub-tasks, including manipulation learning [24], continual learning [25], navigation [26], and object-goal navigation [2].

1https://github.com/facebookresearch/home-robot
Table 1: Comparisons of our proposed benchmark with prior work. We provide a large number of environments and unique objects, focusing on manipulable objects, with a continuous action space. Uniquely, we also provide a multi-purpose, real-world robotics stack, with demonstrated sim-to-real capabilities, allowing others to reproduce and deploy their own solutions. Additional nuances in footnote 3. (✓ Partial availability; ✖ Not available; ✔ Capability available)

                                 | Scenes | Object Cats | Object Inst. | Continuous Actions | Sim2Real | Robotics Stack | Licensing | Open Manipulation
Room Rearrangement [28]          | 120    | 118   | 118   | ✖ | ✖ | ✖ | ✔ | ✖
Habitat ObjectNav Challenge [29] | 216    | 6     | 7,599 | ✔ | ✖ | ✖ | ✔ | ✖
TDW-Transport [30]               | 15     | 50    | 112   | ✖ | ✖ | ✖ | ✓ | ✓
VirtualHome [31]                 | 6      | 308   | 1,066 | ✖ | ✖ | ✖ | ✔ | ✓
ALFRED [6]                       | 120    | 84    | 84    | ✖ | ✖ | ✖ | ✔ | ✓
Habitat 2.0 HAB [21]             | 105    | 20    | 20    | ✔ | ✖ | ✖ | ✔ | ✔
ProcTHOR [32]                    | 10,000 | 108   | 1,633 | ✖ | ✖ | ✖ | ✔ | ✔
RoboTHOR [33]                    | 75     | 43    | 731   | ✖ | ✔ | ✖ | ✔ | ✖
Behavior-1K [34]                 | 50     | 1,265 | 5,215 | ✔ | ✔ | ✖ | ✖ | ✓
ManiSkill-2 [35]                 | 1      | 2,000 | 2,000 | ✔ | ✓ | ✖ | ✓ | ✔
OVMM + HomeRobot                 | 200    | 150   | 7,892 | ✔ | ✔ | ✔ | ✔ | ✔

In this paper, we use HomeRobot to compare two families of approaches: a heuristic solution, using a motion planner shown to work for real-world object search [2], and a reinforcement learning (RL) solution, which learns how to navigate to objects given depth and predicted object segmentation. We use the open-vocabulary object detector DETIC [27] to provide object segmentation for both the heuristic and RL policies. We observe that while the RL methods moved to the object more efficiently when an object was visible, the heuristic planner was better at long-horizon exploration. We also see a substantial drop in performance when switching from ground-truth segmentation to DETIC segmentation. This highlights the importance of the HomeRobot OVMM challenge, as only through viewing the problem holistically – integrating perception, planning, and action – can we build general-purpose home assistants.

To summarize, in this paper, we define Open-Vocabulary Mobile Manipulation as a new, crucial task for the robotics community in Sec. 3. We provide a new simulation environment, with multiple, multi-room interactive environments and a wide range of objects. We implement a robotics library called HomeRobot which provides baseline policies implementing this in both the simulation and the real world. We describe a real-world benchmark in a controlled environment, and show how current baselines perform in simulation and in the real world under different conditions. We plan to initially run this real-world benchmark as a NeurIPS 2023 competition [23].

2 Related Work

We discuss work related to challenges and reproducibility of robotics research in more detail here, and continue the discussion of datasets and simulators in Appendix A.

Challenges. There have been several challenges aiming to benchmark robotic systems at different tasks. These challenges provided a great testbed for ranking different systems. However, in most of the challenges (e.g., [36–39, 3]), the participants create their own robotic platform, making a fair comparison of the algorithms difficult. There are also challenges where the organizers provide the robotic platform to the participants (e.g., [40]). However, changing the task during the periodic evaluations made it difficult to track progress over time. Our aim is to have a real-world benchmark using standard hardware that is sustainable at least for a few years.

Reproducibility of robotics research. Standardized robotics benchmarks have been pursued for a long time, often by open-sourcing robot designs or introducing low-cost robots [41–49].
However, the environments in which these robots are used vary dramatically, leading to evaluation of components (e.g., object navigation, SLAM) in isolation, instead of as components of a larger system that may not benefit from those changes. The HomeRobot stack enables end-to-end benchmarking of individual components by providing a full robotics stack, with multiple implementations of different sub-modules. The simplicity helps move beyond standardized sets of objects (e.g., [50–52]) to a common set of robots, objects, and environments. Ours is the only benchmark to provide a broadly capable robotics stack for implementing and sharing robotics code; this is similar to projects like PyRobot [53], which doesn't also provide a strong simulation benchmark.

3ALFRED uses object masks for interaction. ObjectNav uses scans, not full object meshes. ProcTHOR scenes are procedurally generated; this has the benefit that the potential number of environments is unbounded.

Figure 2: A low-cost home robot performing tasks in both a simulated and a real-world environment. We provide both (1) challenging simulated tasks, wherein a mobile manipulator robot must find and grasp multiple seen and unseen objects, and (2) a corresponding real-world robotics stack to allow others to reproduce this research and evaluation to produce useful home robot assistants.

Real World Benchmarks. RoboTHOR [33] provides a common set of scenes and objects for benchmarking navigation. RB2 [54] ranks different manipulation algorithms in a local setting. TOTO [55] takes a step further by providing a training dataset and running the experiments for the users. However, training and testing happen in the same environments and are limited to tabletop manipulation. Finally, the NIST Task Board [56] is a successful challenge for fine-grained manipulation skills [57], also limited to a tabletop context. Kadian et al. [58] propose the Habitat-PyRobot bridge (HaPy) to allow real-world testing on the LoCoBot robot; their framework is limited to navigation, and doesn't provide a generally useful robotics stack with visualizations, debugging, motion planners, tooling, etc.

3 Open-Vocabulary Mobile Manipulation

Formally, our task is set up as instructions of the form: "Move (object) from the (start_receptacle) to the (goal_receptacle)." The object is a small, manipulable household object (e.g., a cup, stuffed toy, or box). By contrast, start_receptacle and goal_receptacle are large pieces of furniture, which have surfaces upon which objects can be placed. The robot is placed in an unknown single-floor home environment, such as an apartment, and must, given the language names of start_receptacle, object, and goal_receptacle, pick up an object that is known to be on a start_receptacle and move it to any valid goal_receptacle. start_receptacle is always available, to help agents know where to look for the object.

The agent is successful if the specified object is indeed moved from a start_receptacle on which it began the episode to any valid goal_receptacle. We give partial credit for each step the robot accomplishes: finding the start_receptacle with the object, picking up the object, finding the goal_receptacle, and placing the object on the goal_receptacle. There can be multiple valid objects that satisfy each query.
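A minimal sketch of how an episode of this task can be specified; the class name and fields are illustrative, not HomeRobot's actual episode schema:

```python
from dataclasses import dataclass

@dataclass
class OVMMEpisode:
    object: str            # e.g. "stuffed toy" (language name only)
    start_receptacle: str  # e.g. "chair"
    goal_receptacle: str   # e.g. "table"

    def instruction(self) -> str:
        # The templated command the agent must execute.
        return (f"Move {self.object} from the {self.start_receptacle} "
                f"to the {self.goal_receptacle}.")
```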
Crucially, we need and develop both (1) a simulation version of the Open-Vocabulary Mobile Manipulation problem, for reproducibility, training, and fast iteration, and (2) a real-robot stack with a corresponding real-world benchmark. We compare the two in Fig. 2. Our simulated environments allow for varied, long-horizon task experimentation; our real-world HomeRobot stack allows for experimenting with real data, and we design a set of real-world tests to evaluate the performance of our learned and heuristic baselines.

The Robot. We use the Hello Robot Stretch [22] with DexWrist as the mobile manipulation platform, because it (1) is relatively affordable at $25,000 USD, (2) offers 6-DoF manipulation, and (3) is human-safe and human-sized, making it safe to test in labs [24, 11] and homes [2], and able to reach most places a human would expect a robot to go. For a breakdown of hardware choices, see Sec. H.1.

Objects. These are split into seen vs. unseen categories and instances. In particular, at test time we look at unseen instances of seen or unseen categories; i.e., no seen manipulable object from training appears during evaluation. Agents must pick and place any requested object.

Receptacles. We include common household receptacles (e.g., tables, chairs, sofas) in our dataset; unlike with manipulable objects, all possible receptacle categories are seen during training.

Scenes. We have both a simulated scene dataset and a fixed set of real-world scenes with specific furniture arrangements and objects. In both simulated and real scenes, we use a mixture of objects from previously seen categories and objects from unseen categories as the goal object for our Open-Vocabulary Mobile Manipulation task. We hold out validation and test scenes, which do not appear in the training data; while some receptacles may re-appear, they will be at previously unseen locations, and target object instances will be unseen.

Scoring. We compute success for each stage: finding the object on the start_receptacle, successfully picking up the object, finding the goal_receptacle, and placing the object on the goal. Overall success is true if all four stages were accomplished. We compute partial success as a tie-breaker, in which agents receive 1 point for each successive stage accomplished, normalized by the number of stages. More details are in Appendix C.
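Our reading of this scoring scheme, as a sketch (names illustrative; exact details are in Appendix C):

```python
def ovmm_score(found_object: bool, picked: bool,
               found_goal: bool, placed: bool):
    """Return (overall_success, partial_success) for one episode.

    One point per successive stage completed in order, normalized by
    the number of stages; overall success requires all four.
    """
    stages = [found_object, picked, found_goal, placed]
    completed = 0
    for done in stages:
        if not done:
            break
        completed += 1
    return all(stages), completed / len(stages)
```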
3.1 Simulation Dataset

Figure 3: HSSD scenes.

The Habitat Synthetic Scenes Dataset (HSSD) [19] consists of 200+ human-authored 3D home scenes containing over 18k 3D models of real-world objects. Like most real houses, these scenes are cluttered with furniture and other objects placed into realistic architectural layouts, making navigation and manipulation similarly difficult to the real world. We used a subset of HSSD [19] consisting of 60 scenes for which additional metadata and simulation structures were authored to support rearrangement.4 For our experiments, these are divided into train, validation, and test splits of 38, 12, and 10 scenes each, following the splits in the original HSSD paper [19].

Objects and Receptacles. We aggregate objects from AI2-Thor [59], Amazon-Berkeley Objects [60], Google Scanned Objects [61], and the HSSD [19] dataset to create a large and diverse dataset of real-world robot problems. In total, we annotated 2,535 objects from 129 total categories. We identified 21 different categories of receptacles which appear in the HSSD dataset [19].

Table 2: Number of objects in simulation for each split of (S)een and (U)nseen (I)nstance and (C)ategory.

      | SC, SI | SC, UI | UC, UI | Total
Cats  | 85     | 64     | 44     | 129
Insts | 1,363  | 748    | 424    | 2,535

We construct our final set of furniture receptacle objects by first automatically labeling stable areas on top of receptacles, then manually refining and processing these in order to remove invalid or inaccessible receptacles. In addition, collision proxy meshes were automatically generated and in many cases manually corrected to support physically accurate procedural placement of object arrangements.

4All 200+ scenes with rearrangement support will be released soon.

Figure 4: HomeRobot is a simple, easy-to-set-up library that works in multiple environments and requires only relatively affordable hardware. Computationally intensive operations are performed on a desktop PC with a GPU, and a dedicated consumer-grade router provides a network interface to a robot running low-level control and SLAM.

Episode Generation. We generate episodes consisting of varying object arrangements and particular values for object, start_receptacle, and goal_receptacle, which allow our agent to successfully move about and interact with the world. In the case of Open-Vocabulary Mobile Manipulation, this task is particularly challenging because we have to place objects in locations that are navigable, meaning that the robot can get to them; reachable, meaning its arm can make it to these locations; and from which we can navigate to a navigable, reachable goal receptacle. For full episode generation details see App. D.2.

Training and Validation Split. Training episodes consist of objects from the large pool of seen instances of seen categories (SC, SI). In contrast, we use unseen instances of seen object categories (SC, UI) and unseen instances of unseen categories (UC, UI) for validation and test episodes. Two-thirds of the categories were randomly designated as seen, and two-thirds of the objects in each seen category were randomly marked as seen instances. Splits are given in Table 2, and the distribution of objects across categories is in App. Fig. 6.

3.2 Real World Benchmark

Real-world experiments are performed in a controlled 3-room apartment environment, with a sofa, kitchen table, counter with bar, and TV stand, among other features. We documented the positioning of various objects and the robot start position, in order to ensure reproducibility across trials. Images of various layouts of the test apartment are included in Fig. 2, and task execution is shown in Fig. 16. During real-world testing, we selected object instances that did not appear in simulation training, split between classes that did and did not appear. We used eight different categories: five seen (Cup, Bowl, Stuffed Toy, Medicine Bottle, and Toy Animal) and three unseen (Rubik's cube, Toy Drill, and Lemon). We performed 20 experiments for each of our two baselines, with seven different receptacle classes: Cabinet, Chair, Couch, Counter, Sink, Stool, Table.

4 The HomeRobot Library

To facilitate research on these challenging problems, we open-source the HomeRobot library, which implements navigation and manipulation capabilities supporting Hello Robot's Stretch [22]. In our setup, it is assumed that users have access to a mobile manipulator and an NVIDIA GPU-powered workstation. The mobile manipulator runs the low-level controller and the localization module, while the desktop runs the high-level perception and planning stack (Fig. 4). The robot and desktop are connected using an off-the-shelf router.5
The key features of our stack include:

Transferability: Unified state and action spaces between simulation and real-world settings for each task, providing an easy way to control a robot with either high-level action spaces (e.g., pre-made grasping policies) or low-level continuous joint control.

Modularity: Perception and action components to support high-level states (e.g., semantic maps, segmented point clouds) and high-level actions (e.g., go to goal position, pick up target object).

Baseline Agents: Policies that use these capabilities to provide basic functionality for OVMM.

5Our experiments used a NetGear Nighthawk router.

4.1 Baseline Agent Implementation

Crucially, we provide baselines and tools that enable researchers to effectively explore the Open-Vocabulary Mobile Manipulation task. We include two types of baselines in HomeRobot: a heuristic baseline, using motion planning [2] and simple rules for manipulation; and a reinforcement learning baseline. In addition, we have implemented example projects from several recently released papers, testing different capabilities such as object-goal navigation [1, 2], skill learning [24], continual learning [25], and image instance navigation [26]. We implement a high-level policy called OVMMAgent which calls a sequence of skills one after the other. These skills are:

FindObj/FindRec: Locate an object on a start_receptacle; or find a goal_receptacle.

Gaze: Move close enough to an object to grasp it, and orient the head to get a good view of the object. The goal of the gaze action is to improve the success rate of grasping.

Pick: Pick up the object. We provide a high-level action for this, since we do not simulate the gripper interaction in Habitat. However, our library is compatible with a range of learned grasping skills and supports learning policies for grasping.

Place: Move to a location in the environment and place the object on top of the goal_receptacle.

Specifically, OVMMAgent is a state machine that calls FindObj, Gaze, Pick, FindRec, and Place in that order, where Pick is a grasping policy provided by the robot library in the real world. The other skills are created using the approaches given below:

Heuristic. We implement a version using only off-the-shelf learned models and heuristics, noting that previous work in mobile manipulation has used these models to great effect (e.g., [62]). Here, DETIC [63] provides masks for an open-vocabulary set of objects as appropriate for each skill. The start_receptacle, object, and goal_receptacle for each episode are given. Fig. 16 shows an example of the heuristic navigation and place policy being executed in the real world (App. E).

RL. We train the four skills in our modified version of Habitat [21] as policies which predict actions given depth, ground-truth semantic segmentation, and proprioceptive sensors (i.e., joints, gripper state), using DDPPO [64]. While RGB is available in our simulation, our baseline policies do not directly utilize it; instead, they rely on predicted segmentation from DETIC [27] at test time.
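Schematically, the high-level policy is just a fixed skill sequence; a minimal sketch follows (the real OVMMAgent in HomeRobot also manages observations, termination conditions, and skill hand-offs, all elided here):

```python
class OVMMAgentSketch:
    """Fixed-order state machine over the five skills described above."""

    SKILL_ORDER = ["find_obj", "gaze", "pick", "find_rec", "place"]

    def __init__(self, skills):
        # skills: dict mapping a skill name to a callable(env, task) -> bool,
        # where each callable may be a heuristic module or a DDPPO policy.
        self.skills = skills

    def run_episode(self, env, task) -> bool:
        for name in self.SKILL_ORDER:
            if not self.skills[name](env, task):
                return False  # a failed skill ends the episode
        return True
```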
5 Results

We first evaluate the two baselines in our simulated benchmark, followed by evaluation in a real-world, held-out test apartment. These results highlight the significance of OVMM as a challenging new benchmark, encompassing numerous essential challenges that arise when deploying robots in real-world environments.

We break down the results by sub-task in addition to reporting the overall performance in Tables 3 and 4. The columns FindObj, Pick, and FindRec refer to the first 3 phases of the task mentioned in the scoring section (Sec. 3); succeeding in the final Place phase yields a successful episode.

Simulation. We evaluate the baselines on held-out scenes, with objects from unseen instances of seen classes and unseen instances of unseen classes, as described in Sec. 3.1. We show results with two different perception systems: Ground Truth segmentation, where we use the segmentation input directly from the simulator, and DETIC segmentation [27], where the RGB images from the simulator are passed through DETIC, an open-vocabulary object detector.

Table 3: Partial and overall success rate (SR) (in %) for different combinations of skills and perception systems. The partial SR for each skill is dependent on the previous skill's SR. The partial SR for the place skill is the same as the overall SR. The partial success metric is calculated by averaging the 4 partial SRs. One of the main causes of failures for our baseline systems was perception; ground-truth perception is notably better. Both RL and heuristic skills struggled to navigate tightly constrained multi-room environments and to successfully place objects.

Perception   | Navigation | Gaze | Place     | FindObj | Pick | FindRec | Overall SR | Partial Success
Ground Truth | Heuristic  | None | Heuristic | 54.1    | 48.5 | 31.5    | 5.1        | 34.8
Ground Truth | Heuristic  | RL   | RL        | 56.5    | 51.5 | 42.3    | 13.2       | 40.9
Ground Truth | RL         | None | Heuristic | 65.4    | 54.8 | 43.7    | 7.3        | 42.8
Ground Truth | RL         | RL   | RL        | 66.6    | 61.1 | 50.9    | 14.8       | 48.3
DETIC [27]   | Heuristic  | None | Heuristic | 28.7    | 15.2 | 5.3     | 0.4        | 12.4
DETIC [27]   | Heuristic  | RL   | RL        | 29.4    | 13.2 | 5.8     | 0.5        | 12.2
DETIC [27]   | RL         | None | Heuristic | 21.9    | 11.5 | 6.0     | 0.6        | 10.0
DETIC [27]   | RL         | RL   | RL        | 21.7    | 10.2 | 6.2     | 0.4        | 9.6

We report results on HomeRobot OVMM in Table 3. RL policies outperformed heuristic methods for both navigation and placement tasks. However, all policies declined in performance when using DETIC instead of ground-truth segmentation. Heuristic policies exhibited less degradation in performance than RL policies: when using DETIC, the heuristic FindObj policy even outperforms RL. We attribute this to the heuristic policy's ability to incorporate noisy predictions by constructing a 2D semantic map, which helps handle small objects that are prone to misclassification. Furthermore, using the learned gaze policy led to improved pick performance, except when used in combination with the heuristic navigation with DETIC perception. Example simulation trajectories can be found in Appendix Figure 18, and comparisons of seen versus unseen categories are in Appendix G.2.

Real World. Finally, we conducted a series of experiments in a real-world, held-out apartment setting. We performed a total of 20 episodes, utilizing a combination of seen and unseen object classes as our target objects. The results of these experiments are presented in Table 4. RL performed slightly better than the heuristic baseline, successfully completing one extra episode and achieving a success rate of 20%. This difference primarily stemmed from the pick and place sub-tasks. In the pick task, the RL Gaze skill plays a crucial role in achieving better alignment between the agent and the target object, which leads to more successful grasping. Similarly, the RL place skill demonstrated more precision, ensuring that the object stayed closer to the surface of the receptacle.

Table 4: Success rate (in %) for heuristic and RL baselines on the real-world OVMM task.

               | FindObj | Pick | FindRec | Overall Success
Heuristic Only | 70      | 35   | 30      | 15
RL Only        | 70      | 45   | 30      | 20
Both simulation and real-world results show that the baselines are promising, but insufficient, for Open-Vocabulary Mobile Manipulation. Detic [27] caused many failures due to misclassification, both in simulation and the real world. Further, RL navigation was on par with or better than heuristic policies in both simulation and the real world. Although our RL place policy performed better in simulation than heuristic place, it needs further improvement in the real world. Gaining the advantages of web-scale pretrained vision-language models like Detic, while adapting them to our agents, may be crucial for improving performance.

6 Conclusions and Future Work

We proposed a combined simulation and real-world benchmark to enable progress on the important problem of Open-Vocabulary Mobile Manipulation. We ran extensive experiments showing promising simulation and real-world results from two baselines: a heuristic baseline based on a state-of-the-art motion planner [2] and a reinforcement learning baseline trained with DD-PPO [64]. In the future, we hope to increase the complexity of the problem space, adding more complex natural language and multi-step commands, and to provide end-to-end baselines instead of modular policies.

7 Acknowledgements

We would like to thank Andrew Szot for help with Habitat policy training, Santhosh Kumar Ramakrishnan for help with Stretch object navigation in simulation and on the real robot, and Eric Undersander for help with improving Habitat rendering. Priyam Parashar, Xiaohan Zhang, and Jay Vakil helped with testing on Stretch and real-world scene setup.

We would also like to thank the whole Hello Robot team, but especially Binit Shah and Blaine Matulevich for their help with the robots, and Aaron Edsinger and Charlie Kemp for helpful discussions. The Georgia Tech effort was supported in part by ONR YIP and ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor.

References

[1] D. Batra, A. Gokaslan, A. Kembhavi, O. Maksymets, R. Mottaghi, M. Savva, A. Toshev, and E. Wijmans. Objectnav revisited: On evaluation of embodied agents navigating to objects. arXiv, 2020.
[2] T. Gervet, S. Chintala, D. Batra, J. Malik, and D. S. Chaplot. Navigating to objects in the real world. arXiv, 2022.
[3] T. Wisspeintner, T. Van Der Zant, L. Iocchi, and S. Schiffer. RoboCup@Home: Scientific competition and benchmarking for domestic service robots. Interaction Studies, 2009.
[4] J. Bohren, R. B. Rusu, E. G. Jones, E. Marder-Eppstein, C. Pantofaru, M. Wise, L. Mösenlechner, W. Meeussen, and S. Holzer. Towards autonomous robotic butlers: Lessons learned with the PR2. In ICRA, 2011.
[5] W. Burgard, A. B. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun. Experiences with an interactive museum tour-guide robot. Artificial Intelligence, 1999.
[6] M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In CVPR, 2020.
[7] X. Puig, K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, and A. Torralba. VirtualHome: Simulating household activities via programs. In CVPR, 2018.
[8] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[9] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler. Open-vocabulary queryable scene representations for real world planning. arXiv, 2022.
[10] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-language models. arXiv, 2023.
[11] N. M. M. Shafiullah, C. Paxton, L. Pinto, S. Chintala, and A. Szlam. CLIP-Fields: Weakly supervised semantic fields for robotic memory. arXiv, 2022.
[12] B. Bolte, A. Wang, J. Yang, M. Mukadam, M. Kalakrishnan, and C. Paxton. USA-Net: Unified semantic and affordance representations for robot memory. arXiv, 2023.
[13] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-Pérez, and C. R. Garrett. Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances. In ICRA, 2022.
[14] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021.
[15] K. M. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi, N. Keetha, A. Tewari, et al. ConceptFusion: Open-set multimodal 3D mapping. arXiv, 2023.
[16] J. Krantz, E. Wijmans, A. Majumdar, D. Batra, and S. Lee. Beyond the nav-graph: Vision and language navigation in continuous environments. In European Conference on Computer Vision (ECCV), 2020.
[17] W. Liu, T. Hermans, S. Chernova, and C. Paxton. StructDiffusion: Object-centric diffusion for semantic rearrangement of novel objects. arXiv, 2022.
[18] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PaLM-E: An embodied multimodal language model. arXiv, 2023.
[19] M. Khanna*, Y. Mao*, H. Jiang, S. Haresh, B. Shacklett, D. Batra, A. Clegg, E. Undersander, A. X. Chang, and M. Savva. Habitat Synthetic Scenes Dataset (HSSD-200): An analysis of 3D scene scale and realism tradeoffs for ObjectGoal navigation. arXiv preprint, 2023.
[20] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, D. Parikh, and D. Batra. Habitat: A platform for embodied AI research. ICCV, 2019.
[21] A. Szot, A. Clegg, E. Undersander, E. Wijmans, Y. Zhao, J. Turner, N. Maestre, M. Mukadam, D. S. Chaplot, O. Maksymets, et al. Habitat 2.0: Training home assistants to rearrange their habitat. In NeurIPS, 2021.
[22] C. C. Kemp, A. Edsinger, H. M. Clever, and B. Matulevich. The design of Stretch: A compact, lightweight mobile manipulator for indoor human environments. In ICRA, 2022.
[23] S. Yenamandra, A. Ramachandran, M. Khanna, K. Yadav, D. S. Chaplot, G. Chhablani, A. Clegg, T. Gervet, V. Jain, R. Partsey, R. Ramrakhya, A. Szot, T.-Y. Yang, A. Edsinger, C. Kemp, B. Shah, Z. Kira, D. Batra, R. Mottaghi, Y. Bisk, and C. Paxton. HomeRobot open vocab mobile manipulation challenge 2023. https://aihabitat.org/challenge/2023_homerobot_ovmm/, 2023.
[24] P. Parashar, J. Vakil, S. Powers, and C. Paxton. Spatial-language attention policies for efficient robot learning. arXiv, 2023.
[25] S. Powers, A. Gupta, and C. Paxton. Evaluating continual learning on a home robot, 2023.
[26] J. Krantz, T. Gervet, K. Yadav, A. Wang, C. Paxton, R. Mottaghi, D. Batra, J. Malik, S. Lee, and D. S. Chaplot. Navigating to objects specified by images. arXiv, 2023.
[27] X. Zhou, R. Girdhar, A. Joulin, P. Krähenbühl, and I. Misra. Detecting twenty-thousand classes using image-level supervision. In ECCV, 2022.
[28] L. Weihs, M. Deitke, A. Kembhavi, and R. Mottaghi. Visual room rearrangement. In CVPR, 2021.
[29] K. Yadav, J. Krantz, R. Ramrakhya, S. K. Ramakrishnan, J. Yang, A. Wang, J. Turner, A. Gokaslan, V.-P. Berges, R. Mottaghi, O. Maksymets, A. X. Chang, M. Savva, A. Clegg, D. S. Chaplot, and D. Batra. Habitat challenge 2023. https://aihabitat.org/challenge/2023/, 2023.
[30] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, et al. ThreeDWorld: A platform for interactive multi-modal physical simulation. NeurIPS Datasets and Benchmarks Track, 2021.
[31] X. Puig, K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, and A. Torralba. VirtualHome: Simulating household activities via programs. In CVPR, 2018.
[32] M. Deitke, E. VanderBilt, A. Herrasti, L. Weihs, J. Salvador, K. Ehsani, W. Han, E. Kolve, A. Farhadi, A. Kembhavi, and R. Mottaghi. ProcTHOR: Large-scale embodied AI using procedural generation. In NeurIPS, 2022.
[33] M. Deitke, W. Han, A. Herrasti, A. Kembhavi, E. Kolve, R. Mottaghi, J. Salvador, D. Schwenk, E. VanderBilt, M. Wallingford, L. Weihs, M. Yatskar, and A. Farhadi. RoboTHOR: An open simulation-to-real embodied AI platform. In CVPR, 2020.
[34] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Martín-Martín, C. Wang, G. Levine, M. Lingelbach, J. Sun, et al. Behavior-1K: A benchmark for embodied AI with 1,000 everyday activities and realistic simulation. In CoRL, 2023.
[35] T. Mu, Z. Ling, F. Xiang, D. C. Yang, X. Li, S. Tao, Z. Huang, Z. Jia, and H. Su. ManiSkill: Generalizable manipulation skill benchmark with large-scale demonstrations. In NeurIPS Datasets and Benchmarks Track, 2021.
[36] E. Krotkov, D. Hackett, L. Jackel, M. Perschbacher, J. Pippine, J. Strauss, G. Pratt, and C. Orlowski. The DARPA Robotics Challenge finals: Results and perspectives. The DARPA Robotics Challenge Finals: Humanoid Robots To The Rescue, 2018.
[37] G. Seetharaman, A. Lakhotia, and E. P. Blasch. Unmanned vehicles come of age: The DARPA grand challenge. Computer, 2006.
[38] M. Buehler, K. Iagnemma, and S. Singh. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Springer Berlin, Heidelberg, 2009.
[39] N. Correll, K. E. Bekris, D. Berenson, O. Brock, A. Causo, K. Hauser, K. Okada, A. Rodriguez, J. M. Romano, and P. R. Wurman. Analysis and observations from the first Amazon Picking Challenge. IEEE Transactions on Automation Science and Engineering, 2016.
[40] L. D. Jackel, E. Krotkov, M. Perschbacher, J. Pippine, and C. Sullivan. The DARPA LAGR program: Goals, challenges, methodology, and phase I results. Journal of Field Robotics, 2006.
[41] M. Müller and V. Koltun. OpenBot: Turning smartphones into robots. In ICRA, 2021.
[42] N. Kau, A. Schultz, N. Ferrante, and P. Slade. Stanford Doggo: An open-source, quasi-direct-drive quadruped. In ICRA, 2019.
[43] F. Grimminger, A. Meduri, M. Khadiv, J. Viereck, M. Wüthrich, M. Naveau, V. Berenz, S. Heim, F. Widmaier, J. Fiene, A. Badri-Spröwitz, and L. Righetti. An open torque-controlled modular robot architecture for legged locomotion research. IEEE Robotics and Automation Letters, 2019.
[44] B. Yang, J. Zhang, V. H. Pong, S. Levine, and D. Jayaraman. REPLAB: A reproducible low-cost arm benchmark platform for robotic learning. arXiv, 2019.
[45] D. V. Gealy, S. McKinley, B. Yi, P. Wu, P. R. Downey, G. Balke, A. Zhao, M. Guo, R. Thomasson, A. Sinclair, P. Cuellar, Z. McCarthy, and P. Abbeel. Quasi-direct drive for low-cost compliant robotic manipulation. In ICRA, 2019.
[46] M. Wüthrich, F. Widmaier, F. Grimminger, S. Joshi, V. Agrawal, B. Hammoud, M. Khadiv, M. Bogdanovic, V. Berenz, J. Viereck, M. Naveau, L. Righetti, B. Schölkopf, and S. Bauer. TriFinger: An open-source robot for learning dexterity. In CoRL, 2020.
[47] M. Ahn, H. Zhu, K. Hartikainen, H. Ponte, A. Gupta, S. Levine, and V. Kumar. ROBEL: RObotics BEnchmarks for Learning with low-cost robots. In CoRL, 2019.
[48] A. Murali, T. Chen, K. V. Alwala, D. Gandhi, L. Pinto, S. Gupta, and A. K. Gupta. PyRobot: An open-source robotics framework for research and benchmarking. arXiv, 2019.
[49] L. Paull, J. Tani, H. Ahn, J. Alonso-Mora, L. Carlone, M. Cáp, Y. F. Chen, C. Choi, J. Dusek, Y. Fang, D. Hoehener, S. Liu, M. M. Novitzky, I. F. Okuyama, J. Pazis, G. Rosman, V. Varricchio, H.-C. Wang, D. S. Yershov, H. Zhao, M. R. Benjamin, C. Carr, M. T. Zuber, S. Karaman, E. Frazzoli, D. D. Vecchio, D. Rus, J. P. How, J. J. Leonard, and A. Censi. Duckietown: An open, inexpensive and flexible platform for autonomy education and research. In ICRA, 2017.
[50] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar. The YCB object and model set: Towards common benchmarks for manipulation research. In ICRA, 2015.
[51] D. Morrison, P. Corke, and J. Leitner. EGAD! An evolved grasping analysis dataset for diversity and reproducibility in robotic manipulation. IEEE Robotics and Automation Letters, 2020.
[52] B. Yang, P. E. Lancaster, S. S. Srinivasa, and J. R. Smith. Benchmarking robot manipulation with the Rubik's cube. IEEE Robotics and Automation Letters, 2020.
[53] A. Murali, T. Chen, K. V. Alwala, D. Gandhi, L. Pinto, S. Gupta, and A. Gupta. PyRobot: An open-source robotics framework for research and benchmarking. arXiv, 2019.
[54] S. Dasari, J. Wang, J. Hong, S. Bahl, Y. Lin, A. S. Wang, A. Thankaraj, K. S. Chahal, B. Çalli, S. Gupta, D. Held, L. Pinto, D. Pathak, V. Kumar, and A. Gupta. RB2: Robotic manipulation benchmarking with a twist. arXiv, 2022.
[55] G. Zhou, V. Dean, M. K. Srirama, A. Rajeswaran, J. Pari, K. B. Hatch, A. Jain, T. Yu, P. Abbeel, L. Pinto, C. Finn, and A. Gupta. Train offline, test online: A real robot learning benchmark. arXiv, 2022.
[56] K. Kimble, K. Van Wyk, J. Falco, E. Messina, Y. Sun, M. Shibata, W. Uemura, and Y. Yokokohji. Benchmarking protocols for evaluating small parts robotic assembly systems. IEEE Robotics and Automation Letters, 2020.
[57] W. Lian, T. Kelch, D. Holz, A. Norton, and S. Schaal. Benchmarking off-the-shelf solutions to robotic assembly tasks. In IROS, 2021.
[58] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra. Are we making real progress in simulated environments? Measuring the sim2real gap in embodied visual navigation. arXiv, 2019.
[59] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. AI2-THOR: An interactive 3D environment for visual AI. arXiv, 2017.
[60] J. Collins, S. Goel, A. Luthra, L. Xu, K. Deng, X. Zhang, T. F. Y. Vicente, H. Arora, T. Dideriksen, M. Guillaumin, et al. ABO: Dataset and benchmarks for real-world 3D object understanding. In CVPR, 2022.
[61] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh, and V. Vanhoucke. Google scanned objects: A high-quality dataset of 3D scanned household items. In ICRA, 2022.
[62] J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser. TidyBot: Personalized robot assistance with large language models. arXiv, 2023.
[63] T. Zhou, R. Tucker, J. Flynn, G. Fyffe, and N. Snavely. Stereo magnification: Learning view synthesis using multiplane images. SIGGRAPH, 2018.
[64] E. Wijmans, A. Kadian, A. Morcos, S. Lee, I. Essa, D. Parikh, M. Savva, and D. Batra. DD-PPO: Learning near-perfect pointgoal navigators from 2.5 billion frames. In ICLR, 2019.
[65] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[66] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019.
[67] Y. Bisk, R. Zellers, R. L. Bras, J. Gao, and Y. Choi. PIQA: Reasoning about physical commonsense in natural language. In AAAI, 2020.
[68] M. Sap, H. Rashkin, D. Chen, R. LeBras, and Y. Choi. SocialIQA: Commonsense reasoning about social interactions. In EMNLP, 2019.
[69] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
[70] K. Sakaguchi, R. Le Bras, C. Bhagavatula, and Y. Choi. WinoGrande: An adversarial Winograd schema challenge at scale. In AAAI, 2019.
[71] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.
[72] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, 2016.
[73] L. Pinto and A. Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In ICRA, 2016.
[74] S. Levine, P. Pastor, A. Krizhevsky, J. Ibarz, and D. Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. IJRR, 2018.
[75] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, K.-H. Lee, S. Levine, Y. Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Sermanet, N. Sievers, C. Tan, A. Toshev, V. Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, M. Yan, and A. Zeng. Do as I can and not as I say: Grounding language in robotic affordances. In CoRL, 2022.
[76] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, S. Savarese, and L. Fei-Fei. RoboTurk: A crowdsourcing platform for robotic skill learning through imitation. In CoRL, 2018.
[77] P. Sharma, L. Mohan, L. Pinto, and A. K. Gupta. Multiple interactions made easy (MIME): Large scale demonstrations data for imitation. In CoRL, 2018.
[78] J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. arXiv, 2017.
[79] S. Dasari, F. Ebert, S. Tian, S. Nair, B. Bucher, K. Schmeckpeper, S. Singh, S. Levine, and C. Finn. RoboNet: Large-scale multi-robot learning. arXiv, 2019.
[80] A. Gupta, A. Murali, D. P. Gandhi, and L. Pinto. Robot learning in homes: Improving generalization and reducing dataset bias. In NeurIPS, 2018.
[81] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. D. Reid, S. Gould, and A. van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2017.
[82] H. Team. Habitat CVPR challenge, 2019. URL https://aihabitat.org/challenge/2019/.
[83] F. Xia, W. B. Shen, C. Li, P. Kasimbeg, M. Tchapmi, A. Toshev, R. Martín-Martín, and S. Savarese. Interactive Gibson benchmark: A benchmark for interactive navigation in cluttered environments. IEEE Robotics and Automation Letters, 2020.
[84] C. Chen, U. Jain, C. Schissler, S. V. A. Gari, Z. Al-Halah, V. K. Ithapu, P. Robinson, and K. Grauman. SoundSpaces: Audio-visual navigation in 3D environments. In ECCV, 2020.
[85] K. Ehsani, W. Han, A. Herrasti, E. VanderBilt, L. Weihs, E. Kolve, A. Kembhavi, and R. Mottaghi. ManipulaTHOR: A framework for visual object manipulation. In CVPR, 2021.
[86] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In CoRL, 2019.
[87] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 2020.
[88] A. Ku, P. Anderson, R. Patel, E. Ie, and J. Baldridge. Room-Across-Room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. In EMNLP, 2020.
[89] A. Padmakumar, J. Thomason, A. Shrivastava, P. Lange, A. Narayan-Chen, S. Gella, R. Piramithu, G. Tur, and D. Hakkani-Tur. TEACh: Task-driven embodied agents that chat. In AAAI, 2022.
[90] X. Gao, Q. Gao, R. Gong, K. Lin, G. Thattai, and G. S. Sukhatme. DialFRED: Dialogue-enabled agents for embodied instruction following. IEEE Robotics and Automation Letters, 2022.
[91] A. Szot, K. Yadav, A. Clegg, V.-P. Berges, A. Gokaslan, A. Chang, M. Savva, Z. Kira, and D. Batra. Habitat rearrangement challenge. https://aihabitat.org/challenge/2022_rearrange, 2022.
[92] C. M. Kim, M. Danielczuk, I. Huang, and K. Goldberg. Simulation of parallel-jaw grasping using incremental potential contact models. In ICRA, 2022.
[93] D. Hall, B. Talbot, and N. Sünderhauf. The robotic vision challenges. https://nikosuenderhauf.github.io/roboticvisionchallenges/cvpr2022, 2022.
[94] A. Kurenkov, R. Martín-Martín, J. Ichnowski, K. Goldberg, and S. Savarese. Semantic and geometric modeling with neural message passing in 3D scene graphs for hierarchical mechanical search. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11227–11233. IEEE, 2021.
[95] K. Rana, J. Haviland, S. Garg, J. Abou-Chakra, I. Reid, and N. Suenderhauf. SayPlan: Grounding large language models using 3D scene graphs for scalable task planning. arXiv preprint arXiv:2307.06135, 2023.
[96] J. Chen, G. Li, S. Kumar, B. Ghanem, and F. Yu. How to not train your dragon: Training-free embodied object goal navigation with semantic frontiers. arXiv preprint arXiv:2305.16925, 2023.
[97] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DOF grasp generation in cluttered scenes. In ICRA, 2021.
[98] A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox. 6-DOF grasping for target-driven object manipulation in clutter. In ICRA, 2020.
[99] H.-S. Fang, C. Wang, M. Gou, and C. Lu. GraspNet-1Billion: A large-scale benchmark for general object grasping. In CVPR, 2020.
[100] C. R. Garrett, C. Paxton, T. Lozano-Pérez, L. P. Kaelbling, and D. Fox. Online replanning in belief space for partially observable task and motion problems. In ICRA, 2020.
[101] A. Mousavian, C. Eppner, and D. Fox. 6-DOF GraspNet: Variational grasp generation for object manipulation. In ICCV, 2019.
[102] C. Paxton, C. Xie, T. Hermans, and D. Fox. Predicting stable configurations for semantic placement of novel objects. In CoRL, 2022.
[103] S. Kohlbrecher, J. Meyer, O. von Stryk, and U. Klingauf. A flexible and scalable SLAM system with full 3D motion estimation. In Proc. IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR). IEEE, November 2011.
[104] J. J. Kuffner and S. M. LaValle. RRT-Connect: An efficient approach to single-query path planning. In ICRA, 2000.
[105] D. S. Chaplot, D. P. Gandhi, A. Gupta, and R. R. Salakhutdinov. Object goal navigation using goal-oriented semantic exploration. In NeurIPS, 2020.
[106] B. Yamauchi. A frontier-based approach for autonomous exploration. In IEEE International Symposium on Computational Intelligence in Robotics and Automation, 1997.
[107] J. A. Sethian. Fast marching methods. SIAM Review, 1999.
[108] D. S. Chaplot, S. Gupta, A. Gupta, and R. Salakhutdinov. Learning to explore using active neural SLAM. ICLR, 2020.
[109] M. Caron, H. Touvron, I. Misra, H. Jégou, J. Mairal, P. Bojanowski, and A. Joulin. Emerging properties in self-supervised vision transformers. CVPR, 2021.
[110] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
[111] K. M. Jatavallabhula, S. Saryazdi, G. Iyer, and L. Paull. gradSLAM: Automagically differentiable SLAM. arXiv preprint arXiv:1910.10672, 2019.
[112] Q. Gu, A. Kuwajerwala, S. Morin, K. M. Jatavallabhula, B. Sen, A. Agarwal, C. Rivera, W. Paul, K. Ellis, R. Chellappa, et al. ConceptGraphs: Open-vocabulary 3D scene graphs for perception and planning. arXiv preprint arXiv:2309.16650, 2023.
[113] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, A. Y. Ng, et al. ROS: An open-source robot operating system. In ICRA Workshop on Open Source Software, 2009.

Appendix

Table of Contents
A  Extended Related Work
B  Limitations
C  Metrics
   C.1  Simulation Success Metrics
   C.2  Real World Success Metrics
D  Simulation Details
   D.1  Object Categories Appearing in the Scene Dataset
   D.2  Episode Generation Details
   D.3  Diversity in Receptacle Instances
   D.4  Scene Clutter Complexity
   D.5  Improved scene visuals
   D.6  Action Space Implementation
E  HomeRobot Implementation Details
   E.1  Pose Information
   E.2  Low-Level Control for Navigation
   E.3  Heuristic Grasping Policy
   E.4  Heuristic Placement Policy
   E.5  Navigation Planning
   E.6  Navigation Limitations
F  Reinforcement Learning Baseline
   F.1  Action Space
   F.2  Observation Space
   F.3  Training Setup
   F.4  ConceptFusion
G  Additional Analysis
   G.1  Number of steps taken in each stage by different baselines
   G.2  Performance on Seen vs. Unseen Object Categories
H  Hardware Setup
   H.1  Hardware Choice
   H.2  Robot Setup
   H.3  Visualizing The Robot
   H.4  Using The Stretch: Navigation vs. Position Mode
A Extended Related Work

It is difficult to do justice to the rich embodied AI, natural language, computer vision, machine learning, and robotics communities that have addressed aspects of the work presented here. The following extends some of the discussion from the main manuscript about important advances that the community has made.

Benchmarks have helped the community focus their efforts and fairly compare system performance. For example, the YCB objects [50] allowed for direct comparison of results across manipulators and models. While benchmarks and leaderboards are comparatively rare in robotics [49, 56, 33, 54, 63, 3, 39], they have been hugely influential in machine learning (e.g., ImageNet [65], GLUE [66], various language benchmarks [67–70], COCO [71], and SQuAD [72]). In robotics, competitions such as RoboCup@Home [3], the Amazon Picking Challenge [39], and the NIST task board [56] are prevalent and influential as an alternative, but systems are generally not reproducible across teams.

Datasets. In addition to the environments referenced in Table 1, offline datasets of robot interactions with scenes have been widely used to train models. These datasets are typically obtained using robots alone (e.g., [73, 74]), by teleoperation (e.g., [75, 76]), or by human-robot demonstration (e.g., [77]). Previous work such as [78] aims to collect large-scale datasets, while works such as [79] consider scaling across multiple embodiments. [80] takes a step further by collecting robot data in unstructured environments. Unlike these works, we do not limit our users to a specific dataset. Instead, we provide a simulator with various scenes that can generate large-scale consistent data for training. Also, note that we test the models in unseen environments, while most of the mentioned works use the same environment for training and testing.

Simulation benchmarks. The embodied AI community has provided various benchmarks in simulation platforms for tasks such as navigation [1, 81–84], object manipulation [85, 35, 86, 87], instruction following [6, 88–90], room rearrangement [28, 91], grasping [92], and SLAM [93]. While these benchmarks ensure reproducibility and fair comparison of different methods, there is always a gap between simulation and reality, since it is infeasible to model all details of the real world in simulation. Our benchmark, in contrast, enables fair comparison of different methods and reproducibility of the results in the real world. Additionally, previous benchmarks often operate in a simplified discrete action space [20, 6], even forcing that structure on the real world [2].
Robotics benchmarking. Robotics benchmarks must contend with the diversity of hardware, morphology, and resources across labs. One solution is simulation [87, 59, 35, 20, 21, 83, 86, 6], which can provide reproducible and fair evaluations. However, the sim-to-real gap means simulation results may not be indicative of progress in the real world [2]. Another proposed solution is robotic competitions such as RoboCup@Home [3], the Amazon Picking Challenge [39], and the NIST task board [56]. However, participants typically use their own hardware, making it difficult to conduct fair comparisons of the different underlying methods and meaning results are not transferable to different labs or settings. This is also a large barrier to entry for these competitions.

Exploration of unseen environments. Various papers have looked at object search in different home environments. Gervet et al. [2] use a mixture of heuristic/model-based planning and reinforcement learning to achieve strong results in a variety of real-world environments; importantly, their planning-based methods perform competitively with the best learned methods, and much better than end-to-end reinforcement learning on real tasks. Also promising is graph-based exploration. Kurenkov et al. [94] propose hierarchical mechanical search, based on a 3D scene graph representation, for exploring environments; although, unlike in our Open-Vocabulary Mobile Manipulation task, they assume such a scene graph exists. Similarly, SayPlan [95] performs search over a complex scene graph using a large language model; however, this approach also does not look into iteratively constructing the scene graph in new scenes. While there is active work on iteratively exploring and building scene graphs and other hierarchical representations (e.g., [96]), there are not yet strongly established methods in this space.

B Limitations

Due to simulation limitations, we don't physically simulate grasping in the current version of the benchmark, which is why we provide a separate policy for this in the real world. Grasping is a well-studied problem [97–99], but simulations that train useful real-world grasp systems require special consideration. We also currently consider full natural language queries out of scope. Finally, we did not evaluate many motion planners (see Sec. E.2), nor did we perform task-and-motion planning with replanning, as would be ideal for a long-horizon task [100].

C Metrics

We informally defined our scoring metrics in Sec. 3. Here, we provide formal definitions of our partial success metrics.

C.1 Simulation Success Metrics

Success in simulation is defined per stage as:
• FindObj: Successful if the agent reaches within 0.1 m of a viewpoint of the target object on the start_receptacle, and at least 0.1% of the pixels in its camera frame belong to an object instance.
• Pick: Successful if FindObj succeeded, the agent enables the gripper at an instant where an object instance is visible, and its end-effector reaches within 0.8 m of a target object. We magically snap the object to the agent's gripper in simulation.
• FindRec: Successful if Pick succeeded, the agent reaches within 0.1 m of a viewpoint of a goal_receptacle, and at least 0.1% of the pixels in its camera frame belong to the object containing a valid receptacle.
• Place: Successful if FindRec succeeded, the agent releases the object, and the object subsequently stays in contact with the goal_receptacle with linear and angular velocities below thresholds of 5e-3 m/s and 5e-2 rad/s, respectively, for 50 contiguous steps. Further, the agent should not collide with the scene while attempting to place the object.

An episode is considered to have succeeded if it succeeds in all 4 stages within 1250 steps.
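To make the Place criterion concrete, the sketch below checks the contact-and-stability condition; it is an illustrative re-implementation of the rule above, with the per-step state tuples as hypothetical inputs, not the benchmark's actual evaluation code.

```python
# Illustrative check of the Place success rule: the object must stay in
# contact with the goal receptacle with near-zero velocities for 50
# contiguous steps. Thresholds mirror the text above.
LIN_THRESH = 5e-3    # m/s
ANG_THRESH = 5e-2    # rad/s
STABLE_STEPS = 50

def place_succeeded(states):
    """states: iterable of (in_contact, lin_vel, ang_vel) per step after release."""
    streak = 0
    for in_contact, lin_vel, ang_vel in states:
        if in_contact and lin_vel < LIN_THRESH and ang_vel < ANG_THRESH:
            streak += 1
            if streak >= STABLE_STEPS:
                return True
        else:
            streak = 0  # stability must be contiguous
    return False
```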
Better pick success condition. We plan to use a more realistic grasping condition in simulation. We experimented with replacing the magic snap with a stricter condition that requires the agent to move its arm near the object without colliding with the scene or other objects. Additionally, we tested a baseline (Figure 5) that performs top-down grasps resembling our real-world grasping policy, resorting to sideways grasps when the object is farther away. While this baseline succeeds in reaching the object starting from an object viewpoint 79% of the time, it does so without colliding only 47% of the time.

C.2 Real World Success Metrics

Success in the real world is defined per stage as:
• FindObj: Successful if the agent reaches within 1 m of the target object on the start_receptacle and the object is visible in the RGB image from the camera.
• Pick: Successful if FindObj succeeded and the agent successfully picks up the object from the start_receptacle.
• FindRec: Successful if Pick succeeded, the agent reaches within 1 m of a goal_receptacle, and the goal_receptacle is visible in the RGB image from the camera.
• Place: Successful if FindRec succeeded, the agent places the object on a goal_receptacle, and the object settles stably on the goal_receptacle.

Given that the scene we use in the real world is much smaller than the apartments in the simulation, we allow the agent to act in the environment for 300 timesteps. The episode is considered to have succeeded if it succeeds in all 4 stages.

Figure 5: A few success and failure cases for our simple grasping policy under the new grasp success condition that requires the agent's arm to reach near the object without colliding. The agent resorts to sideways grasps when the object can't be reached via a top-down grasp that bends the gripper. Most grasping failures are caused by collisions with the scene.

D Simulation Details

D.1 Object Categories Appearing in the Scene Dataset

action_figure, android_figure, apple, backpack, baseballbat, basket, basketball, bath_towel, battery_charger, board_game, book, bottle, bowl, box, bread, bundt_pan, butter_dish, c-clamp, cake_pan, can, can_opener, candle, candle_holder, candy_bar, canister, carrying_case, casserole, cellphone, clock, cloth, credit_card, cup, cushion, dish, doll, dumbbell, egg, electric_kettle, electronic_cable, file_sorter, folder, fork, gaming_console, glass, hammer, hand_towel, handbag, hard_drive, hat, helmet, jar, jug, kettle, keychain, knife, ladle, lamp, laptop, laptop_cover, laptop_stand, lettuce, lunch_box, milk_frother_cup, monitor_stand, mouse_pad, multiport_hub, newspaper, pan, pen, pencil_case, phone_stand, picture_frame, pitcher, plant_container, plant_saucer, plate, plunger, pot, potato, ramekin, remote, salt_and_pepper_shaker, scissors, screwdriver, shoe, soap, soap_dish, soap_dispenser, spatula, spectacles, spicemill, sponge, spoon, spray_bottle, squeezer, statue, stuffed_toy, sushi_mat, tape, teapot, tennis_racquet, tissue_box, toiletry, tomato, toy_airplane, toy_animal, toy_bee, toy_cactus, toy_construction_set, toy_fire_truck, toy_food, toy_fruits, toy_lamp, toy_pineapple, toy_rattle, toy_refrigerator, toy_sink, toy_sofa, toy_swing, toy_table, toy_vehicle, tray, utensil_holder_cup, vase, video_game_cartridge, watch, watering_can, wine_bottle
In Fig. 7 we show examples of a selection of these categories from the training and validation/test splits.

Figure 6: Number of objects across different splits, for both seen categories and unseen categories. We divide objects between categories which appear in training data – seen categories – and those that do not – unseen categories. The goal of Open-Vocabulary Mobile Manipulation is to be able to find and manipulate objects specified by language.

Figure 7: Example objects in our object dataset across 6 categories. The cushion, cup, and pan categories are in the train split, and the vase, plate, and plant saucer are in the validation and test sets.

D.2 Episode Generation Details

Figure 8: Visualization of the navigable geometry (top row) and top-down views of example scenes from the Habitat Synthetic Scenes Dataset (HSSD) [19]. We use the computed navigable area to efficiently generate a large number of episodes for the Open-Vocabulary Mobile Manipulation task. Object placement positions are sampled to be near navigable areas of the map, atop one of a large variety of different receptacles, such that the robot can reach them.

When generating episodes, we find the largest indoor navigable area in each scene, and then filter out receptacles from this scene that are too small for object placement. Fig. 8 shows the navigable islands in several of our scenes (top row), and corresponding top-down views of each scene in the bottom row. We then select objects randomly from the appropriate set, as determined by the current split (train, validation, or test), and run physics to ensure that objects are placed in stable locations.

Figure 9: First-person views from different precomputed viewpoints in our episode dataset. These viewpoints are used as goals for training navigation skills, and are also used in the initialization of the placement and gaze/grasping skills. The purple mesh indicates the receptacle surface.

Finally, we generate a set of candidate viewpoints, shown in Fig. 9, which represent navigable locations to which the robot can move for each receptacle. These are used for training specific skills, such as navigation to receptacles. Each viewpoint corresponds to a particular start_receptacle or goal_receptacle, and represents a nearby location from which the robot can see the receptacle, within 1.5 meters of it. Fig. 10 gives examples of where these viewpoints are created.
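To illustrate how such viewpoints can be filtered, the sketch below keeps candidate positions that are navigable and within 1.5 m of the receptacle, mirroring the rejection criteria shown in Fig. 10; the candidate generation and the is_navigable navmesh query are hypothetical stand-ins for the episode generator's internals.

```python
import math

MAX_VIEW_DIST = 1.5  # meters; viewpoints must be close enough to see the receptacle

def filter_viewpoints(candidates, receptacle_xy, is_navigable):
    """Keep candidate (x, y) viewpoints that are navigable and within range.

    candidates: iterable of (x, y) positions sampled around the receptacle.
    is_navigable: callable mapping (x, y) -> bool, e.g., a navmesh query.
    """
    viewpoints = []
    for x, y in candidates:
        dist = math.hypot(x - receptacle_xy[0], y - receptacle_xy[1])
        if dist <= MAX_VIEW_DIST and is_navigable(x, y):
            viewpoints.append((x, y))
    return viewpoints
```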
Navmesh: We precompute navigable scene geometry, as done in [20], for faster collision checks of the agent with the scene. The "mesh" comprising this navigable geometry is referred to as a navmesh.

Figure 10: Viewpoints created for an object during episode generation. The gray area is the navigable region of the scene. The big red dot and the black box are the object's center and bounding box, respectively. The surrounding dots are viewpoint candidates: red dots were rejected because they weren't navigable, and blue dots were rejected because they were too far from the object. The green dots are the final set of viewpoints.

Figure 11: The variation in instances belonging to the "table" category in our dataset.

Number of objects: This is dynamically set per scene to 1.5-2x the total available receptacle area in m². For example, if the total receptacle surface area for a scene is 10 m², then 15-20 objects will be placed. The exact number of objects is randomly selected per episode within this range.

The full set of included receptacles in simulation is: bathtub, bed, bench, cabinet, chair, chest_of_drawers, couch, counter, filing_cabinet, hamper, serving cart, shelves, shoe_rack, sink, stand, stool, table, toilet, trunk, wardrobe, and washer_dryer.

D.3 Diversity in Receptacle Instances

The instances within each receptacle category exhibit substantial variability. Figure 11 shows a few different receptacles from our dataset belonging to the "table" category.

D.4 Scene Clutter Complexity

Our procedural placement of target and distractor objects creates diverse and interesting scenarios that require reasoning over which direction to approach receptacles from, stable placement in clutter, and open-vocabulary object detection under occlusion, which makes the task quite challenging. Figure 12 shows a few examples of clutter surrounding target objects in our scenes.

Figure 12: A few examples of clutter surrounding the target object in our simulation settings.

Figure 13: Here we present the improvements in scene visuals with Horizon-based Ambient Occlusion (HBAO) and expanded Physically-Based Rendering (PBR) material support added to the Habitat renderer. The top row shows images from the default renderer, whereas the bottom row shows the improved renderings.

D.5 Improved scene visuals

We rewrote and expanded the existing Physically-Based Rendering (PBR) shader and added Horizon-based Ambient Occlusion (HBAO) to the Habitat renderer, which led to notable improvements in visual quality that were necessary for using the HSSD [19] dataset.

• Rewrote PBR and Image-Based Lighting (IBL) base calculations.
• Added multi-layer material support covering KHR_materials_clearcoat, KHR_materials_specular, KHR_materials_ior, and KHR_materials_anisotropy for both direct and indirect (IBL) lighting.
• Added tangent frame synthesis if precomputed tangents are not provided.
• Added HDR environment map support for IBL.

We present comparisons between default Habitat visuals and improved renderings in Figure 13. We also benchmark the ObjectNav training speeds of a DDPPO-based RL agent with and without the improved rendering and present the results in Fig. 14.
We see that the improvement in scene lighting and rendering comes at the cost of only a 3% dip in training FPS (decreasing from around 340 to around 330).

Figure 14: Minor drop in FPS with improved scene rendering: Here, we benchmark the training speeds (through FPS numbers) of two ObjectNav training runs with and without the HBAO- and PBR-based improved scene visuals. We observe that the improved rendering leads to a very small drop in FPS, from around 340 to 330 (a 3% drop).

D.6 Action Space Implementation

We look at two different choices of action space for our navigation agents, either making discrete or continuous predictions about where to move next. Our expectation from prior work might be that the discrete action space would be notably easier for agents to work with.

Discrete. Previous benchmarks often operate in a fully discrete action space [20, 6], even in the real world [2]. We implement a set of discrete actions, with fixed in-place rotation left and right, and forward translation steps of 0.25 m.

Continuous. Our continuous action space is implemented as a teleporting agent, where the robot needs to move around by predicting a local waypoint. Our robot's low-level controllers are expected to be able to get the robot to this location, in lieu of simulating full physics for the agent. In simulation, this is implemented as a check against the navmesh: we use the navmesh to determine if the robot would go into collision with any objects if moved towards the new location, and move it to the closest valid location instead.

E HomeRobot Implementation Details

Here, we describe in more detail how we implemented the heuristic policies provided as a baseline to accelerate home assistant robot research. Although there exists a considerable body of prior research on learning specific grasping [101, 98, 99, 97] or placement [102, 17] skills, we found that it was easiest to implement heuristic policies with low CPU/GPU requirements and high interpretability. Other recent works have similarly used heuristic grasping and placement policies to great effect (e.g., TidyBot [62]).

There are three different repositories within the open-source HomeRobot library:
• home_robot: Shared components such as Environment interfaces, controllers, and detection and segmentation modules.
• home_robot_sim: Simulation stack with Environments based on Habitat.
• home_robot_hw: Hardware stack with server processes that run on the robot, a client API that runs on the GPU workstation, and Environments built using the client API.

Most policies are implemented in the core home_robot library. Within HomeRobot, we also divide functionality between Agents and Environments, similar to how many reinforcement learning benchmarks are set up [20].
• Agents contain all of the necessary code to execute policies. We implement agents which use a mixture of heuristic policies and policies learned on our scene dataset via reinforcement learning.
• Environments provide common logic; they provide Observations to the Agent, and a function which allows the Agent to apply its action to the (real or simulated) environment.
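This Agent/Environment split amounts to a simple control loop, sketched below; the class and method names are illustrative stand-ins, not HomeRobot's actual interfaces.

```python
# Minimal sketch of the Agent/Environment division described above: the
# Environment produces Observations and applies actions; the Agent maps
# Observations to actions. All names here are hypothetical.
class EnvironmentSketch:
    def get_observation(self):
        ...  # assemble sensor data (RGB-D, pose, joint states) into an observation

    def apply_action(self, action):
        ...  # forward the action to the simulator or the real robot

    @property
    def episode_over(self):
        return False

def run_episode(agent, env, max_steps=1250):
    for _ in range(max_steps):
        obs = env.get_observation()
        action = agent.act(obs)
        env.apply_action(action)
        if env.episode_over:
            break
```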
E.1 Pose Information

We get the global robot pose from Hector SLAM [103] on the Hello Robot Stretch [22], which is used when creating 2D semantic maps for our model-based navigation policies.

E.2 Low-Level Control for Navigation

The Hello Stretch software provides a native interface for controlling the linear and angular velocities of the differential-drive robot base. While we do expose an interface for users to control these velocities directly, it is desirable to have short-term goal poses as a more intuitive action space for policies, and to make them updatable at any instant to allow for replanning.

Thus, we implemented a velocity controller that produces continuous velocity commands to move the robot to an input goal pose. The controller operates in a heuristic manner: it rotates the robot so that it faces the goal position at all times while moving towards it, and then rotates to reach the goal orientation once the goal position is reached. The velocities that induce these motions are computed with a trapezoidal velocity profile and some conditional checks to prevent overshooting the goal.

Limitations. Our navigation uses the Fast Marching Method-based motion planning from prior work [2], described in Sec. E.5. It assumes the agent is a cylinder, and is therefore much more limited in where it can navigate than, e.g., a sampling-based motion planner like RRT-Connect [104], which can take orientation into account. In addition, our semantic mapping requires a list of classes for use with Detic [27]; instead, it would be good to use a fully open-vocabulary scene representation like CLIP-Fields [11], ConceptFusion [15], or USA-Net [12], which would also improve our motion planning significantly.
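A minimal sketch of a goal-pose controller in this spirit is given below; the gains, limits, and the simple proportional profile (standing in for the trapezoidal profile described above) are illustrative placeholders, not the tuned values used on the robot.

```python
import math

# Illustrative goal-pose velocity controller: face the goal while driving
# toward it, then rotate to the goal heading. Gains/limits are made up.
V_MAX, W_MAX = 0.3, 0.8          # linear (m/s) and angular (rad/s) limits
K_V, K_W = 1.0, 2.0              # proportional gains
POS_TOL, ANG_TOL = 0.05, 0.1     # goal tolerances (m, rad)

def wrap(a):
    return math.atan2(math.sin(a), math.cos(a))

def control(x, y, yaw, gx, gy, gyaw):
    """Return a (v, w) velocity command toward the goal pose (gx, gy, gyaw)."""
    dx, dy = gx - x, gy - y
    dist = math.hypot(dx, dy)
    if dist > POS_TOL:
        heading_err = wrap(math.atan2(dy, dx) - yaw)
        w = max(-W_MAX, min(W_MAX, K_W * heading_err))
        # only drive forward once roughly facing the goal position
        v = max(0.0, min(V_MAX, K_V * dist)) if abs(heading_err) < 0.3 else 0.0
        return v, w
    yaw_err = wrap(gyaw - yaw)
    if abs(yaw_err) > ANG_TOL:
        return 0.0, max(-W_MAX, min(W_MAX, K_W * yaw_err))
    return 0.0, 0.0  # goal pose reached
```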
E.3 Heuristic Grasping Policy

Figure 15: Grasping tests in various lab environments. To provide a strong baseline, we tuned the grasp policy to be highly reliable given the Stretch's viewpoint, on a variety of objects.

Numerous powerful grasp generation models have been proposed in the literature, such as GraspNet-1Billion [99], 6-DOF GraspNet [98], and Contact-GraspNet [97]. However, for transparency, reproducibility, and ease of installation, we implement a simple, heuristic grasping policy, which assumes a parallel gripper performing top-down grasps. Heuristic grasp policies appear throughout robotics research (e.g., in TidyBot [62]). In our case, the heuristic policy voxelizes the point cloud and chooses areas at the top of the object where points exist, surrounded by free space, in order to grasp. Fig. 15 shows the simple grasp policy in action; additional details are presented below. This policy works well on a wide variety of objects, and we saw comparable performance to the state-of-the-art open-source grasping models we tested [97, 99].

The intuition is to identify areas where the gripper fingers can descend unobstructed onto two sides of a physical part of the object, which we do through a simple voxelization scheme. We take the top 10% of points on an object, voxelize at a fixed resolution of 0.5 cm, and choose grasps with free voxels (where fingers can go) on either side of occupied voxels. In practice, this achieved a high success rate on a variety of real objects.

The procedure is as follows:
1. Given a target object point cloud, convert the point cloud into voxels of size 0.5 cm.
2. Select the top 10% of occupied voxels with the highest Z coordinates.
3. Project the selected voxels into a 2D grid.
4. Consider grasps centered around each occupied voxel, and identify three regions: two where the gripper fingers will be and one representing the space between the fingers.
5. Score each grasp based on (1) how occupied the region between the fingers is, and (2) how empty the two surrounding regions are.
6. Smooth the grasp scores to reject outliers (done by multiplying scores with adjacent scores).
7. Output grasps with final scores above some threshold.

We compared this policy to other methods like Contact-GraspNet [97], 6-DOF GraspNet [98, 101], and GraspNet-1Billion [99]. We saw more intermittent failures due to sensor noise using these pretrained methods, even after adapting the grasp offsets to fit the Hello Robot Stretch's gripper geometry. In the end, we leave training better grasp policies to future work.
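The sketch below illustrates steps 3–6 on the projected 2D grid; for simplicity it only scores grasps along one axis with a fixed finger offset, so it should be read as an approximation of the scheme above rather than the exact implementation.

```python
import numpy as np

FINGER_OFFSET = 3  # grid cells from the grasp center to each finger (assumed)

def score_grasps(grid):
    """grid: 2D boolean array from projecting the top voxels (True = occupied)."""
    h, w = grid.shape
    scores = np.zeros((h, w))
    for i in range(h):
        for j in range(FINGER_OFFSET, w - FINGER_OFFSET):
            if not grid[i, j]:
                continue  # grasps are centered on occupied cells of the object
            # fingers must descend into free cells on both sides of the part
            left_free = not grid[i, j - FINGER_OFFSET]
            right_free = not grid[i, j + FINGER_OFFSET]
            scores[i, j] = 1.0 if (left_free and right_free) else 0.0
    # smooth scores by multiplying with neighboring scores to reject outliers
    smoothed = scores.copy()
    smoothed[:, 1:-1] = scores[:, 1:-1] * np.maximum(scores[:, :-2], scores[:, 2:])
    return smoothed
```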
E.4 Heuristic Placement Policy

As with grasping, a number of works on stable placement of objects have been proposed in the literature [102, 17]. To provide a reasonable baseline, we implement a heuristic placement strategy that assumes the end-receptacle is at least partially visible when the skill takes over; it projects the segmentation mask onto the point cloud and chooses a voxel on top of the receptacle. Fig. 16 shows an example of the place policy being executed in the real world.

Figure 16: An example of the robot navigating to a goal_receptacle (sofa) and using the heuristic place policy to put down the object (stuffed animal). Heuristic policies provide an interpretable and easily extended baseline.

Specifically, our heuristic policy is implemented as follows:
1. Detect the end-receptacle in egocentric RGB observations (using Detic [27]), project the predicted image segment to a 3D point cloud using depth, and transform the point cloud to robot base coordinates using camera height and tilt.
2. Estimate the placement point: randomly sample 50 points on the point cloud and choose the one at the center of the biggest (point cloud) slab for placing objects. This is done by scoring each point based on the number of surrounding points in the X/Y plane (Z is up) within a 3 cm height threshold.
3. Rotate the robot to face the placement point, then move the robot forward if it is more than 38.5 cm away (the length of the retracted arm plus the approximate length of the Stretch gripper).
4. Re-estimate the placement point from this new robot position.
5. Accordingly, set the arm's extension and lift values to put the gripper a few cm above the placement position. Then, release the object to land on the receptacle.
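Step 2 can be sketched as follows. The sample count and 3 cm height threshold come from the text above; the X/Y neighborhood radius and the array layout of the point cloud are assumptions made for illustration.

```python
import numpy as np

# Illustrative placement-point estimation: sample points on the receptacle
# point cloud and score each by how many neighbors lie nearby in X/Y within
# a 3 cm height band (a proxy for the center of a large flat slab).
NUM_SAMPLES = 50
XY_RADIUS = 0.10       # meters; assumed neighborhood radius (not in the text)
HEIGHT_THRESH = 0.03   # meters; 3 cm height threshold from the text

def estimate_placement_point(points, rng=None):
    """points: (N, 3) array of receptacle points in robot base frame, Z up."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(points), size=min(NUM_SAMPLES, len(points)), replace=False)
    best, best_score = None, -1
    for p in points[idx]:
        near_xy = np.linalg.norm(points[:, :2] - p[:2], axis=1) < XY_RADIUS
        flat = np.abs(points[:, 2] - p[2]) < HEIGHT_THRESH
        score = int(np.count_nonzero(near_xy & flat))
        if score > best_score:
            best, best_score = p, score
    return best
```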
E.5 Navigation Planning

Our heuristic baseline extends prior work [2], which was shown to work in a wide range of human environments. We tune it for navigating close to other objects and extend it to work in our continuous action space – challenging navigation aspects not present in the original paper. The baseline has three components:

Semantic Mapping Module. The semantic map stores relevant objects, explored regions, and obstacles. To construct the map, we predict semantic categories and segmentation masks of objects from first-person observations. We use Detic [27] for object detection and instance segmentation, backproject the first-person semantic segmentation into a point cloud using the perceived depth, bin it into a 3D semantic voxel map, and finally sum over the height to compute a 2D semantic map.

We keep track of detected objects, obstacles, and explored areas in an explicit metric map of the environment from [105]. Concretely, it is a binary K x M x M matrix, where M x M is the map size and K is the number of map channels. Each cell of this spatial map corresponds to 25 cm² (5 cm x 5 cm) in the physical world. The number of map channels is K = C + 4, where C is the number of semantic object categories, and the remaining 4 channels represent the obstacles, the explored area, and the agent's current and past locations. An entry in the map is one if the cell contains an object of a particular semantic category, an obstacle, or is explored, and zero otherwise. The map is initialized with all zeros at the beginning of an episode, and the agent starts at the center of the map facing east.

Frontier Exploration Policy. We explore the environment with a heuristic frontier-based exploration policy [106]. This heuristic selects as the goal the point closest to the robot in geodesic distance within the boundary between the explored and unexplored regions of the map.

Navigation Planner. Given a long-term goal output by the frontier exploration policy, we use the Fast Marching Method [107], as in [105], to plan a path and the first low-level action along this path deterministically. Although the semantic exploration policy acts at a coarse time scale, the planner acts at a fine time scale: at every step we update the map and replan the path to the long-term goal. The robot attempts to plan to goals if they have been seen; if it cannot get within a certain distance of the goal objects, it will instead plan to a point on the frontier.

Navigating to objects on start_receptacle. Since small objects (e.g., action_figure, apple) can be hard to locate from a distance, we leverage the typically larger start_receptacle goals for finding objects. We make the following changes to the original planning policy [108]:
1. If the object and start_receptacle co-occur in at least one cell of the semantic map, plan to reach the object.
2. If the object is not found but start_receptacle appears in the semantic map after excluding the regions within 1 m of the agent's past locations, plan to reach the start_receptacle.
3. Otherwise, plan to reach the closest frontier.
In step 2, we exclude the regions that the agent has been close to, to prevent it from re-visiting already-visited instances of start_receptacle.

Figure 17: Real-world examples (also see Fig. 2). Our system is able to find held-out objects in an unseen environment and navigate to receptacles in order to place them, with no information about the world other than the relevant classes. However, this performance is highly dependent on perception performance for now; many real-world examples fail due to near-miss collisions.

E.6 Navigation Limitations

We implemented a navigation system that was previously used in extensive real-world experiments [2], but needed to tune it extensively for it to get close enough to objects to grasp and manipulate them. The original version by Gervet et al. [2] was focused on finding very large objects from a limited set of only six classes. Ours supports many more, but as a result, tuning it to both grasp objects and avoid collisions in all cases is difficult.

This is partly because the planner is a discrete planner based on the Fast Marching Method [107], which cannot take orientation into account and relies on a 5 cm discretization of the world. Sampling-based motion planners like RRT-Connect [104], or those used in the task and motion planning literature [100, 8], may offer better solutions. Alternatively, we could explore optimization-based planners specifically designed for open-vocabulary navigation planning, as has recently been proposed [12].

Our navigation policy relies on accurate pose information from Hector SLAM [103], and unfortunately does not handle dynamic obstacles. It also models the robot's location as a cylinder; the Stretch's center of rotation is slightly offset from the center of this cylinder, which is not currently accounted for. Again, sampling-based planners might be better here.

F Reinforcement Learning Baseline

We train four different RL policies: FindObject, FindReceptacle, GazeAtObject, and PlaceObject.

F.1 Action Space

F.1.1 Navigation Skills

FindObject and FindReceptacle are, collectively, navigation skills. For these two skills, we use the discrete action space, as mentioned in Sec. D.6. In our experiments, we found the discrete action space was better at exploration and easier to train.

F.1.2 Manipulation Skills

For our manipulation skills, we use a continuous action space to give the skills fine-grained control. In the real world, low-level controllers have limits on the distance the robot can move in any particular step. Thus, in simulation, we limit our base action space by only allowing forward motions of 10-25 cm, or turns of 5-30 degrees, in a single step. The head tilt, pan, and the gripper's yaw, roll, and pitch can change by at most 0.02-0.1 radians in a single step. The arm's extension and lift can change by at most 2-10 cm in a single step. We learn by teleporting the base and arm to the target locations.

F.2 Observation Space

Policies have access to depth from the robot head camera and semantic segmentation, as well as the robot's pose relative to the starting pose (from SLAM in the real world), the camera pose, and the robot's joint states, including the gripper. An RGB image is available to the agent but is not used during training.

F.3 Training Setup

All skills are trained using a slack reward of -0.005 per step, incentivizing completion of the task in a minimum number of steps. For faster training, we learn our policies using images at a reduced resolution of 160x120 (compared to Stretch's original resolution of 640x480).

F.3.1 Navigation Skills

We train the FindObject and FindReceptacle policies for the agent to reach a candidate object or a candidate target receptacle, respectively. The training procedure is the same for both skills. We pass in the CLIP [14] embedding corresponding to the goal object, as well as segmentation masks corresponding to the detected target objects. The agent is spawned arbitrarily, but at least 3 meters from the target, and must move until it is within 0.1 meters of a goal "viewpoint" where the object is visible.

Input observations: Robot head camera depth; ground-truth semantic segmentation for all receptacle categories (receptacle segmentation); the robot's pose relative to the starting pose; and a joint sensor giving the states of the camera and arm joints. We implement object-level dropout for the semantic segmentation mask, where each object has a probability of 0.5 of being left out of the mask (a sketch of this procedure follows the list below). In addition, the input observation space includes the following:
• Goal specification: For FindObject, we pass in the CLIP embedding of the target object and the start receptacle category. For FindReceptacle, we pass in the goal receptacle category.
• Goal segmentation images: During training, the simulator provides ground-truth goal object segmentation; on the real robot, these are predicted by Detic [27]. For FindObject, we pass in two channels: one showing all instances of candidate objects, and one showing all instances of candidate start receptacles. For FindReceptacle, we pass a single channel showing all instances of candidate goal receptacles. We implement a similar object-level dropout procedure here as we did for the receptacle segmentation.
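The object-level dropout described above can be sketched as follows; the instance-id encoding of the mask is an assumption made for illustration.

```python
import numpy as np

# Illustrative object-level dropout on a segmentation mask: each object
# instance is hidden with probability 0.5, as described above.
def object_dropout(seg, p=0.5, rng=None):
    """seg: 2D integer array of instance ids (0 = background, assumed)."""
    if rng is None:
        rng = np.random.default_rng()
    out = seg.copy()
    for inst in np.unique(seg):
        if inst != 0 and rng.random() < p:
            out[out == inst] = 0  # hide this instance entirely
    return out
```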
Initial state: The agent is spawned at least 3m away from a candidate object or receptacle. It starts in "navigation mode," with the robot's head facing forward.

Actions: The policy predicts translation and rotation waypoints, as well as a discrete stop action.

Success condition: The agent should call the discrete stop action when it reaches within 0.5m of a goal viewpoint. The agent should be facing the target: the angle between the agent's heading direction and the ray from the robot to the center of the closest candidate object should be no more than 15 degrees.

Reward: Assume that at time step t, the geodesic distance to the closest goal is given by d(t), the angle between the agent's heading direction and the ray from the agent to the closest goal is given by θ(t), and did_collide(t) indicates whether the action the agent took at time t−1 resulted in a collision at time t. The training reward is given by:

$$R_{\text{FindX}}(t) = \alpha\,[d(t-1) - d(t)] + \beta\,\mathbb{1}[d(t) \leq D_{\text{close}}]\,[\theta(t-1) - \theta(t)] - \gamma\,\mathbb{1}[\text{did\_collide}(t)]$$

with α = 1, β = 1, γ = 0.3, and D_close = 3.
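A minimal sketch of this navigation reward follows. Inputs are the geodesic distance d and the heading error θ (in radians) at consecutive steps, plus a collision flag; we read the collision term as a penalty. Names and structure are illustrative, not the exact training code.

# A minimal sketch of R_FindX with the constants given above.
ALPHA, BETA, GAMMA, D_CLOSE = 1.0, 1.0, 0.3, 3.0

def find_x_reward(d_prev, d_curr, theta_prev, theta_curr, did_collide):
    reward = ALPHA * (d_prev - d_curr)              # progress toward the goal
    if d_curr <= D_CLOSE:                           # near the goal: reward
        reward += BETA * (theta_prev - theta_curr)  # turning toward it
    if did_collide:
        reward -= GAMMA                             # collision penalty
    return reward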
F.3.2 GazeAtObject

The GazeAtObject skill starts near the object and provides some final refinement steps until the agent is close enough to call a grasp action, i.e. it is within arm's length of the object and the object is centered and visible. The agent needs to move closer to the object and then adjust its head tilt until the candidate object is close and centered. It makes predictions to move and rotate the head, as well as to center the object and make sure it is within arm's length, so that the discrete grasping policy can execute.

The GazeAtObject skill is meant to take the agent from its starting location to a location within arm's length of a candidate object. It is trained by first initialising the agent close to candidate start receptacles. The agent is then tasked with getting close to the object and adjusting its head tilt such that the candidate object is close and centered in the agent's camera view. We next provide details on the training setup.

Input observations: Ground-truth semantic segmentation of candidate objects, head depth sensor, a joint sensor giving all head and arm joint states, a sensor indicating whether the agent is holding any object, and the CLIP embedding for the target object name.

Initial state: The robot again starts in "navigation mode," with its arm retracted, the gripper facing downwards, and the head/camera facing the base; the base is within 5 degrees of facing the center object and on one of the "viewpoint" locations pre-computed during episode generation. The object is therefore assumed to be visible.

Actions: This policy predicts base translation and rotation waypoints, camera tilt, as well as a discrete "grasp" action.

Success condition: The center pixel of the camera should correspond to a valid candidate object, and the agent's base should be within 0.8m of the object.

Reward: We train the gaze policy mainly with a dense reward based on distance to the goal. Specifically, assuming the distance of the end-effector to the closest candidate goal at time t is d(t) (in metres), the agent receives a reward proportional to d(t−1)−d(t). Further, when the agent is within 0.8m, we provide an additional reward incentivizing the agent to look towards the object. Let θ(t) denote the angle (in radians) between the ray from the agent's camera to the object and the camera's normal. Then the reward is given by:

$$R_{\text{Gaze}}(t) = \alpha\,[d(t-1) - d(t)] + \beta\,\mathbb{1}[d(t) \leq \gamma]\,\cos(\theta(t))$$

with α = 2, β = 1, and γ = 0.8 in our case. The agent receives an additional positive reward of 2 once the episode succeeds, and receives a negative reward of −0.5 for centering its camera on a wrong object.
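A minimal sketch of this gaze reward, including the terminal bonus and wrong-object penalty described above; names and structure are illustrative, not the exact training code.

# A minimal sketch of R_Gaze with the constants given above.
import math

ALPHA, BETA, GAMMA = 2.0, 1.0, 0.8

def gaze_reward(d_prev, d_curr, theta_curr, succeeded=False, wrong_object=False):
    reward = ALPHA * (d_prev - d_curr)          # progress toward the object
    if d_curr <= GAMMA:
        reward += BETA * math.cos(theta_curr)   # 1.0 when the object is centred
    if succeeded:
        reward += 2.0                           # terminal success bonus
    if wrong_object:
        reward -= 0.5                           # centred on the wrong object
    return reward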
F.3.3 PlaceObject

Finally, the robot must move its arm in order to place the object on a free spot in the world. In this case, it starts at a viewpoint near a goal_receptacle. It must move up to the receptacle and open its gripper in order to place the object on this surface.

Input observations: Ground-truth segmentation of goal receptacles, head depth sensor, joint sensor, a sensor indicating whether the agent is holding any object, and the CLIP [14] embedding for the name of the object being held.

Initial configuration: Arm retracted, with the gripper down and holding an object, and the head facing the base. The agent is spawned on a viewpoint with its base facing the object with an error of at most 15 degrees.

Actions: Base translation and rotation waypoints, all arm joints (arm extension, arm lift, gripper yaw, pitch, and roll), and a manipulation-mode action that can be invoked only once in an episode to turn the agent's head towards the arm and rotate the base left by 90 degrees. The agent is not allowed to move its base while in manipulation mode.

Success condition: The episode succeeds if the agent releases the object and the object stays on the receptacle for 50 timesteps.

Reward: The agent receives a positive sparse reward of 5 when it releases the object and the object comes into contact with a target receptacle. Additionally, we provide a positive reward of 1 for each step the object stays in contact with the target receptacle. The agent receives a negative reward of −1 if it releases the object but the object does not come into contact with the receptacle.

F.4 ConceptFusion

In the main paper, we introduced two key approaches, based on end-to-end reinforcement learning and a heuristic baseline. Both methods depend on the detection results generated by a readily available open-vocabulary object detector [27]. Notably, these 2D detection models do not leverage information from prior time steps to inform their detection decisions.

In order to address these limitations, we explored the application of ConceptFusion [15], an open-set scene representation technique. ConceptFusion harnesses foundation models like CLIP [14], DINO [109], and others to construct 3D maps from multiple images. For our experimentation, we employed the open-source implementation of ConceptFusion, which utilizes the Segment Anything Model (SAM) [110] for object segmentation in RGB images and CLIP for feature extraction from each segmentation mask. It is important to note that our experiments were conducted in a simulated environment, obviating the need for GradSLAM [111], as we had access to ground-truth depth maps and pose information to support our map construction.

During our initial experimentation, we observed that ConceptFusion demanded significant computational resources and memory, with processing times reaching up to 5 seconds per frame for map construction. Notably, the authors of ConceptFusion have recently published a new paper titled "ConceptGraphs: Open-Vocabulary 3D Scene Graphs for Perception and Planning" [112], which addresses some of the computational challenges we encountered. However, we leave the exploration of ConceptGraphs as a potential avenue for future research.

G Additional Analysis

Here, we provide some additional analysis of the different skills we trained to complete the Open-Vocabulary Mobile Manipulation task.

Figure 18: We show multiple executions of the Open-Vocabulary Mobile Manipulation task in a variety of simulated environments. (Panels show episode start, find object, find receptacle, and place object for commands such as "Pick a box from a stand and place it on a chair", "Pick a multiport hub from a stool and place it on a table", and "Pick a toy from a table and place it on a stool".)

Nav.       Manip.     Perception     FindObj   Gaze    FindRec   Place   Total
Heuristic  Heuristic  Ground Truth   291.8     -       65.5      8.4     360.5
Heuristic  RL         Ground Truth   293.5     19.4    64.3      84.4    438.7
RL         Heuristic  Ground Truth   295.1     -       105.0     7.0     401.6
RL         RL         Ground Truth   302.4     25.7    112.8     45.9    455.2
Heuristic  Heuristic  DETIC [27]     335.0     -       29.5      6.7     361.8
Heuristic  RL         DETIC [27]     330.1     152.0   27.5      68.2    556.5
RL         Heuristic  DETIC [27]     509.5     -       153.3     7.1     610.4
RL         RL         DETIC [27]     539.1     101.3   124.4     33.7    634.7

Table 5: The number of steps that the agent takes performing each of the skills for the different baselines. Note that here we only consider the cases where the skill terminates. The last column gives the total number of steps the agent takes on average when executing the four skills.

G.1 Number of steps taken in each stage by different baselines

Table 5 shows the number of steps taken by each skill in our baseline. With DETIC perception, we observed that the RL skills explored less efficiently than our simple heuristic-based planner; this translates to far fewer steps taken in successful episodes, although because RL exploration essentially "gives up" if an object isn't nearby, it can take many steps in many situations. In the real world, we saw similar behavior: sometimes, the RL policies would not explore enough to find a goal at all.

Nav.       Manip.     Perception     FindObj   Gaze   Pick   FindRec   Place   Place terminates
Heuristic  Heuristic  Ground Truth   100.0     -      65.1   65.1      62.1    62.1
Heuristic  RL         Ground Truth   100.0     65.6   64.3   64.3      61.3    52.2
RL         Heuristic  Ground Truth   100.0     -      76.3   76.2      66.8    66.8
RL         RL         Ground Truth   100.0     77.0   74.5   74.5      65.1    60.6
Heuristic  Heuristic  DETIC [27]     100.0     -      34.7   34.7      31.1    31.1
Heuristic  RL         DETIC [27]     100.0     33.9   27.2   27.2      24.4    17.6
RL         Heuristic  DETIC [27]     100.0     -      32.9   32.7      24.2    24.2
RL         RL         DETIC [27]     100.0     34.7   24.9   24.8      18.1    15.3

Table 6: We report the percentage of times each skill gets invoked for each of the different baselines. The last column gives the percentage of times the agent finishes executing all skills.
Nav.       Manip.     Perception     FindObj Success        PickObj Success        FindRec Success        Overall Success
                                     SC,UI  UC,UI  All      SC,UI  UC,UI  All      SC,UI  UC,UI  All      SC,UI  UC,UI  All
Heuristic  Heuristic  Ground Truth   50.9   53.2   54.1     46.4   47.2   48.5     27.5   30.0   31.5     4.1    5.2    5.1
Heuristic  RL         Ground Truth   54.9   58.2   56.5     48.6   55.1   51.5     39.0   37.3   42.3     14.5   12.0   13.2
RL         Heuristic  Ground Truth   67.1   64.1   65.4     55.0   51.0   54.8     44.8   44.8   43.7     6.2    7.8    7.3
RL         RL         Ground Truth   68.4   65.8   66.6     63.7   57.6   61.1     54.7   49.0   50.9     15.7   14.4   14.8
Heuristic  Heuristic  DETIC [27]     22.2   21.1   28.7     12.5   10.5   15.2     3.2    3.3    5.3      0.9    0.7    0.4
Heuristic  RL         DETIC [27]     22.4   22.2   29.4     11.9   11.8   13.2     5.1    3.5    5.8      0.3    1.4    0.5
RL         Heuristic  DETIC [27]     18.7   23.0   21.9     9.9    11.8   11.5     5.8    5.3    6.0      0.3    0.0    0.6
RL         RL         DETIC [27]     21.5   20.7   21.7     10.9   11.0   10.2     6.9    6.2    6.2      1.0    0.7    0.4

Table 7: Performance breakdown by seen and unseen categories, compared to overall performance. In our baselines, we relied heavily on a pretrained object detector for generalization, so we do not see a dramatic difference in performance between seen and unseen objects.

Next, we observe that the Gaze and Place policies, which were trained with ground-truth perception, take significantly longer to terminate with DETIC perception.

Finally, in Table 6, we look at the percentage of times the agent attempts each of the different skills. We find that the RL-trained FindObj skill terminates more often than the heuristic FindObj skill, and that episodes terminate less frequently with DETIC perception when compared to ground-truth perception.

G.2 Performance on Seen vs. Unseen Object Categories

Table 7 shows results broken down by seen vs. unseen instances, and seen vs. unseen categories. Specifically, we look at these two pools of objects from the validation set:

• SC,UI: Seen category, unseen instance. An object of a class that appeared in the training data (e.g., "cup"), but not a specific "cup" that appeared in the training data.
• UC,UI: Unseen instance of an unseen category; an object of a type that did not appear in the training data at all.

In general, because we are relying on DETIC and not training our own semantic perception for this baseline, we do not see a large difference between the two categories of object.

G.2.1 Example DETIC predictions

In Table 5, we observe that the Gaze policy takes significantly longer to terminate with DETIC [27] perception. The Gaze policy (see Fig. 19) tries to center the agent on the object of interest by allowing the agent to move its base and camera tilt. For this, it relies on DETIC's ability to detect novel objects. We therefore visualize DETIC segmentations of the agent's egocentric observations by placing the agent at the points where the Gaze skill is expected to start: the object's viewpoints. We observe that while DETIC succeeds in a few cases, it fails to consistently detect the objects in the egocentric frame.

Figure 19: RL Gaze skill in action. The agent is allowed to move its base and change its camera tilt to get closer to the object and bring it to the center of its camera frame.

Name                    Mobile   Human Sized   Safe   Commercially Available   Manipulation DOF   Approximate Cost
Boston Dynamics Spot    ✔        ✖             ✖      ✔                        7                  $200,000
Franka Emika Panda      ✖        ✖             ✓      ✔                        7                  $30,000
Locobot                 ✔        ✖             ✔      ✖                        5                  $5,000
Fetch                   ✔        ✔             ✓      ✖                        7                  $100,000
Hello Robot Stretch     ✔        ✔             ✔      ✔                        4                  $19,000
Stretch with DexWrist   ✔        ✔             ✔      ✔                        6                  $25,000

Table 8: Notes on platform selection. We chose the Stretch with DexWrist as a good compromise between manipulation, navigation, and cost, while being human-safe and approximately human-sized.
H Hardware Setup

Here, we discuss choices related to the real-world hardware setup in extra detail, along with information about the tools that we use for visualization on the robot. This appendix contains notes on how to set up the robotics stack in the real world, useful tools that we contribute, and some best practices for development. Setting up mobile robots is hard, and one of the main goals of the HomeRobot project is to make it both easy and reasonably affordable for researchers.

H.1 Hardware Choice

We describe some options for commercially available robotics hardware in Tab. 8. While the Franka Emika Panda is not a mobile robot, we include it here because it is a very commonly used platform in both industrial research labs and at universities, making its price a fair comparison point for what is reasonable.

Figure 20: Visualization of ground-truth and DETIC [27] segmentation masks for the agent's egocentric RGB observations. Note that we use a DETIC vocabulary consisting of the fixed list of receptacle categories and the target object name. We observed that DETIC often fails to accurately detect all the objects present in a given frame.

H.2 Robot Setup

One challenge with low-cost mobile robots is how to run GPU- and compute-intensive models to evaluate modern AI methods on them. The Stretch, like many similar robots, does not have an onboard GPU and will always have more limited compute than is available on a comparable workstation.

As described in Sec. 4, we address this with a simple network configuration, shown in Fig. 21. There are three components:

1. The desktop running code (in our case, the eval_episode.py script from HomeRobot), which connects to a remote mobile manipulator.
2. The dedicated router: an off-the-shelf consumer router, such as a Netgear Nighthawk. This should ideally be dedicated to your robot and desktop setup to ensure good performance.
3. The mobile robot itself: a Stretch with DexWrist, as described above.

After the robot is configured, you just need to run one script, a ROS launch file, as described in the HomeRobot startup instructions; this can be done over SSH. Then, with a properly configured robot and router, you can visualize information on the desktop side, showing the robot's position, the map from SLAM, and the cameras. On the robot side, the only necessary command is:

roslaunch home_robot_hw startup_stretch_hector_slam.launch

Figure 21: Network setup diagram for HomeRobot. We can run visualizations on a GPU-enabled workstation while running only the necessary code on the robot for low-level control and SLAM.

Checking network performance. We describe the available visualization tools briefly in the next section, but to check that the setup is working properly, you can start rviz and wave your hand in front of the robot; you should see minimal latency when waving a hand in front of the camera.

Timing between the robot and the remote workstation. We use ROS [113] as our communications layer and to implement low-level control on the robot. This also provides network communication. However, due to potential latency between the robot and the desktop, we also need to make sure that observations are up to date.

To keep this process simple, we set up the robot to block after executing most navigation motions until there is an up-to-date image observation from the robot side. This means that timing between the robot and the workstation is extremely important: if timing is not synchronized, we might have SLAM poses and depth measurements that do not match, which leads to worse performance.

We solved this by having a clock on the robot side publish its time over ROS, and configuring all systems to use this ROS master clock instead of system time. This prevents the user from having to worry about Linux time synchronization protocols like NTP when setting up the robot for the first time.
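A minimal sketch of such a robot-side master clock follows, assuming the standard ROS mechanism of publishing rosgraph_msgs/Clock on the /clock topic so that every node started with the /use_sim_time parameter set to true shares one time source. This is illustrative, not the exact HomeRobot implementation.

# A minimal robot-side clock publisher sketch (ROS 1 / rospy).
import time

import rospy
from rosgraph_msgs.msg import Clock

if __name__ == "__main__":
    rospy.init_node("robot_master_clock")
    pub = rospy.Publisher("/clock", Clock, queue_size=10)
    while not rospy.is_shutdown():
        # Publish the robot's own wall time as the shared clock.
        pub.publish(Clock(clock=rospy.Time.from_sec(time.time())))
        # Plain sleep rather than rospy.Rate: Rate would itself follow /clock.
        time.sleep(0.01)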
H.3 Visualizing The Robot

Figure 22: Exploring a real-world apartment during testing. The robot uses Detic [27] to perceive the world and update a 2D map (center) which captures where it has seen relevant classes and which obstacles exist; detections are not always reliable, especially given a large and changing vocabulary of objects that we care about. In the HomeRobot stack, we provide a variety of tools for visualizing and implementing policies, including integration with RVIZ (right).

We use RVIZ, a part of ROS, to visualize results and progress. Fig. 22 shows three different outputs from our system: on the far left, an image from the test environment being processed by Detic; in the center, a top-down map generated by the navigation planner described in Sec. E.2; and on the right, an image from RVIZ with the point cloud from the robot's head camera registered against the 2D lidar map created by Hector SLAM.

One advantage of the HomeRobot stack is that it is designed to work with existing debugging tools, especially ROS [113]. ROS is a widely used framework for robotics software development that comes with many online resources, official support from Hello Robot, and a rich, thriving open-source community with wide industry backing.

H.4 Using The Stretch: Navigation vs. Position Mode

We leave API documentation to the HomeRobot code base, but want to note one other complexity when using the robot. Stretch's manipulator arm points to the right of its direction of motion, which means that the robot cannot both look where it is going and manipulate objects at the same time. This design allows the robot to be lower cost and to fit the human profile; more information on the robot's design is available in other work [22].

However, this is important to consider when trying to control Stretch to perform various tasks. We use Stretch in one of two modes:

• Navigation mode: the robot's camera faces forward; we use reactive low-level control for navigation; the robot can rotate in place, roll backward, and will reactively track goals sent from the desktop.
• Manipulation mode: the robot's camera faces its arm; we do not use reactive low-level control for navigation and do not rotate the base. Instead, we treat the robot's base as an extra, lateral degree of freedom for manipulation.

This is especially relevant when grasping or placing; it means that, for our heuristic policies, the robot transitions into manipulation mode after moving close enough to the goal, and may then track slightly to the left or the right, in order to act as if it had a full 6-DoF manipulator.

All in all, these changes make the low-cost robot more capable and easier to use for a variety of tasks [12, 24, 25].
Few-Shot In-Context Imitation Learning via Implicit Graph Alignment

Vitalis Vosylius and Edward Johns
The Robot Learning Lab
Imperial College London, United Kingdom
vitalis.vosylius19@imperial.ac.uk

Abstract: Consider the following problem: given a few demonstrations of a task across a few different objects, how can a robot learn to perform that same task on new, previously unseen objects? This is challenging because the large variety of objects within a class makes it difficult to infer the task-relevant relationship between the new objects and the objects in the demonstrations. We address this by formulating imitation learning as a conditional alignment problem between graph representations of objects. Consequently, we show that this conditioning allows for in-context learning, where a robot can perform a task on a set of new objects immediately after the demonstrations, without any prior knowledge about the object class or any further training. In our experiments, we explore and validate our design choices, and we show that our method is highly effective for few-shot learning of several real-world, everyday tasks, whilst outperforming baselines. Videos are available on our project webpage at https://www.robot-learning.uk/implicit-graph-alignment.

Keywords: Few-Shot Imitation Learning, Graph Neural Networks

Figure 1: Given a few (e.g. three) demonstrations of a new task across multiple object instances (left), a robot can perform this same task with previously unseen objects (right), based on a graph alignment energy model trained on a diverse dataset of task-agnostic object alignments (middle).

1 Introduction

For a robot to perform a manipulation task involving two objects, it must infer the task-relevant parts of these objects and align them throughout the trajectory in a task-specific way. For example, the spout of a teapot should be aligned with the opening of a mug when pouring tea. However, objects within a class typically have slightly different shapes, so inferring the parts which should be aligned for a given task is challenging due to the diversity of object shapes. This becomes even more challenging in an imitation learning setting, where this task-specific alignment should be inferred only from demonstrations, and ideally without requiring any prior knowledge about the objects or their class.

We approach this challenge as a few-shot in-context imitation learning problem, and learn a conditional distribution of alignments that can produce task-relevant alignments of novel objects from just a few context examples, i.e. demonstrations. We propose to learn this distribution from point cloud observations using a novel heterogeneous graph-based energy model, whose graph structure with transformer-based attention is capable of representing intricate relations between different parts of the objects, and whose implicit energy-based learning is effective at capturing complex and multi-modal distributions. These design choices allow us to perform few-shot imitation learning on arbitrary objects and over an arbitrary number of demonstrations, using just a single trained model. But where can we obtain the large-scale dataset of task-relevant alignments between objects needed to train such a model?
Our insight here is that simply by deforming pairs of arbitrary ShapeNet [1] objects in simulation, using an object augmentation function trained on a single object category (chairs) [2], we can create sensible new instances of objects of an arbitrary category and align them consistently, providing us with the data required to train the aforementioned model. By training on objects from diverse categories and multiple alignments between them, we are able to predict task-relevant alignments for novel object pairs, all from the same trained model.

We evaluated our method in both simulation and real-world experiments, analysing its ability to align novel objects in a few-shot imitation learning setting, where each demonstration consists of one or more waypoints in a trajectory. Our real-world results show that for everyday tasks, such as pouring from a teapot into a mug, sweeping with a brush, and hanging a cap onto a peg, we achieve an 80% task success rate under the following conditions: (1) only four demonstrations are provided, (2) the test objects are novel and unseen during the demonstrations, and (3) we assume no prior knowledge about the objects or class of objects. These results, when compared to the baselines, validate our method as an effective few-shot in-context imitation learning framework, capable of efficiently learning everyday robotic tasks from just a few demonstrations.

2 Related Work

The field of imitation learning has seen various approaches that aim to solve robotic tasks based on a single demonstration [3, 4, 5, 6]. However, these methods are typically limited to scenarios where the objects used in the demonstrations and during deployment are either the same or sufficiently similar. This lack of flexibility prevents them from being applied to category-level manipulation tasks. Other methods tackle this limitation by leveraging category-level keypoints, which can be annotated [7, 8], predicted [9], or obtained through self-supervised correspondence methods [10, 11, 12]. However, selecting such keypoints in a task-agnostic manner is challenging, thereby restricting the applicability of these approaches. Transporter Nets [13] utilise learned feature maps and template matching to identify pick-and-place poses, but are restricted to pick-and-place tasks and struggle to predict arbitrary 6-DoF poses. We also train a function to predict the quality of an object alignment, but we do so in an unrestricted SE(3) space, enabling us to handle much more complex and arbitrary tasks defined by the demonstrations. Graph neural networks [14, 15, 16, 17] and implicit learning of functions [18, 19, 20, 21] have recently gained significant attention for robot manipulation, and we study a novel formulation of these ideas for few-shot imitation learning.

The most closely related work to ours comes from Relational Neural Descriptor Fields (R-NDF) [22, 23] and TAX-Pose [24], both of which can generalise to novel instances within the same object category. [22] achieves object alignment by matching descriptors extracted from a pre-trained occupancy network. However, it requires per-object-category training and the annotation of task-relevant object parts. Furthermore, [22] averages descriptors from multiple demonstrations, which can pose challenges when dealing with significantly different objects or multimodal demonstrations. [24] employs cross-object attention for estimating dense cross-object correspondences.
However, it requires separate network training for each new alignment between the objects, limiting its applicability. In contrast to the previously described methods, our approach learns a general conditional distribution of alignments using an energy-based model, which can be applied to objects of unseen categories and arbitrary alignments, without the need for additional training beyond the initial demonstrations.

3 Method

3.1 Problem Formulation & Overview

Problem Setting. We consider an imitation learning setting involving two objects, namely a grasped object (or a robot gripper) and a target object, between which the interaction should occur. We do not assume any prior knowledge about these objects, and denote their observations as $O_A$ and $O_B$. The alignment, or relative spatial configuration between these objects, is denoted by a general representation $\mathcal{A}(O_A, O_B)$, which should be independent of the global configurations of the objects and focus only on their relative alignment. As we will explain later, we use segmented point clouds as observations $O_A$ and $O_B$, and devise a heterogeneous graph to represent $\mathcal{A}(O_A, O_B)$.

Objective. Given an arbitrary task involving any two objects $O_A$ and $O_B$, our goal is to infer a trajectory of alignments between them that would complete the task. We denote this trajectory as a sequence of alignments $\mathcal{A}^{1:T}_{\text{test}}(O_A, O_B)$, where $T$ is the number of time steps in a trajectory. In general, to accomplish this, we would require knowledge of the task-conditional distribution $p(\mathcal{A}^{1:T}_{\text{test}} \mid \text{task})$, but defining this analytically is infeasible for all possible task and object combinations and from partial point cloud observations. Instead, we propose to approximate it using a few samples from this distribution, $\{\mathcal{A}^{1:T}_{\text{demo}}(O^i_A, O^i_B)\}_{i=1}^{N}$, by learning a conditional distribution $p_\theta(\mathcal{A}^{1:T}_{\text{test}} \mid \{\mathcal{A}^{1:T}_{\text{demo}}\}_{i=1}^{N})$, parameterised by $\theta$. Here, $N$ denotes the number of demonstrations.

Deployment. When presented with novel objects $O_A$ and $O_B$, our objective is to perform the task by sampling from the learned conditional distribution, eliminating the need for task-specific training or fine-tuning. We sample from $p_\theta(\mathcal{A}_{\text{test}}(\mathcal{T}^t O_A, O_B) \mid \{\mathcal{A}^t_{\text{demo}}\}_{i=1}^{N})$ by finding a sequence of transformations $\mathcal{T}^{1:T}$ which, when applied to object A, yield alignments within the distribution. Assuming rigid grasps, we can directly apply this sequence of transformations to the robot's end-effector, causing it to move the grasped object along the task-relevant trajectory (see the sketch at the end of this subsection). In this work, we treat individual time steps in a trajectory independently; hence, going forward, we will drop the superscript $t$ and refer to a set of $N$ demonstrations as $\mathcal{A}_{\text{demo}}$ for conciseness.

Figure 2: Illustration motivating the need for multiple demonstrations across different sets of objects, in order to generalise to unseen objects when inferring task-relevant alignments.

Multiple Demonstrations Requirement. When the objects at test time are the same as during the demonstrations, sampling from the learned distribution yields alignments identical to the demonstrations (see Figure 2, left). However, if they differ from those in the demonstrations, the task-relevant parts should be aligned consistently. Inferring the relevant object parts for an arbitrary task is infeasible from a single demonstration when the objects used are different (see Figure 2, middle). Therefore, we use multiple demonstrations with different objects from the same categories, and make the assumption that, across the demonstrations, the relative alignment of the task-relevant parts remains consistent and can be inferred (see Figure 2, right). With an increasing number of demonstrations and a broader range of object shapes, the conditional distribution should converge towards a Dirac delta function ($\delta$). In cases where the demonstrations are multimodal or inconsistent, the distribution becomes multimodal, representing different possible ways of completing the task.
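A minimal sketch of the rigid-grasp argument above, assuming each step's transform is a 4x4 homogeneous matrix expressed in the world frame relative to the initially observed object pose (an illustrative convention, not stated explicitly in the text): because object and gripper are rigidly attached, the same transform that moves the object also moves the end-effector.

# A minimal sketch of mapping object-alignment transforms to end-effector targets.
import numpy as np

def ee_targets(T_steps, T_ee_obs):
    """T_steps: list of (4, 4) world-frame transforms for the grasped object.
    T_ee_obs: (4, 4) end-effector pose at observation time.
    Returns one end-effector target pose per trajectory step."""
    return [T_t @ T_ee_obs for T_t in T_steps]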
3.2 Generalisation Through Shape Augmentation

To learn a conditional distribution $p_\theta(\mathcal{A}_{\text{test}} \mid \mathcal{A}_{\text{demo}})$ which generalises to arbitrary alignments between different instances of objects from the same category, we need a diverse dataset containing multiple objects with different geometries, aligned in a semantically consistent way. Obtaining such a dataset would require a huge annotation effort, or solving the underlying semantic object alignment problem in the first place. Instead, we propose to utilise correspondence-preserving shape augmentation and treat each deformation of an object as a new instance. The ability to generate new instances of a specific category of objects, together with privileged information about the ground-truth correspondences, allows us to create an arbitrary number of alignments between the objects with specific parts consistently aligned (see Figure 3). Although these deformations might not represent realistic objects one might encounter in the real world, we hope that we can encapsulate the true underlying distribution of objects within the convex hull of the produced deformations.

Figure 3: Two arbitrary objects and their deformed variations, aligned based on specific parts. Colour maps indicate dense correspondences; green and blue spheres highlight the parts used for alignment.

In practice, we create a diverse dataset of alignments as follows (a sketch of the alignment step appears below). 1) Random Object Selection: We randomly select two objects from the ShapeNet dataset, representing $O_A$ and $O_B$, and apply random SE(3) transformations to them, creating an arbitrary alignment between them. 2) Deformation Generation: To create variations of the objects, we employ a correspondence-preserving implicit generative model called DIF-Net [2]. By deforming the objects, we obtain different instances of the same category, each with its own unique shape characteristics. 3) Alignment Based on Correspondences: For each original object, we randomly sample a point on its surface. Using the known correspondences between the original and deformed objects, we align the deformed objects based on the local neighbourhood of points around the sampled point. This alignment ensures that certain parts of the objects consistently maintain the same relative pose. 4) Rendering Point Clouds: Finally, we render the point clouds of the aligned objects using simulated depth cameras, resulting in a sample of consistently aligned object pairs. The resulting dataset $\mathcal{D} = \{\{P^i_A, P^i_B\}_{i=1}^{N}\}_{j=1}^{M}$ contains $M$ samples, where each sample consists of $N$ aligned object pairs. We utilise these samples by using $N{-}1$ alignments as context and the left-out alignment as the target alignment for those objects.
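Below is a minimal sketch of step 3, assuming the deformation model returns points Q in one-to-one correspondence with points P on the original object; the rigid transform for a local neighbourhood around a sampled anchor point is then recovered with the standard Kabsch/Procrustes procedure. Function and parameter names are illustrative.

# A minimal sketch of correspondence-based local rigid alignment.
import numpy as np

def local_rigid_alignment(P, Q, anchor_idx, k=64):
    """P, Q: (N, 3) corresponding points; returns a 4x4 transform mapping
    Q's neighbourhood around the anchor onto P's."""
    d = np.linalg.norm(P - P[anchor_idx], axis=1)
    nbr = np.argsort(d)[:k]                  # local neighbourhood on P
    p, q = P[nbr], Q[nbr]
    p_c, q_c = p - p.mean(0), q - q.mean(0)  # centre both neighbourhoods
    U, _, Vt = np.linalg.svd(q_c.T @ p_c)    # Kabsch: H = sum q_i p_i^T
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = p.mean(0) - R @ q.mean(0)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T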
3.3 Alignment Representation

Figure 4: (Left) High-level structure of the model used to learn the alignment distribution: local geometry encoding, graph creation, and a graph transformer that outputs an energy for the test alignment given the demonstrations. Edge types: "Same-Object" and "Cross-Object" edges carry sinusoidal positional encodings, while "Directional-Conditional" edges carry learnable features. Intuition: find a transformation that aligns the grasped and target objects consistently with the demonstrations by minimising the energy. (Right) The learnt energy landscape in 2D, and the graph alignment at different optimisation steps.

As hinted previously, we use point clouds $P^i_A$ and $P^i_B$ as the initial representation of the objects. To learn the proposed distribution $p_\theta(\mathcal{A}_{\text{test}} \mid \mathcal{A}_{\text{demo}})$, we need to reason about the geometries of different parts of the objects and their relative alignments. Directly using dense point clouds would make this problem extremely computationally expensive. Therefore, we design a representation for $\mathcal{A}(O_A, O_B)$ that encodes different parts of the objects and the relative poses between them.

First, we sample $K$ points from each point cloud using the Furthest Point Sampling algorithm and encode the local geometry around them using a Set Abstraction (SA) layer [25], obtaining a set of feature vectors $F$ and their positions $p$ describing the whole object, $\{F_i, p_i\}_{i=1}^{K}$ (see Figure 4). We utilise Vector Neurons [26] which, together with the local nature of the SA layer, provide us with SE(3)-equivariant embeddings $F$. To ensure that these embeddings describe the local geometry around the sampled point, we pre-train this network as an implicit occupancy network [27], reconstructing the encoded object. Note that each local embedding is used to reconstruct only a part of the object, reducing the complexity of the problem and allowing it to generalise more easily. We provide a full description of the occupancy network in Appendix B. To encode the relative alignment between the two objects based on the sets of feature and position pairs, we then construct a heterogeneous graph representation. This graph captures the relationships between different parts of the objects and fully describes their geometry. Each edge $e_{ij}$ is assigned a feature vector representing the relative position between the nodes. To model high-frequency changes in the relative position between the nodes, we utilise positional encoding [28] to express the edge features, as sketched below.
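A minimal sketch of such an edge feature follows, assuming the NeRF-style [28] sinusoidal positional encoding is applied to the relative position between two node centroids; the number of frequency bands is an illustrative choice.

# A minimal sketch of positional-encoded edge features.
import numpy as np

def edge_features(p_src, p_dst, n_freqs=8):
    rel = p_dst - p_src                           # (3,) relative position
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi   # octave-spaced frequencies
    angles = rel[None, :] * freqs[:, None]        # (n_freqs, 3)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=0)
    return np.concatenate([rel, enc.ravel()])     # (3 + 6 * n_freqs,)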
3.4 Learning the Conditional Alignment Distribution

We use our devised alignment representation $\mathcal{A}$ and learn the proposed conditional distribution $p_\theta(\mathcal{A}_{\text{test}} \mid \mathcal{A}_{\text{demo}})$ by employing a novel heterogeneous graph-based transformer energy model, denoted $E_\theta(\mathcal{A}_{\text{test}}, \mathcal{A}_{\text{demo}})$. Intuitively, the energy-based model should compare $\mathcal{A}_{\text{test}}$ with $\mathcal{A}_{\text{demo}}$ and predict whether the alignments are consistent (low energy) or not (high energy). Then, by minimising the predicted energy with respect to $\mathcal{A}_{\text{test}}$, we can find a desired alignment of the test objects.

Firstly, we combine the graphs describing $\mathcal{A}_{\text{test}}$ and $\mathcal{A}_{\text{demo}}$ by connecting nodes from the demonstrated and test alignments with directional edges equipped with learnable embeddings (see Figure 4). These connections effectively propagate information about the relative alignment of the objects during the demonstrations to the test observation. This enables the graph to capture the alignment patterns observed in the context, and makes it suitable for predicting whether the test observation is consistent with the demonstrations. To propagate information through the graph in a structured manner, we utilise graph transformer convolutions [29], which, for a heterogeneous graph, can be viewed as a collection of cross-attention modules.

We then learn $p_\theta(\mathcal{A}_{\text{test}} \mid \mathcal{A}_{\text{demo}})$ by employing an InfoNCE-style [30] loss function (sketched below), minimising the negative log-likelihood of:

$$\hat{p}_\theta\left(\mathcal{A}_{\text{test}} \mid \mathcal{A}_{\text{demo}}, \{\hat{\mathcal{A}}^{j}_{\text{test}}\}_{j=1}^{N_{neg}}\right) = \frac{\exp\left(-E_\theta(\mathcal{A}_{\text{test}}, \mathcal{A}_{\text{demo}})\right)}{\exp\left(-E_\theta(\mathcal{A}_{\text{test}}, \mathcal{A}_{\text{demo}})\right) + \sum_{j=1}^{N_{neg}} \exp\left(-E_\theta(\hat{\mathcal{A}}^{j}_{\text{test}}, \mathcal{A}_{\text{demo}})\right)} \quad (1)$$

Here, $N_{neg}$ is the number of counter-examples, used to approximate the otherwise intractable normalisation constant. We create these counter-examples by applying SE(3) transformations to the nodes in the graph describing $O_A$, transforming both their positions $p_A$ and feature vectors $F_A$. Note that we are able to do this due to the SE(3)-equivariance of our point cloud encodings. We obtain these transformations using a combination of random sampling and Langevin dynamics [31, 32] sampling.
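Eq. (1) is equivalent to a cross-entropy over one positive and N_neg negative energies. Below is a minimal PyTorch sketch of this objective, assuming the energy model has already produced batched energies for the ground-truth alignment and for the perturbed counter-examples; names are illustrative.

# A minimal sketch of the InfoNCE-style energy objective in Eq. (1).
import torch
import torch.nn.functional as F

def info_nce_loss(energy_pos, energy_neg):
    """energy_pos: (B,) energies of ground-truth alignments;
    energy_neg: (B, N_neg) energies of counter-examples."""
    # Low energy should mean high likelihood, so logits are negated energies.
    logits = -torch.cat([energy_pos[:, None], energy_neg], dim=1)
    labels = torch.zeros(energy_pos.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)  # = -log of Eq. (1), averaged over the batch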
At inference, we sample from the learnt distribution by minimising the energy using gradient-based optimisation [33, 31] to find an SE(3) transformation $\mathcal{T}$ which, when applied to $O_A$, results in an alignment between the objects that is consistent with the demonstrations (see Figure 4, right):

$$\mathcal{T} = \operatorname*{arg\,min}_{\mathcal{T} \in SE(3)} E_\theta\left(\mathcal{A}_{\text{test}}(\mathcal{T} O_A, O_B), \mathcal{A}_{\text{demo}}\right) \quad (2)$$

Equation 2 can be understood as follows: adjust the pose of object A iteratively until the energy is minimised, indicating that the alignment between objects A and B has become consistent with the demonstrated alignments. In practice, we use separate networks for finding orientation and translation alignments, optimising them sequentially. In Appendix C, we provide full details of the training of the proposed energy model and of the optimisation on the SE(3) manifold. Using a graph neural network with transformer-based attention allows us to capture complex relational information between different parts of two objects, and to dynamically handle an increasing number of demonstrations during inference. The energy-based approach allows us to learn complex and multi-modal distributions. It can capture the inherent variability and uncertainty in the alignment process, allowing for different ways of aligning objects. This flexibility is crucial when multiple valid alignments exist, e.g. when dealing with symmetrical objects or multi-modal demonstrations.

4 Experiments

To thoroughly assess the effectiveness of our method in learning a general alignment distribution, and its practicality in real-world robotic tasks, we conducted experiments in two distinct settings: (1) a simulation setting, where we have access to ground-truth information, and (2) a real-world setting, where we evaluate the performance of the complete robotic pipeline on everyday robotic tasks. A single model trained on 500K samples (5 consistent alignments each) is used for all the experiments, and is evaluated by providing new demonstrations as context. Videos are available on our project webpage at https://www.robot-learning.uk/implicit-graph-alignment.

4.1 Exploring Generalisation Modes

Our first set of simulated experiments aims to answer the following question: Is our model capable of generalising and identifying task-relevant alignments when presented with both novel objects from familiar categories and entirely novel object categories?

Experimental Procedure. We constructed five distinct evaluation datasets, each introducing an additional mode of generalisation requirement. The generation procedure follows the methodology outlined in Section 3.2, and only includes objects from non-symmetric categories in order to evaluate the orientation accuracy of the found alignments. The five evaluation datasets encompass the following samples: 1) Seen Alignments: This dataset comprises objects and their alignments that were seen during training. 2) Unseen Alignments: In this dataset, objects that were seen during training are aligned in novel ways, introducing unfamiliar alignment configurations. 3) Unseen Object Instances: This dataset contains unseen instances of object categories that were encountered during training. 4) Unseen Object Categories: Here, objects are from categories that were unseen during training. 5) Multi-Modal Demonstrations: This dataset focuses on multimodal demonstrations of objects from unseen categories. This means that the demonstrations exhibit multiple modes, indicating different possible object alignments. "Unseen" here refers to objects, alignments, or categories that were not used during training. Each described dataset contains 1000 unique samples, each sample with 5 consistent object alignments, 4 of which are used as the context, and 1 as a ground-truth label.

Evaluation Metrics. Having access to the ground-truth alignment of the objects, we report the translation and rotation errors of the predicted alignment when the starting point clouds are initialised in random SE(3) poses (a sketch of these error computations follows the baselines list). In the case of multi-modal demonstrations, we report the error to the closest ground-truth alignment. To put the later presented error metrics in perspective, object sizes in the previously described datasets range from 8cm to 35cm.

Baselines. In our evaluation, we compare our method of Implicit Graph Alignment (IGA) against several baselines and its own variants, in order to demonstrate its effectiveness. 1) ICP, where the target and grasped object point clouds are aligned to the closest match amongst the available demonstrations; it serves to validate the need to learn the alignment distribution. 2) TAX-Pose-DC, a variant of TAX-Pose-GC [24], where instead of a one-hot encoding we use an average embedding of the demonstrations as the conditional variable. This is our attempt to extend a cross-object correspondence approach to be conditional on demonstrations for a fair comparison (the original TAX-Pose-GC is conditional on a one-hot encoding of a desired placement). 3) R-NDF, Relational Neural Descriptor Fields [22], which serves to validate our choice of using a heterogeneous graph representation for the alignment of objects, instead of matching local descriptors of two objects as in R-NDF. 4) Ours (DirReg), a variant of our method which, using our graph representation, directly regresses the transformation $\mathcal{T}$ instead of learning an energy function. 5) Ours (DFM), another variant of our method, which predicts the SE(3) distance to the ground-truth alignment and learns a distance field instead of an energy function. For a fair comparison, our method and all baselines were trained on the same amount of data from a diverse range of object categories.
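A minimal sketch of the two error metrics, assuming the predicted and ground-truth alignments are expressed as 4x4 homogeneous transforms; the rotation error is the geodesic angle between the two rotations.

# A minimal sketch of the translation and rotation error computation.
import numpy as np

def pose_errors(T_pred, T_gt):
    trans_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
    R_delta = T_pred[:3, :3].T @ T_gt[:3, :3]             # relative rotation
    cos_angle = np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))        # geodesic angle
    return trans_err, rot_err_deg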
               (1) Seen              (2) Unseen            (3) Unseen Object     (4) Unseen Object     (5) Multi-Modal
               Alignments            Alignments            Instances             Categories            Demonstrations
               Trans(cm) Rot(deg)    Trans(cm) Rot(deg)    Trans(cm) Rot(deg)    Trans(cm) Rot(deg)    Trans(cm) Rot(deg)
ICP            15.8±7.7  77.9±27.1   15.3±10.3 78.5±28.1   14.9±8.7  69.4±29.0   16.5±8.9  99.7±34.9   15.0±10.1 83.3±26.4
R-NDF          8.5±3.2   43.1±32.3   8.1±1.7   37.9±21.8   9.2±6.3   46.5±23.1   9.8±5.4   52.1±29.2   13.6±7.6  96.4±45.5
Tax-Pose-DC    21.8±9.9  105.6±39.2  19.6±10.4 117.2±36.8  24.3±9.8  105.4±36.7  21.3±8.9  126.3±38.1  23.9±9.4  123.3±36.7
Ours (DirReg)  4.1±2.4   27.7±6.4    4.9±3.6   27.5±9.6    5.2±3.5   31.6±11.2   6.7±5.0   31.1±10.7   13.1±6.3  76.8±26.4
Ours (DFM)     3.2±1.2   89.6±42.8   3.7±1.6   106.1±57.1  3.7±1.5   103.4±56.8  4.4±2.0   115.3±61.8  10.4±5.1  114.9±58.7
Ours (IGA)     2.1±1.3   9.7±6.4     2.3±1.5   12.1±5.6    2.3±1.4   11.7±6.1    3.0±1.6   14.7±5.3    3.9±2.3   11.2±6.7

Table 1: Translation and rotation errors (means and standard deviations) of alignments for different modes of generalisation.

Results. The results in Table 1 reveal that non-learning-based methods such as ICP, as well as our extension of a cross-object correspondence approach to be conditional, yield unsatisfactory performance. The sub-optimal performance of R-NDF can be attributed to the high diversity within object categories and instances, as well as the arbitrary alignments between them (R-NDF primarily focuses on finding alignments between objects in close proximity). The other variants of our approach show promising results in terms of translation, but struggle to accurately determine the orientation. Our proposed method, using an implicit energy-based model, excels in both translation and orientation, and across the different modes of generalisation.

4.2 Exploring Demonstration Diversity

Our second set of simulated experiments aims to answer the following question: Which is more important, the number of demonstrations or the diversity of demonstrations?

Figure 5: Translation error (cm) against the number of demonstrations (1, 2, 5, 10, and 15) for three different sets of diversities (low, medium, and high).

Experimental Procedure. We explore this question by varying both the diversity and the number of demonstrations provided as context. We create 3 different evaluation datasets of demonstrations, each with increasing diversity of objects. We vary the diversity of the objects by increasing the random scaling range, as well as the scale of the latent vector that controls the amount of deformation applied by the DIF-Net model [2]. For a visualisation of the diversity of objects in the created evaluation datasets, see Appendix D.4.

Results. Translation errors are shown in Figure 5 (orientation errors follow a similar trend and can be seen in Appendix D.4). Note that a single model is used for all the evaluations. Analysis of Figure 5 reveals that the performance of our method does not improve as the number of demonstrations increases, unless the diversity among those demonstrations also increases. However, an increase in diversity can substantially enhance the performance.
Notably, the performance of our method remains relatively unchanged when provided with either 10 or 15 demonstrations, suggesting that the remaining errors are not attributable to insufficient information in the demonstrations. Instead, they likely stem from inherent errors of the learnt model or ambiguities present in the generated datasets.

4.3 Real-World Experiments

Through our real-world experiments, we aim to answer the following question: Can a robot learn real-world everyday tasks from just a small number of human demonstrations?

Real-World Setup. Our experiments are performed using the Franka Emika robot, along with three RealSense D415 depth cameras. To obtain point clouds of the objects, the cameras first capture the point cloud of the target object with the robot not obstructing their view. Then, the robot moves its gripper to a position where the grasped object is visible to both cameras, after which the cameras capture a point cloud of the grasped object.

Tasks. We evaluate our approach on six different tasks, which can be seen in Figure 6. 1) Grasping: the goal is to grasp different pans by the handle. 2) Stacking: the goal is to stack two bowls. 3) Sweeping: the goal is to sweep marbles into a dustpan with a brush. 4) Hanging: the goal is to hang a cap onto a deformable stand. 5) Inserting: the goal is to insert a bottle into a shoe. 6) Pouring: the goal is to pour a marble into a mug. We provide success criteria in Appendix D.1.

Figure 6: Real-world tasks used to evaluate our method, and the corresponding objects used for providing demonstrations and testing.

Experimental Procedure. Trajectories for the Grasping and Stacking tasks are defined by a single waypoint at the grasping or placing locations and executed with a scripted controller, consistent with R-NDF [22] and TAX-Pose [24]. For these tasks, we provide 3 demonstrations. Trajectories for the other tasks are defined by 3 waypoints, which need to be reached sequentially. For these tasks, we provide 4 demonstrations each. Each demonstration is a unique combination of different grasped and target objects. Notably, the Hanging task involves a deformable stand and a deformable cap, both of which are randomly deformed before each demonstration and test. We evaluate each task on 5 different combinations/deformations of grasped and target test objects, repeating the evaluation 3 times by starting the objects in different configurations, totalling 15 evaluations per task per method. No objects used to provide the demonstrations are used during the evaluation.

         Grasp    Stack    Sweep    Hang     Insert   Pour     Avg.
ICP      9 / 15   14 / 15  8 / 15   0 / 15   7 / 15   2 / 15   44 ± 31%
R-NDF    7 / 15   14 / 15  4 / 15   0 / 15   4 / 15   3 / 15   36 ± 29%
IGA      14 / 15  15 / 15  8 / 15   12 / 15  11 / 15  12 / 15  80 ± 15%

Table 2: Success rates of our method and the two best-performing baselines in a real-world setting.

Results. As we can see from Table 2, the ICP and R-NDF baselines can successfully complete the Grasping and Stacking tasks but struggle with more challenging ones, e.g. Hanging or Pouring. This can be attributed to the relatively high task tolerances and the limited variation in object geometries between demonstrations and inference for these tasks. Our method, on the other hand, can consistently complete all the evaluated tasks from just a few demonstrations, with the notable exception of the Sweeping task. We hypothesise that this is due to major geometry differences between the demonstration and test objects that were not covered by the shape augmentation model.
5 Discussion

Limitations. We now share the main limitations of our method, in order of importance. 1) As with many similar works, we assume that segmented point clouds are available with sufficient observability to capture the task-relevant object parts, and thus we require three depth cameras. However, our experiments showed that with real-world hardware and noisy observations, our method is still highly effective for everyday tasks. 2) Our current implementation of gradient-based sampling is not yet suitable for reactive closed-loop control on today's computing hardware, as inference can be time-consuming (up to 15 seconds). 3) We model trajectories as sparse waypoints, which may not be suitable for tasks requiring more complex dynamics. However, in future work, our method could be extended to track a much denser trajectory, including velocities. 4) We assume that objects remain rigidly grasped during a trajectory. Therefore, whilst we have shown that our method works with deformable objects (the Hanging task), we have not yet extended this framework to objects which may deform during a task. 5) Our current formulation models a task as the interaction of 2 objects, which might not always be the case. However, we could extend our method to an arbitrary number of objects, if $O_B$ is used to represent an observation of the entire environment. 6) Training of our model relies on data generated using a heuristic method, which may introduce inaccuracies in the training samples and limit the model's precision.

Conclusions. We have proposed a method for few-shot imitation learning, which is able to perform real-world everyday tasks on previously unseen objects, from three or four demonstrations of that task on similar objects. We achieved this by designing a novel graph alignment strategy based on an energy-based model and in-context learning, which was empirically shown to outperform alternative methods. Our experiments and videos highlight that this is also a practical method which enables a robot to perform a task immediately after the demonstrations, without requiring any further data collection or any prior knowledge about the object or class, using just a single trained network.

Acknowledgments

The authors thank Norman Di Palo, Kamil Dreczkowski, Ivan Kapelyukh, Pietro Vitiello, Yifei Ren, Teyun Kwon, and Georgios Papagiannis for their valuable contributions through insightful discussions. Additionally, special thanks to Georgios Papagiannis for developing the robot controller used in our real-world experiments.

References

[1] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015.

[2] Y. Deng, J. Yang, and X. Tong. Deformed implicit field: Modeling 3D shapes with learned dense correspondence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10286–10296, 2021.

[3] Z. Mandi, F. Liu, K. Lee, and P. Abbeel. Towards more generalizable one-shot visual imitation learning. In 2022 International Conference on Robotics and Automation (ICRA), pages 2434–2444. IEEE, 2022.

[4] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4613–4619. IEEE, 2021.

[5] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate once, imitate immediately (DOME): Learning visual servoing for one-shot imitation learning.
In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8614–8621. IEEE, 2022.

[6] P. Vitiello, K. Dreczkowski, and E. Johns. One-shot imitation learning: A pose estimation perspective. In Conference on Robot Learning, 2023.

[7] L. Manuelli, W. Gao, P. Florence, and R. Tedrake. kPAM: Keypoint affordances for category-level robotic manipulation. In Robotics Research: The 19th International Symposium ISRR, pages 132–157. Springer, 2022.

[8] W. Gao and R. Tedrake. kPAM 2.0: Feedback control for category-level robotic manipulation. IEEE Robotics and Automation Letters, 6(2):2962–2969, 2021.

[9] Z. Qin, K. Fang, Y. Zhu, L. Fei-Fei, and S. Savarese. KETO: Learning keypoint representations for tool manipulation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 7278–7285. IEEE, 2020.

[10] P. R. Florence, L. Manuelli, and R. Tedrake. Dense object nets: Learning dense visual object descriptors by and for robotic manipulation. arXiv preprint arXiv:1806.08756, 2018.

[11] L. Manuelli, Y. Li, P. Florence, and R. Tedrake. Keypoints into the future: Self-supervised correspondence in model-based reinforcement learning. arXiv preprint arXiv:2009.05085, 2020.

[12] P. Florence, L. Manuelli, and R. Tedrake. Self-supervised correspondence in visuomotor policy learning. IEEE Robotics and Automation Letters, 5(2):492–499, 2019.

[13] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, et al. Transporter networks: Rearranging the visual world for robotic manipulation. In Conference on Robot Learning, pages 726–747. PMLR, 2021.

[14] V. Vosylius and E. Johns. Where to start? Transferring simple skills to complex environments. In Conference on Robot Learning, pages 471–481. PMLR, 2023.

[15] M. Sieb, Z. Xian, A. Huang, O. Kroemer, and K. Fragkiadaki. Graph-structured visual imitation. In Conference on Robot Learning, pages 979–989. PMLR, 2020.

[16] I. Kapelyukh and E. Johns. My house, my rules: Learning tidying preferences with graph neural networks. In Conference on Robot Learning, pages 740–749. PMLR, 2022.

[17] C. Wang, Y. Zhang, X. Zhang, Z. Wu, X. Zhu, S. Jin, T. Tang, and M. Tomizuka. Offline-online learning of deformation model for cable manipulation with graph neural networks. IEEE Robotics and Automation Letters, 7(2):5544–5551, 2022.

[18] D. Driess, J.-S. Ha, M. Toussaint, and R. Tedrake. Learning models as functionals of signed-distance fields for manipulation planning. In Conference on Robot Learning, pages 245–255. PMLR, 2022.

[19] J.-S. Ha, D. Driess, and M. Toussaint. Learning neural implicit functions as object representations for robotic manipulation. arXiv preprint arXiv:2112.04812, 2021.

[20] J. Urain, N. Funk, G. Chalvatzaki, and J. Peters. SE(3)-DiffusionFields: Learning cost functions for joint grasp and motion optimization through diffusion. arXiv preprint arXiv:2209.03855, 2022.

[21] I. Huang, Y. Narang, R. Bajcsy, F. Ramos, T. Hermans, and D. Fox. DefGraspNets: Grasp planning on 3D fields with graph neural nets. arXiv preprint arXiv:2303.16138, 2023.

[22] A. Simeonov, Y. Du, Y.-C. Lin, A. R. Garcia, L. P. Kaelbling, T. Lozano-Pérez, and P. Agrawal. SE(3)-equivariant relational rearrangement with neural descriptor fields. In Conference on Robot Learning, pages 835–846. PMLR, 2023.

[23] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural descriptor fields: SE(3)-equivariant object representations for manipulation.
In 2022 International Conference on Robotics and Automation (ICRA), pages 6394–6400. IEEE, 2022.
[24] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-specific cross-pose estimation for robot manipulation. In Conference on Robot Learning, pages 1783–1792. PMLR, 2023.
[25] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.
[26] C. Deng, O. Litany, Y. Duan, A. Poulenard, A. Tagliasacchi, and L. J. Guibas. Vector neurons: A general framework for SO(3)-equivariant networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12200–12209, 2021.
[27] L. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4460–4470, 2019.
[28] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[29] Y. Shi, Z. Huang, S. Feng, H. Zhong, W. Wang, and Y. Sun. Masked label prediction: Unified message passing model for semi-supervised classification. arXiv preprint arXiv:2009.03509, 2020.
[30] A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[31] Y. Du and I. Mordatch. Implicit generation and modeling with energy based models. Advances in Neural Information Processing Systems, 32, 2019.
[32] P. Florence, C. Lynch, A. Zeng, O. A. Ramirez, A. Wahid, L. Downs, A. Wong, J. Lee, I. Mordatch, and J. Tompson. Implicit behavioral cloning. In Conference on Robot Learning, pages 158–168. PMLR, 2022.
[33] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
[34] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
[35] D. Hendrycks and K. Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[36] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
Appendix
A Framework Overview
To learn our proposed conditional distribution of alignments as an energy-based model, we utilise various techniques to create the diverse dataset needed, pre-train an implicit occupancy network to capture local geometries of objects, and create a heterogeneous graph representation of object alignments. We discuss these techniques and their implementation details in the following sections, while the overview can be seen in Figure 7.
[Figure 7 graphic. Panel 1: a Vector Neurons PointNet encoder produces SE(3)-equivariant local geometry embeddings from a point cloud of an object separated into different local parts, combined with an Occupancy Net decoder to reconstruct the original object; a DIF-Net pipeline deforms objects while retaining a specific random relative alignment, creating a diverse dataset of consistent alignments. ShapeNet is utilised two-fold: 1) to learn SE(3)-equivariant embeddings representing the local geometry of objects, and 2) to create a large diverse dataset of consistent alignments between two objects. Panel 2: the created dataset and trained local encoder are used to learn a conditional distribution of alignments as an implicit energy function whose minima represent the desired alignment between objects, via a heterogeneous graph over demonstration and test alignment graphs processed by a graph transformer that outputs an energy. Edge types: "Same-Object" and "Cross-Object" edges use sinusoidal positional encodings; "Directional-Conditional" edges use learnable features.]
Figure 7: An overview of the techniques used in the design of our proposed framework.
B Local Encoder Network
The representation of object alignment A(O_A, O_B) as a heterogeneous graph is a crucial step in effectively capturing the relationship between the two objects. To achieve this, we begin by encoding the segmented point clouds of the objects as sets of feature and position pairs. The underlying assumption is that each feature vector can effectively represent the local geometry of a specific part of the object. By treating these feature vectors as nodes in a graph and connecting them with edges that represent their relative positions, we create a graph representation that enables the network to focus on the specific parts of the objects. By adopting this graph-based approach, we are able to shift the network's attention towards local information and individual parts of the objects, rather than relying solely on the global geometry of the entire objects. This localised representation facilitates more precise and targeted reasoning about the alignment between the objects, leading to improved performance in capturing the complex relationships and relative positions between the object parts.
To construct the local features, we first utilise the Furthest Point Sampling (FPS) algorithm to sample K points on the surface of the point cloud (K = 8 in our implementation). These sampled points serve as the centre positions p_i for the subsequent calculation of local embeddings. Next, we group all the points in the original point cloud according to their closest centroid and re-centre them around their respective centroids. This grouping process results in K different point clouds, each representing a distinct part of the object. To encode these local point clouds, we employ a shared PointNet model. This model takes each local point cloud as input and generates a feature vector F_i that describes the local neighbourhood around each centroid. Our PointNet model consists of an eight-layer MLP (Multi-Layer Perceptron) with skip connections, serving as the backbone for our local encoder. To introduce SO(3)-equivariance to the features, we incorporate Vector Neurons [26] into our network. This approach, as described in the Vector Neurons paper, helps ensure that the features maintain equivariance with respect to rotations in three-dimensional space.
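A minimal sketch of this grouping step (not the authors' code; the naive FPS helper and function names are illustrative, and in practice each re-centred group would be encoded by the shared Vector-Neuron PointNet):

```python
import torch

def farthest_point_sample(points: torch.Tensor, k: int) -> torch.Tensor:
    """Naive FPS over an (N, 3) point cloud: greedily pick k mutually distant points."""
    n = points.shape[0]
    selected = [0]
    dist = torch.full((n,), float("inf"))
    for _ in range(k - 1):
        dist = torch.minimum(dist, torch.norm(points - points[selected[-1]], dim=1))
        selected.append(int(dist.argmax()))
    return points[selected]

def group_local_regions(points: torch.Tensor, k: int = 8):
    """Assign every point to its nearest FPS centroid and re-centre each group,
    producing K local point clouds (one per object part), as described above."""
    centroids = farthest_point_sample(points, k)                  # (k, 3)
    assignment = torch.cdist(points, centroids).argmin(dim=1)     # nearest centroid
    groups = [points[assignment == i] - centroids[i] for i in range(k)]
    return centroids, groups   # feature F_i = shared PointNet applied to groups[i]
```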
Overall, the local encoder comprises approximately 1.7 million trainable parameters, allowing it to capture and encode the relevant local information from the point clouds.
[Figure 8 graphic: encoder-decoder pipelines mapping an initial point cloud to a set of local embeddings and back to a reconstructed point cloud, for examples (a) and (b).]
Figure 8: Examples of our trained auto-encoder when reconstructing a pan (a), and a bowl (b). Blue point clouds represent initial point cloud observations, yellow points represent sampled centroids, and red and green points represent the network prediction made for that point, occupied (red) and not occupied (green).
To enforce that the local embeddings indeed encode the local geometry, we pre-train them as an implicit occupancy network [27], where a decoder is given a query point and a local feature embedding and is asked to determine whether the query point lies on the surface of the encoded part of the object, D(F_i, q) → [0, 1]. The decoder is implemented as a PointNet model [34] with GeLU activation functions [35] (without Vector Neurons).
We utilise positional encoding [28] and express the encoded query point as γ(p_q) = (sin(2^0 π p_q), cos(2^0 π p_q), …, sin(2^{L−1} π p_q), cos(2^{L−1} π p_q)), where p_q is the position of the query point, and L is the number of frequencies used. In our experiments, we set L = 10.
To train the occupancy network as an auto-encoder (as presented in Figure 8), a synthetic dataset is generated, consisting of point clouds for randomly sampled ShapeNet objects and corresponding labelled query points obtained using the PySDF library. This dataset comprises a total of 100,000 samples. During the training process, two NVIDIA RTX 2080ti GPUs were utilised for computational acceleration. The training duration spanned a period of approximately 3 days. We used the AdamW [36] optimiser and scheduled our learning rate using the Cosine Annealing scheduler.
C Energy Based Model
To learn our proposed alignment distribution p(A^t_test | A^t_demo), we employ an energy-based approach and model the distribution as:
p(A_test | A_demo) = e^{−E(A_test, A_demo)} / Z    (3)
Here Z is a normalising constant. In practice, we approximate this otherwise intractable constant using counter-examples and minimise the negative log-likelihood of Equation 1.
C.1 Architecture
We are using heterogeneous graphs constructed using features described in Section B to represent the alignment between two objects, G({F^i_A, p^i_A}^K_i, {F^i_B, p^i_B}^K_i). Edges in the alignment graph are represented as relative positions between nodes expressed using positional encoding as:
e_ij = (sin(2^0 π (p_j − p_i)), cos(2^0 π (p_j − p_i)), …, sin(2^{L−1} π (p_j − p_i)), cos(2^{L−1} π (p_j − p_i)))    (4)
In our base model, we use L = 6.
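A minimal sketch of this sinusoidal encoding, covering both the query-point features (L = 10) and the relative-position edge features of Equation 4 (L = 6); the function name is illustrative:

```python
import torch

def positional_encoding(x: torch.Tensor, num_freqs: int) -> torch.Tensor:
    """Per-coordinate sinusoidal encoding:
    (sin(2^0 pi x), cos(2^0 pi x), ..., sin(2^{L-1} pi x), cos(2^{L-1} pi x))."""
    freqs = (2.0 ** torch.arange(num_freqs)) * torch.pi   # 2^l * pi for l = 0..L-1
    scaled = x.unsqueeze(-1) * freqs                      # (..., D, L)
    return torch.cat((scaled.sin(), scaled.cos()), dim=-1).flatten(start_dim=-2)

# Edge feature between nodes i and j (Equation 4), encoding the relative position:
# e_ij = positional_encoding(p_j - p_i, num_freqs=6)
# Occupancy-decoder query points use num_freqs=10.
```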
Nodes in the demonstration and test alignment graphs are connected with directional edges equipped with learnable embeddings, effectively propagating information about the demonstration alignments to the test alignment graph. Note that we are using heterogeneous graphs, meaning different edges (connecting nodes from the same object, target and grasped objects, and connecting demonstration and test graphs) have different types and will be processed with separate learnable parameters. Finally, to make predictions based on the connected graphs, we add an additional type of node to the graph, which aggregates the information from the test alignment graph. This node can be seen as a Class token, and each of the considered alignments in a batch (number of counter-examples + 1) is connected to a separate Class node. Having the designed graph structure, we use graph transformer convolutions, which can be viewed as a collection of cross-attention modules. These modules facilitate message passing between nodes in the graph, taking into account the specific types of nodes and edges in our heterogeneous graph representation. For a specific type of nodes and edges in the graph, the message passing and attention mechanism can be expressed as:
F′_i = W_1 F_i + Σ_{j∈N(i)} α_{i,j} (W_2 F_j + W_6 e_ij),    α_{i,j} = softmax( (W_3 F_i)^T (W_4 F_j + W_6 e_ij) / √d )    (5)
The embedding from the Energy node (or Class token) is then processed with a small MLP to produce the predicted energy.
Our base model is comprised of 4 graph transformer convolutions with 4 multi-head attention heads, each with a dimension of 64. The final MLP is composed of 2 layers (with dimensions 256) and GeLU activation functions [35]. The full model contains around 5.7M trainable parameters.
C.2 Training
To train our proposed energy model, we first need to create alignments of test objects used as counter-examples {Â^j_test}^{N_neg}_j. We do so by creating copies of G_test({F^i_A, p^i_A}^K_i, {F^i_B, p^i_B}^K_i), and applying SE(3) transformations to the nodes in the graph describing the grasped object. Note that demonstration alignment graphs do not need to be copied, as they are connected to the test alignment graph with directional edges, propagating information one way.
To actually transform the nodes in the graph corresponding to the grasped object, both the position and the feature vectors need to be transformed. Given a transformation T_noise and its corresponding rotation matrix R_noise, we update the graph nodes as:
[p̂_A, 1]^T = T_noise [p_A, 1]^T,    F̂_A = R_noise F_A    (6)
Note that this is possible because of the use of SE(3)-equivariant embeddings described in Section B. During training, we create 256 different Â_test alignments per batch to approximate Z, each created using a unique T_noise. All the counter-examples, as alternative graph alignments, are created directly on a GPU, facilitating an efficient training phase.
To calculate a set of transformations T_noise we use a combination of Langevin Dynamics (described in Section C.4) and uniform sampling at different scales. We start the training with uniform sampling in ranges of [−0.8, 0.8] metres for translation and [−π, π] radians for rotation. After N optimisation steps (10K in our implementation), we incorporate Langevin Dynamics sampling, which we perform every 5 optimisation steps. During this phase, we also reduce the uniform sampling range to [−0.1, 0.1] metres for translation and [−π/4, π/4] radians for rotation. Although creating negative samples using only Langevin Dynamics is sufficient, in practice, we found that our described sampling strategy leads to faster convergence and more stable training for this specific application of energy-based models.
All models were trained on a single NVIDIA GeForce 3080Ti GPU for approximately 1 day.
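A minimal sketch of the coarse negative-sampling phase and the node update of Equation 6 (not the authors' code; uniform per-axis Euler sampling is an assumption about how "uniform sampling in [−π, π]" is realised):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_negative_transforms(n, trans_range=0.8, rot_range=np.pi):
    """Sample n SE(3) perturbations T_noise used to build counter-example
    alignments (ranges shrink later in training, per Sec. C.2)."""
    T = np.tile(np.eye(4), (n, 1, 1))
    T[:, :3, :3] = Rotation.from_euler(
        "xyz", np.random.uniform(-rot_range, rot_range, (n, 3))).as_matrix()
    T[:, :3, 3] = np.random.uniform(-trans_range, trans_range, (n, 3))
    return T

def perturb_grasped_nodes(p_A, F_A, T):
    """Equation 6: positions take the full transform; features take the rotation
    only, which is valid because the embeddings are SO(3)-equivariant vector
    features of shape (..., 3)."""
    R, t = T[:3, :3], T[:3, 3]
    return p_A @ R.T + t, F_A @ R.T
```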
During the training of the proposed energy model, several important tricks were employed to ensure stability, efficiency, and smoothness of the energy landscapes for effective gradient-based optimisation. These tricks contribute to the overall training process and facilitate the convergence of the model. The following tricks were identified as particularly significant:
• L2 Regularisation: To prevent the logits from diverging towards positive or negative infinity, a small L2 regularisation term is added to the loss function. This regularisation term helps to control the magnitude of the logits and maintain stability during training.
• Spectral Normalisation: Spectral normalisation is applied to all layers of the network. It normalises the weights of the network using the spectral norm (the largest singular value) of each weight matrix. In our case, energy landscapes that were learnt without using spectral norms were unusable for gradient-based optimisation.
• L2 Gradient Penalties: Gradient penalties are applied to the feature vectors of edges connecting grasped and target objects. This technique imposes an L2 regularisation on the gradients, penalising large changes in the input space. By doing so, the energy landscape becomes smoother and more amenable to gradient-based optimisation.
• Pre-training on a Subset of the Data: When dealing with a large and diverse dataset, it is beneficial to initialise the network by pre-training it on a smaller subset of the training data. This pre-training process allows the gradients to flow in regions of the loss-function landscape that would otherwise be relatively flat. As a result, the network can start from a better initialisation point, accelerating the training process. In the specific case mentioned, pre-training on approximately 1,000 samples saved approximately 70% of the total training time.
• Dividing the search space between multiple models: We train two models that are exactly the same, but one is trained without rotations, while the other one is trained with rotations only. This reduces the search space each model needs to generalise over and prevents conflicting gradients during inference.
• Mixed negative sampling strategy: It allows the model to initially learn a coarse energy landscape (using random sampling), which is later refined using small perturbations and Langevin sampling.
• Regular checkpoint evaluation: It allows us to detect training instabilities and select a stable checkpoint with a smooth energy landscape suitable for gradient-based optimisation.
Although training energy models using a contrastive learning approach can be challenging and unstable, the discussed training techniques, together with the use of a joint graph representation that mitigates the need for the network to learn a highly non-linear mapping between observation and action spaces, lead to stable and efficient training. Figure 9 shows typical training logs of our model, including training loss and maximum and minimum predicted energies throughout training.
Figure 9: Training logs of our energy model (translation). Please note that the sudden jump in the training loss is due to the discussed change in the negative sampling strategy.
C.3 Scaling to the Number of Demonstrations
As discussed previously, nodes in the graph that belong to the same demonstration are densely connected using different types of edges. Nodes from demonstrated and test alignments are connected with directional edges (there are no edge connections between demonstrated alignments themselves). Because of this, we can connect demonstrated alignments to any number of parallel test samples and make energy predictions for all of them in a single forward pass. Therefore, the number of nodes in the graph grows linearly with the number of demonstrations, and the number of edges also grows linearly. More precisely, the number of edges in the graph grows as N × M × K, where N is the number of demonstrations, M is the number of parallel samples, and K is the number of nodes per demonstration.
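A back-of-envelope helper for this scaling (the exact node bookkeeping, e.g. one Class node per parallel sample, is an assumption for illustration):

```python
def graph_size(n_demos: int, m_parallel: int, k_nodes: int):
    """Counts matching Sec. C.3: nodes grow linearly in the number of
    demonstrations, and cross edges between demonstration and test alignment
    graphs grow as N * M * K (also linear in N for fixed M and K)."""
    nodes = (n_demos + m_parallel) * k_nodes + m_parallel  # + one Class node each
    demo_to_test_edges = n_demos * m_parallel * k_nodes
    return nodes, demo_to_test_edges
```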
Figure 10 shows how the number of nodes, the number of edges, memory usage, and the time required for a single forward pass depend on the number of demonstrations used.
[Figure 10 graphic: four plots of number of nodes, number of edges (×1000), forward pass time (ms), and memory usage (MB) versus the number of demonstrations (0–100), each shown for 2, 33, and 65 parallel samples.]
Figure 10: Graphs showing how various metrics of our heterogeneous graph representation scale with the number of demonstrations used.
C.4 Inference Optimisation
Assuming a learnt, previously described, energy-based model, our goal at inference is to use it to sample from the conditional distribution p(A_test | A_demo). We cannot directly sample alignments of objects A_test, but we can compute an SE(3) transformation T that, when applied to the grasped object, would result in an alignment between the objects that is within the distribution p(·). To solve Equation 2, we utilise an iterative gradient-based approach (Langevin Dynamics sampling). Each iteration step k in the optimisation process updates the nodes of the graph alignment representation corresponding to the grasped object as:
[p^{k+1}_A, 1]^T = T^k_update T_noise(ε^k) [p^k_A, 1]^T,    F^{k+1}_A = R^k_update R_noise(ε^k) F^k_A    (7)
Here, ε^k ∼ N(0, σ²_k) ∈ R^6 and T_noise is calculated using an exponential mapping to project it to SE(3) as T_noise(ε) = Expmap(ε). In practice, to calculate T_update ∈ SE(3) (and R_update ∈ SO(3)), we first transform the appropriate nodes in the graph using an identity transformation T_I ∈ SE(3) and calculate its gradient using back-propagation as ∇_{T_I} E(A_test(T_I ∘ O^k_A, O_B), A_demo) ∈ R^6. Finally, T_update is calculated by taking an exponential mapping of this gradient, as T^k_update = Expmap(−λ_k ∇_{T_I} E(·)) for a step size λ_k.
D Experiments
D.1 Task Definitions
We evaluate our approach on six different tasks: 1) Grasping. The goal is to grasp different pans by the handle, where success means the pan is lifted by the gripper. 2) Stacking. The goal is to stack two bowls, where success means one bowl remains inside the other bowl. 3) Sweeping. The goal is to sweep marbles into a dustpan with a brush, where success means that 2 out of the 3 marbles end up in the dustpan. 4) Hanging. The goal is to hang a cap onto a deformable stand, where success means the cap rests on the stand. 5) Inserting. The goal is to insert a bottle into a shoe, where success means the bottle stays upright in the shoe. 6) Pouring. The goal is to pour a marble into a mug, where success means the marble ends up in the mug.
D.2 Implementation Details for Real-World Experiments
Obtaining Point Clouds: To obtain point cloud observations for our real-world experiments we use 3 calibrated RealSense D415 depth cameras. To calibrate the extrinsics, we use a standard ChArUco board and the calibration implementation from the OpenCV library. Typical calibration errors that we observe are in the range of 2–5 mm. To combine observations from different cameras, we first projected separate point cloud observations to a common frame of reference (the frame of the robot base) using the calibrated extrinsics. We then concatenated the point clouds and used voxel downsampling (voxel size of 5 mm) to remove the redundant information.
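A minimal sketch of this fusion step, assuming Open3D point clouds and calibrated 4x4 camera-to-base transforms (not the authors' code):

```python
import open3d as o3d

def merge_camera_clouds(clouds, extrinsics, voxel_size=0.005):
    """Express each per-camera cloud in the robot-base frame, concatenate, and
    voxel downsample with 5 mm voxels to drop redundant points, as described
    above."""
    merged = o3d.geometry.PointCloud()
    for cloud, T_base_cam in zip(clouds, extrinsics):
        cloud.transform(T_base_cam)   # note: transforms the input in place
        merged += cloud
    return merged.voxel_down_sample(voxel_size)
```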
We first capture the point cloud of the workspace using the three cameras (with the gripper empty) to obtain the observation of the target object. We then place the object in the robot's gripper. The robot then moves the end-effector to the pre-defined position in which the grasped object is seen from 2 external cameras, and the observation of the grasped object is captured. If the task does not involve the grasped object (e.g. grasping), we use the point cloud of the gripper obtained in simulation. Typical point clouds that we obtained using such a procedure can be seen in Figure 11.
[Figure 11 graphic: typical point cloud observations for the Grasping, Stacking, Sweeping, Hanging, Inserting, and Pouring tasks.]
Figure 11: Typical point cloud observations obtained with this procedure for each of the six tasks.
Recording Demonstrations: We represent the demonstration trajectories as a series of waypoints of alignments between two objects. To obtain these waypoints of alignments of point clouds using kinesthetic teaching, we assume rigid grasps and utilise the forward kinematics of the robot. When the grasped object is moved to the desired waypoint, we record the pose of the end-effector and apply a transformation to the point cloud of the grasped object, such that it is transformed to the current pose. This mitigates the need to re-capture point cloud observations at each waypoint. Finally, all the recorded point clouds are transformed into the robot's end-effector frame. In this way, the predicted transformation can be directly applied to the robot's end-effector, reaching the predicted waypoint.
Executing the Predicted Trajectory: In this work, we model the trajectories as a series of sparse waypoints that need to be reached in sequence. Therefore, we use a standard position controller that interpolates the path between the waypoints and reaches them with constant velocity.
D.3 Discussion on Failure Cases
The failure cases of our method were mainly observed when the objects used have significantly different geometry from the ones used to provide demonstrations. This is evident from the performance of our method on the Sweeping Task, where the brushes used to provide demonstrations are significantly different from the ones used during testing (see Figure 12, top). The shape augmentation model we used to create the dataset of consistent alignments cannot create such differences in augmented objects, making them significantly out of distribution for our trained model. Figure 12, bottom, shows the types of deformations that could be created from the demonstration objects. Note that such failure cases could be mitigated by using more powerful shape augmentation models coming from the fast-moving 3D vision community.
[Figure 12 graphic: panels labelled "Objects used to provide demonstrations", "Objects used during inference", and "Deformations".]
Figure 12: Top - point cloud observations of brushes used for the Sweeping Task. Bottom - types of shape deformations produced by DIF-Net [2] from the objects used for demonstrations.
D.4 Exploring Demonstration Diversity
To explore the importance of diversity in the demonstrations, we created 3 different evaluation datasets, each with increasing diversity of objects. We vary the diversity of the objects by increasing the random scaling range, as well as the scale of the latent vector, which controls the amount of deformations applied by the DIF-Net model [2].
Examples of point cloud samples from the created datasets can be seen in Figure 13.
[Figure 13 graphic: context and test point cloud samples from the Low Diversity, Medium Diversity, and High Diversity datasets.]
Figure 13: Random samples from the 3 different datasets used for the data diversity experiment. Green point clouds represent object A, while red point clouds represent object B.
[Figure 14 graphic: rotational error (deg) versus the number of demonstrations (1, 2, 5, 10, and 15) for the Low, Medium, and High Diversity datasets.]
Figure 14: Rotational error based on the number of demonstrations for 3 different sets of diversities.
G_FEL3OkiR | Human-In-The-Loop Task and Motion Planning for Imitation Learning
Ajay Mandlekar1∗, Caelan Garrett1∗, Danfei Xu1,2, Dieter Fox1. ∗equal contribution, 1NVIDIA, 2Georgia Institute of Technology
Abstract: Imitation learning from human demonstrations can teach robots complex manipulation skills, but is time-consuming and labor intensive. In contrast, Task and Motion Planning (TAMP) systems are automated and excel at solving long-horizon tasks, but they are difficult to apply to contact-rich tasks. In this paper, we present Human-in-the-Loop Task and Motion Planning (HITL-TAMP), a novel system that leverages the benefits of both approaches. The system employs a TAMP-gated control mechanism, which selectively gives and takes control to and from a human teleoperator. This enables the human teleoperator to manage a fleet of robots, maximizing data collection efficiency. The collected human data is then combined with an imitation learning framework to train a TAMP-gated policy, leading to superior performance compared to training on full task demonstrations. We compared HITL-TAMP to a conventional teleoperation system — users gathered more than 3x the number of demos given the same time budget. Furthermore, proficient agents (75%+ success) could be trained from just 10 minutes of non-expert teleoperation data. Finally, we collected 2.1K demos with HITL-TAMP across 12 contact-rich, long-horizon tasks and show that the system often produces near-perfect agents. Videos and additional results at https://hitltamp.github.io .
Keywords: Imitation Learning, Task and Motion Planning, Teleoperation
1 Introduction
Learning from human demonstrations has emerged as a promising way to teach robots complex manipulation skills [1, 2]. However, scaling up this paradigm to real-world long-horizon tasks has been difficult — providing long manipulation demonstrations is time-consuming and labor intensive [3]. At the same time, not all parts of a task are equally challenging. For example, significant portions of complex manipulation tasks such as part assembly or making a cup of coffee are free-space motion and object transportation, which can be readily automated by non-learning approaches such as motion planning. However, planning methods generally require accurate dynamics models [4] and precise perception, which are often unavailable, limiting their effectiveness at contact-rich and low-tolerance manipulation. In this context, our work aims at solving real-world long-horizon manipulation tasks by combining the benefits of learning and planning approaches.
Our method focuses on augmenting Task and Motion Planning (TAMP) systems, which have been shown to be remarkable at solving long-horizon problems [5]. TAMP methods can plan behavior for a wide range of multi-step manipulation tasks by searching over valid combinations of a small number of primitive skills. Traditionally, each skill is hand-engineered; however, certain skills, such as closing a spring-loaded lid or inserting a rod into a hole, are prohibitively difficult to model in a productive manner. Instead, we use a combination of human teleoperation and closed-loop learning to implement just these select skills, keeping the rest automated. These skills use human teleoperation at data collection time and a policy trained from the data at deployment time.
Integrating TAMP systems and human teleoperation poses key technical challenges — special care must be taken to enable seamless handoff between them to ensure efficient use of human time.
[Figure 1 graphic: a task plan with goal In(pod, machine) ∧ Close(lid), decomposed into a sequence of actions from start to goal, where some actions are motion plans / motor primitives and others are policies trained from human data.]
Figure 1: Overview. HITL-TAMP decomposes a task (making coffee) into planning-based (TAMP) and learning-based (human imitation) segments.
To address these challenges, we introduce Human-in-the-Loop Task and Motion Planning (HITL-TAMP), a system that symbiotically combines TAMP with teleoperation. The system collects demonstrations by employing a TAMP-gated control mechanism — it trades off control between a TAMP system and a human teleoperator, who takes over to fill in gaps that TAMP delegates. Critically, human operators only need to engage at selected steps of a task plan when prompted by the TAMP system, meaning that they can manage a fleet of robots by asynchronously engaging with one demonstration session at a time while a TAMP system controls the rest of the fleet.
By soliciting human demonstrations only when needed, and allowing for a human to participate in multiple parallel sessions, our system greatly increases the throughput of data collection while lowering the effort needed to collect large datasets on long-horizon, contact-rich tasks. We combine our data collection system with an imitation learning framework that trains a TAMP-gated policy (as illustrated in Fig. 1) on the collected human data. We show that this leads to superior performance compared to collecting human demonstrations on the entire task, in terms of the amount of data and time needed for a human to teach a task to the robot, and the success rate of learned policies.
The main contributions of this paper are:
•We develop HITL-TAMP, an efficient data collection system for long-horizon manipulation tasks that synergistically combines and trades off control between a TAMP system and a human operator.
•HITL-TAMP contains novel components including (1) a mechanism that allows TAMP to learn planning conditions from a small number of demonstrations and (2) a queuing system that allows a demonstrator to manage a fleet of parallel data collection sessions.
•We conduct a study (15 users) to compare HITL-TAMP with a conventional teleoperation system. Users collected over 3x more demos with our system given the same time budget. Proficient agents (over 75% success) could be trained from just 10 minutes of non-expert teleoperation data.
•We collected 2.1K demos with HITL-TAMP across 12 contact-rich and long-horizon tasks, including real-world coffee preparation, and show that HITL-TAMP often produces near-perfect agents.
2 Preliminaries
Summary of Related Work. Several works have shown the value in learning robot manipulation with human demonstrations [6, 1, 2, 7, 8, 9, 10, 11, 8], in developing automatic control hand-offs between a human supervisor and an automated system for more effective data collection [12, 13, 14, 15, 16], and in combining learned and predefined skills [17, 18, 19, 20, 21]. Prior TAMP [5, 22, 4, 23] works have also integrated learning-based components [24, 25, 26, 27, 28, 29, 30, 31, 32] to make fewer assumptions on prior knowledge. See Appendix D for full related work.
Problem Statement. We consider a robot acting in a discrete-time Markov Decision Process (MDP) ⟨X, U, T(x′|x, u), R(x), P_0⟩ defined by state space X, action space U, transition distribution T, reward function R, and initial state distribution P_0. We assume we are given an offline dataset of N partial demonstration trajectories (collected via our HITL-TAMP system, see Sec. 3.3), D = {⟨x^i_0, u^i_0⟩, ⟨x^i_1, u^i_1⟩, …, x^i_{T_i}}^N_{i=1}. We train policies π with Behavioral Cloning [33] using the objective arg min_θ E_{(x,u)∈D} ‖π_θ(x) − u‖² (details in Appendix K).
We consider a TAMP policy π_t(u|x) for controlling the robot. It plans a sequence of actions that will be tracked using a feedback controller. We use the PDDLStream [23] planning framework, a logic-based action language that supports planning with continuous values, to model our TAMP domain. States and actions are described using predicates, Boolean functions, which can have discrete and continuous parameters. A predicate paired with values for its parameters is called a literal. Our TAMP domain uses the following parameters: o is an object, g ∈ SE(3) is a 6-DoF object grasp pose relative to the gripper, p ∈ SE(3) is an object placement pose, q ∈ R^d is a robot configuration with d DoFs, and τ is a robot trajectory comprised of a sequence of robot configurations.
The planning state s is a set of true literals for fluent predicates, predicates whose truth value can change over time. We define the following fluent predicates: AtPose(o, p) is true when object o is placed at placement p; AtGrasp(o, g) is true when object o is grasped using grasp g; AtConf(q) is true when the robot is at configuration q; Empty() is true when the robot's end effector is empty; Attached(o, o′) is true when object o is attached to object o′.
We use the Tool Hang task as a running example (see Fig. 5), where the robot must insert a frame into a stand and then hang a tool on the frame. The set of goal system states X* is expressed as a logical formula over literals. Let s_0 be the initial state and G be the goal formula:
s_0 = {AtPose(frame, p_f0), AtPose(tool, p_t0), AtPose(stand, p_s0), AtConf(q_0), Empty()}.
G = Attached(frame, stand) ∧ Attached(tool, frame) ∧ Empty().
Planning actions a are represented using action schemata. An action schema is defined by 1) a name, 2) a list of parameters, 3) a list of static (non-fluent) literal constraints (con) that valid parameter values satisfy, 4) a list of fluent literal preconditions (pre) that must hold to correctly execute the action, and 5) a list of fluent literal effects (eff) that specify changes to state. The move action advances the robot from configuration q_1 to q_2 via trajectory τ. The constraint Motion(q_1, τ, q_2) is satisfied if q_1 and q_2 are the start and end of τ. In the pick action, the constraint Grasp(o, g) holds if g is a valid grasp for object o, and the constraint Pose(o, p) holds if p is a valid placement for object o. The explicit constraint f(q) ∗ g = p represents kinematics, namely that forward kinematics f : R^d → SE(3) for the robot's gripper from configuration q, multiplied with grasp g, produces pose p.
move(q_1, τ, q_2)
  con: [Motion(q_1, τ, q_2)]
  pre: [AtConf(q_1), Safe(τ)]
  eff: [AtConf(q_2), ¬AtConf(q_1)]
pick(o, g, p, q)
  con: [Grasp(o, g), Pose(o, p), [f(q) ∗ g = p]]
  pre: [AtPose(o, p), Empty(), AtConf(q)]
  eff: [AtGrasp(o, g), ¬AtPose(o, p), ¬Empty()]
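An illustrative check of the kinematic constraint f(q) ∗ g = p from the pick schema (a sketch, not the PDDLStream API; `fk` and the tolerance are hypothetical stand-ins):

```python
import numpy as np

def pick_kinematics_satisfied(q, g, p, fk, tol=1e-3):
    """The gripper pose from forward kinematics `fk` (a user-supplied function
    returning a 4x4 homogeneous pose), composed with grasp g, must match the
    object pose p; g and p are also 4x4 homogeneous transforms."""
    predicted = fk(q) @ g
    pos_err = np.linalg.norm(predicted[:3, 3] - p[:3, 3])
    rot_err = np.linalg.norm(predicted[:3, :3] - p[:3, :3])  # Frobenius gap
    return pos_err < tol and rot_err < tol
```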
The limitations of the TAMP system are that, although it can readily observe the robot state, it does not have the ability to precisely estimate the environment and productively react to changes in it in real-time. Thus, it is advantageous to teleoperate skills that require 1) contact-rich interaction that is difficult to accurately model and 2) precision greater than that which the perception system can deliver. An example of 1) is the insertion phase of Tool Hang, which typically requires contacting the walls of the hole to align the frame, and an example of 2) is the hanging phase of Tool Hang, which requires precisely aligning the hole of the tool with the resting frame.
3 Integrating Human Teleoperation and TAMP
To make TAMP and conventional human teleoperation systems compatible, we describe crucial components that allow for seamless handoff between TAMP and a human operator. These include 1) a novel constraint learning mechanism that allows TAMP to plan to states that enable subsequent human teleoperation (Sec. 3.2) and 2) the core TAMP-gated teleoperation algorithm (Sec. 3.3).
3.1 Teleoperation Action Modeling
To account for human teleoperation during planning, we need an approximate model of the teleoperation process. We build on the high-level modeling approach of Wang et al. [25] by specifying an action schema for each skill, identifying which constraints can be modeled using classical techniques. Then, we extract the remaining constraints from a handful of teleoperation trajectories. Continuing our running example, we teleoperate the frame insertion and tool hang in the Tool Hang task.
The attach action models any skill that involves attaching one movable object to another object, for example, by placing, inserting, or hanging. Its parameters are a held object o, the current grasp g for o, the corresponding current pose p of o, the current robot configuration q, the subsequent pose p̂ of o, the subsequent robot configuration q̂, and the object o′ to be attached to. This action is stochastic as the human teleoperator "chooses" the resulting pose p̂ and configuration q̂ (indicated by the hat notation), which is modeled by the constraint HumanAttach(o, p̂, q̂, o′). Rather than explicitly model this constraint, we take an optimistic determinization of the outcome by assuming that the human produces a satisficing p̂, q̂ pair, without committing to specific numeric values.
attach(o, g, p, q, p̂, q̂, o′)
  con: [AttachGrasp(o, g), PreAttach(o, p, o′), [f(q) ∗ g = p], GoodAttach(o, p̂, o′), HumanAttach(o, p̂, q̂, o′)]
  pre: [AtGrasp(o, g), AtConf(q)]
  eff: [AtPose(o, p̂), Empty(), Attached(o, o′), AtConf(q̂), ¬AtGrasp(o, g), ¬AtConf(q)]
The key constraint is GoodAttach(o, p̂, o′), which is true if object o at pose p̂ satisfies the ground-truth goal attachment condition in G with object o′. The human teleoperator is tasked with reaching a pose p̂ that satisfies this constraint, which is a postcondition of the action. The goal of model learning is to represent the preconditions (Sec. 3.2) that facilitate this in a generative fashion.
3.2 Constraint Learning
[Figure 2 graphic: learned PreAttach(frame, stand) and PreAttach(tool, frame) constraint distributions extracted from ≤5 partial (skill) human demonstrations.]
Figure 2: Constraint learning. Example of learned attach conditions for the frame (left) and tool from a handful of demonstrations for the Tool Hang task.
To complete the action model, we learn the AttachGrasp and PreAttach constraints, which involve parameters in attach's preconditions. We bootstrap these constraint models from a few (∼3 in our setting) human demonstrations. These demonstrations only need to showcase the involved action. Through compositionality, these actions can be deployed in many new tasks without the need for retraining. In this work, because the set of objects is fixed, the constraints do not need to generalize across objects, so we simply use uniform distributions over pose datasets conditioned on task and objects. In settings where there are novel objects at test time, we could instead estimate these affordances across objects directly from observations [25, 34, 35] using more complicated (deep) generative models.
We define PreAttach(o, p, o′) to be true if p is a pose for object o immediately prior to the human achieving GoodAttach(o, p̂, o′). For each human demonstration, we start at the first state where GoodAttach is satisfied and then search backward in time for the first state where (1) the robot is holding object o and (2) objects o and o′ are at least δ centimeters apart. This minimum distance constraint ensures that o and o′ are not in contact, in a manner that is spatially consistent and robust to perception and control error. We log the relative pose p between o and o′ as a data point and continue iterating over human demonstrations to populate a dataset P_oo′ = {p | PreAttach(o, p, o′)}.
Similarly, we define AttachGrasp(o, g) to be true if g is a grasp for object o that allows for the human achieving GoodAttach(o, p̂, o′). Not all object grasps enable the human to satisfy the target condition, for example, a frame grasp on the tip that needs to be inserted. Similar to PreAttach, for each demonstration we log the relative pose between the robot end effector and object o at the first pre-contact state before satisfying GoodAttach, producing a dataset G_o = {g | AttachGrasp(o, g)}.
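A minimal sketch of the backward-in-time extraction just described (not the authors' code; all accessors are hypothetical callables over a recorded demonstration):

```python
def extract_preattach_pose(traj, good_attach, holding, separation, rel_pose, delta=0.05):
    """From the first state satisfying GoodAttach, walk backward to the first
    state where the robot holds o and objects o, o' are at least `delta`
    apart; the relative pose between o and o' there is a PreAttach datapoint."""
    t_attach = next(t for t, s in enumerate(traj) if good_attach(s))
    for t in range(t_attach, -1, -1):
        if holding(traj[t]) and separation(traj[t]) >= delta:
            return rel_pose(traj[t])
    return None  # no valid pre-contact state found in this demonstration
```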
3.3 TAMP-Gated Teleoperation
We now describe TAMP-gated teleoperation, where a TAMP system decides when to execute portions of a task, and when a human operator should complete a portion (full details in Appendix J). Each teleoperation episode consists of one or more handoffs where the TAMP system prompts a human operator to control a portion of a task, or where the TAMP system takes control back after it determines that the human has completed their segment.
[Figure 3 graphic: robots R1–R6 each running TAMP or teleoperation; a FIFO queue dequeues sessions and switches control to the single human operator.]
Figure 3: Queueing system. HITL-TAMP's queueing system allows a human teleoperator (bottom left) to manage a fleet of asynchronously-running data collection sessions (R1-R6).
Every task is defined by a goal formula G. On each TAMP iteration, it observes the current state s. If it satisfies G, the episode terminates; otherwise, the TAMP system solves for a plan ⃗a from the current state s to the goal G. TAMP subsequently issues joint position commands to carry out planned motions until reaching an action a requiring the human. Next, control switches into teleoperation mode, where the human has full 6-DoF control of the end effector. We use a smartphone interface similar to prior teleoperation systems [36, 37, 10]. The robot end effector is controlled using an Operational Space Controller [38]. The TAMP system monitors whether the state satisfies the planned action postconditions a.effects. Once satisfied, control switches back to the TAMP system, which replans.
4 Scaling Data Collection for Learning
Increasing Data Throughput with a Queueing System. Since the TAMP system only requires human assistance in small parts of an episode, a human operator has the opportunity to manage multiple robots and data collection sessions simultaneously. To this end, we propose a novel queueing system (Fig. 3, sketched below) allowing each operator to interact with a fleet of robots.
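A minimal single-machine sketch of this queueing pattern using Python threads (illustrative only; `tamp_step`, `needs_human`, and `teleoperate` are hypothetical callables, and the paper's actual implementation is analyzed in its Appendix I):

```python
import queue
import threading

request_queue = queue.Queue()   # FIFO of robots awaiting a human segment

def robot_process(robot_id, tamp_step, needs_human):
    """Each robot alternates between autonomous TAMP control and waiting in
    the queue for a human-controlled segment."""
    while True:
        tamp_step(robot_id)                 # TAMP plans and executes autonomously
        if needs_human(robot_id):
            done = threading.Event()
            request_queue.put((robot_id, done))
            done.wait()                     # block until the operator finishes

def human_process(teleoperate):
    """The single operator dequeues and serves one session at a time."""
    while True:
        robot_id, done = request_queue.get()
        teleoperate(robot_id)               # human completes the gated segment
        done.set()                          # control returns to TAMP
```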
We implement this by using several (N_robot) robot processes, a single human process, and a queue (more analysis in Appendix I). Each robot process runs asynchronously, and spends its time in 1 of 3 modes — (1) being controlled by the TAMP system, (2) waiting for human control, or (3) being controlled by the human. This allows the TAMP system to operate multiple robots in parallel. When the TAMP system wants to prompt the human for control, it enqueues the environment into the shared queue. The human process communicates with the human teleoperation device and sends control commands to one robot process at a time. When the human completes a segment, TAMP resumes control of the robot, and the human process dequeues the next session from the queue.
TAMP-Gated Policy Deployment. HITL-TAMP results in demonstrations that consist of TAMP-controlled parts and human-controlled parts — we train a policy with Behavioral Cloning [33] on the human portions (details in Appendix K). To deploy the learned agent, we use a TAMP-gated control loop that is identical to the handoff logic in Sec. 3.3, using the policy instead of the human (a sketch of the shared loop follows).
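A minimal sketch of this gating loop (illustrative interfaces only, not the authors' code): the same logic serves data collection and deployment, differing only in who `actor` is.

```python
def tamp_gated_episode(env, tamp, actor, goal_satisfied):
    """TAMP replans from the current state and executes what it can; delegated
    segments are run by `actor` (the human during collection, the learned
    policy at deployment) until the action's postconditions hold, after which
    TAMP takes control back and replans."""
    state = env.observe()
    while not goal_satisfied(state):
        plan = tamp.plan(state)                     # action sequence to the goal
        for action in plan:
            if not action.needs_actor:
                state = tamp.execute(action, env)   # e.g. tracked motion plan
            else:
                while not action.postconditions_hold(state):
                    state = env.step(actor.act(state))
                break                               # hand back to TAMP and replan
```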
5 Experiment Setup
Tasks. We chose evaluation tasks that are contact-rich and long-horizon, to validate that HITL-TAMP indeed combines the benefits of the two paradigms (see Fig. 4 and Fig. 5). We further evaluated HITL-TAMP on variants of tasks where objects are initialized in broad regions of the workspace, a difficult setting for imitation learning systems in the past. Full details are in Appendix E.
Pilot User Study. We conducted a pilot user study with 15 participants to compare our system (HITL-TAMP) to a conventional teleoperation system [36], where task demonstrations were collected without TAMP involvement. Each participant performed task demonstrations on 3 tasks (Coffee, Square (Broad), and Three Piece Assembly (Broad)) for 10 minutes on each system, totaling 60 minutes of data collection across the 3 tasks and 2 systems. Participants filled out a post-study survey to rank their experience with both systems. Each participant's number of successful demonstrations was recorded to evaluate the data throughput of each system, and agents were trained on each participant's demonstrations and across all participants' demonstrations (Sec. 6.1).
[Figure 4 graphic: (a) Square (b) Coffee (c) 3 Pc. Assembly (d) Tool Hang (e) Coffee Prep.]
Figure 4: Tasks. We use HITL-TAMP to collect demonstrations for contact-rich, long-horizon tasks.
Task | Demos (avg-user) | Demos (avg-novice) | Demos (all) | SR (avg-user) | SR (avg-novice) | SR (all)
Coffee (C) | 11.2 | 7.2 | 168.0 | 24.4 | 15.0 | 76.0
Coffee (HT) | 28.7 | 25.2 | 431.0 | 90.7 | 90.0 | 100.0
Square Broad (C) | 11.1 | 5.2 | 166.0 | 1.2 | 0.0 | 20.0
Square Broad (HT) | 49.8 | 41.8 | 747.0 | 80.0 | 77.5 | 98.0
Three Piece Assembly Broad (C) | 7.8 | 7.0 | 117.0 | 0.0 | 0.0 | 0.0
Three Piece Assembly Broad (HT) | 15.1 | 8.0 | 227.0 | 27.7 | 17.5 | 66.0
Table 1: User Study Data Collection and Policy Learning Results. We report the number of demos collected averaged across users (avg-user), averaged across novice users (avg-novice), and summed across all users (all). We also report the success rate of policies trained on per-user data (avg-user: averaged across all users, and avg-novice: averaged across novice users), and trained on all user data (all). Users collected more demonstrations using HITL-TAMP (HT) than the conventional system (C), and policy performance was vastly greater as well.
6 Experiment Results
We (1) present user study results to highlight HITL-TAMP's data collection efficiency (Sec. 6.1), (2) compare trained HITL-TAMP agents to policies trained from full task demonstrations (Sec. 6.2), and (3) deploy HITL-TAMP in the real world without precise perception (Sec. 6.3).
6.1 System Evaluation: User Study
We show that (1) HITL-TAMP allows participants to collect demonstrations much faster than conventional teleoperation, (2) we can train performant policies using data collected from users with varying system proficiency, (3) HITL-TAMP enables novice operators to collect high-quality demonstration data, and (4) HITL-TAMP requires less user effort than conventional teleoperation.
HITL-TAMP enables users to collect task demonstrations at a much higher rate than a conventional teleoperation system. As Table 1 shows, collectively, our 15 users gathered 2.5x more demonstrations with HITL-TAMP when compared to the conventional system on the Coffee task (431 vs. 168), 4.5x more on Square Broad (747 vs. 166), and nearly 2x more on Three Piece Assembly Broad (227 vs. 117). The high collection efficacy of HITL-TAMP was also reflected on a per-user basis — users averaged 28.7 demos on Coffee (vs. 11.2), 49.8 demos on Square Broad (vs. 11.1), and 15.1 demos on Three Piece Assembly Broad (vs. 7.8), during their 10-minute sessions.
HITL-TAMP enables performant policies to be trained from minutes of data. We used each person's 10-minute demonstrations to train a policy for each (user, task) pair with behavioral cloning. Agents trained on HITL-TAMP data vastly outperformed those trained from the conventional teleoperation data (Table 1) — agents achieved an average success rate of 90.7% on Coffee (vs. 24.4%), 80.0% on Square Broad (vs. 1.2%), and 27.7% on Three Piece Assembly Broad (vs. 0.0%).
HITL-TAMP enables training proficient agents from multi-user data. Prior work [39, 1] noted that imitation learning from multi-user demonstrations can be difficult. However, we found agents trained on the full set of multi-user HITL-TAMP data achieve high success rates (100.0%, 98.0%, and 66.0% on Coffee, Square Broad, and Three Piece Assembly Broad, respectively) compared to those trained on the full set of conventional teleoperation data (76.0%, 20.0%, 0.0%) (see Table 1). In fact, the worst per-user HITL-TAMP policy (10 minutes of data) outperformed the policy trained on the full set of conventional teleoperation data (150 minutes) on both Square Broad (56.0% vs. 20.0%) and Three Piece Assembly Broad (14.0% vs. 0.0%).
[Figure 5 graphic: task rollouts with TAMP segments and human segments, plus a table of real-world success rates.]
Task | Success Rate
Stack Three | 62.0
Coffee | 74.0
Coffee Broad | 66.0
Tool Hang | 64.0
Figure 5: (left) Real Tasks. Coffee (top), a version where the machine can be on either side (Coffee Broad, not shown), Stack Three (middle), and Tool Hang (bottom). We show TAMP in orange and the human in blue. (right) Real World Policy Performance. We collected 100 demonstrations on Stack Three, Coffee, and Coffee Broad and 50 demonstrations on Tool Hang with HITL-TAMP and report the policy performance in this table.
HITL-TAMP enables non-experts to demonstrate tasks efficiently. 4 of the 15 users in our study had no experience with teleoperation. Table 1 shows that they were able to collect far more data on average with HITL-TAMP (more than 3x on Coffee, more than 8x on Square Broad), and policies trained on their HITL-TAMP data achieved significantly higher success over the conventional system — 90.0% (vs. 15.0%) on Coffee, 77.5% (vs. 0.0%) on Square Broad, and 17.5% (vs. 0.0%) on Three Piece Assembly Broad.
HITL-TAMP results in a lower perceived workload compared to the conventional teleoperation system. Each participant completed a NASA-TLX survey [40] to rank their perceived workload for each system across 6 categories (100-point scale, increments of 5). Users found HITL-TAMP to require less mental demand (36% vs. 74%), less physical demand (29.7% vs. 63.7%), and less temporal demand (28.3% vs. 53.7%), while enabling higher overall performance (83.7% vs. 59.7%), with lower effort (29.3% vs. 75.7%) and lower frustration (30.0% vs. 65.0%).
6.2 Learning Results
We collect datasets with HITL-TAMP across 9 tasks (see Sec. 5) and show that highly capable policies can be trained from this data. The results compare favorably to training on equal amounts of demonstrations from a conventional teleoperation system.
Task | Time (min) | SR (low-dim) | SR (image)
Square | 13.5 | 100.0±0.0 | 100.0±0.0
Square Broad | 14.0 | 100.0±0.0 | 100.0±0.0
Coffee | 22.6 | 100.0±0.0 | 100.0±0.0
Coffee Broad | 28.8 | 99.3±0.9 | 96.7±0.9
Tool Hang | 48.0 | 80.7±1.9 | 78.7±0.9
Tool Hang Broad | 51.5 | 49.3±1.9 | 40.7±0.9
Three Piece Assembly | 30.0 | 100.0±0.0 | 100.0±0.0
Three Piece Assembly Broad | 34.9 | 84.7±4.1 | 82.0±1.6
Coffee Preparation | 78.4 | 96.0±3.3 | 100.0±0.0
Task | Time (min) | SR (im) | TAMP-gated SR (im)
Square (C) [1] | 25.0 | 82.0±0.0 | 100.0±0.0
Square (HT) | 13.5 | 100.0±0.0 | 100.0±0.0
Square Broad (C) | 48.0 | 15.3±0.0 | 94.7±0.9
Square Broad (HT) | 14.0 | 100.0±0.0 | 100.0±0.0
Three Piece Assembly (C) | 60.0 | 75.3±0.0 | 77.3±7.7
Three Piece Assembly (HT) | 30.0 | 100.0±0.0 | 100.0±0.0
Tool Hang (C) [1] | 80.0 | 67.3±0.0 | 82.0±2.8
Tool Hang (HT) | 48.0 | 78.7±0.9 | 78.7±0.9
Figure 6: (left) Results on HITL-TAMP datasets. We collected 200 demonstrations on each task with HITL-TAMP and trained low-dim and visuomotor TAMP-gated agents on each dataset. (right) Comparison to conventional teleoperation datasets. We trained both normal and TAMP-gated policies using conventional teleoperation (C) and compared them to HITL-TAMP (HT). Surprisingly, TAMP-gating makes policies trained on the data comparable to HITL-TAMP data, but data collection still involves significantly higher operator time.
HITL-TAMP is broadly applicable to a wide range of contact-rich and long-horizon tasks. Using HITL-TAMP, we had a single human operator collect 200 demonstrations on each of our tasks.
We then trained agents from this data on two observation spaces — low-dim observations, where agents directly observe the poses of relevant objects, and image observations, where agents observe a front-view RGB image and a wrist RGB image (as in [1]). Figure 6 shows that across both observation spaces, HITL-TAMP trains near-perfect agents on several tasks (Square, Coffee, Three Piece Assembly), including broad tasks with a wide distribution of object initialization (Square Broad, Coffee Broad, Three Piece Assembly Broad). HITL-TAMP also achieves high performance on the Tool Hang task (80.7% low-dim, 78.7% image), which is the hardest task in the robomimic benchmark [1]. It is also able to train performant agents (49.3% low-dim, 40.7% image) on a broad version of the task (Tool Hang Broad). Finally, HITL-TAMP trains near-perfect agents (96%) on the Coffee Preparation task, which consists of several stages (4 TAMP segments and 4 policy segments) involving low-tolerance mug placement, drawer grasping and opening, lid opening, and pod insertion and lid closing.
HITL-TAMP compares favorably to conventional teleoperation systems in terms of operator time and policy learning. Even when an equal number of task demonstrations are used, learned policies from HITL-TAMP still outperform those from conventional teleoperation. We run our comparison on 4 tasks — Square, Square Broad, Three Piece Assembly, and Tool Hang, where each task has 200 HITL-TAMP demos collected and 200 conventional-system demos. As Figure 6 shows, HITL-TAMP enabled collecting 200 demonstrations on each task in much shorter periods of time (additional analysis in Appendix F). Furthermore, agents trained on HITL-TAMP data outperform agents trained on conventional data (with the largest gap being 100.0% vs. 15.3% on Square Broad).
TAMP-gated control is a crucial component to train proficient policies. We took the 200-demonstration datasets collected via conventional teleoperation, trained the agents as normal, but deployed them with TAMP-gated control during policy evaluation. This dramatically increases their success rates and gives comparable results to HITL-TAMP data (see Figure 6). This shows that datasets consisting of entire human demonstration trajectories are compatible with TAMP-gated control. However, they remain time-consuming to collect, and HITL-TAMP greatly reduces the time needed.
6.3 Real Robot Validation
We apply HITL-TAMP to a physical robot setup with a robotic arm, a front-view camera, and a wrist-mounted camera. The only significant change from simulation is the need for perception to obtain pose estimates of the objects to populate the TAMP state. We do not assume any capability to track object poses in real-time. Instead, we allow the human to demonstrate (and the policy to imitate) behaviors from partial observations (RGB cameras). We collected 100 demonstrations for each of 3 tasks — Stack Three, Coffee, and Coffee Broad, and 50 demonstrations on Tool Hang, and report policy learning results across 50 evaluations for each task (25 for Tool Hang) (see Fig. 5). Our TAMP-gated agent achieves 62% on Stack Three, 74% on Coffee, 66% on Coffee Broad (72% with the machine on the right side of the table, and 60% with the machine on the left side), and 64% on Tool Hang (as opposed to the 3% from 200 human demonstrations in prior work [1]).
7 Limitations
See Appendix C for full limitations. We assume tasks can be described in PDDLStream and that human teleoperators can demonstrate them.
The tasks in this work focus on tabletop domains with limited object variety — future work could scale HITL-TAMP to more diverse settings. Currently, HITL-TAMP requires prior information (at a high level) on which task portions will be difficult for TAMP. We also assume access to coarse object models and approximate pose estimation to conduct TAMP segments in the real world. Future work could relax these assumptions by integrating perception uncertainty estimates, and extending TAMP to not require object models [35].
8 Conclusion
We presented a new approach to teach robots complex manipulation skills through a hybrid strategy of automated planning and human control. Our system, HITL-TAMP, collects human demonstrations using a TAMP-gated control mechanism and learns preimage models of human skills. This allows for a human to efficiently supervise a team of worker robots asynchronously. The combination of TAMP and teleoperation in HITL-TAMP results in improved data collection and policy learning efficiency compared to collecting human demonstrations on the entire task.
Acknowledgments
This work was made possible due to the help and support of Sandeep Desai (robot hardware), Ravinder Singh (IT), Alperen Degirmenci (compute cluster), Yashraj Narang (valuable discussions), Anima Anandkumar (access to robot hardware), Yifeng Zhu (robot control framework [41]), and Shuo Cheng (drawer design used in Coffee Preparation task). We also thank all of the participants of our user study for contributing their valuable time and feedback on their experience.
References
[1] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese, Y. Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation. In Conference on Robot Learning (CoRL), 2021.
[2] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[3] H. Ravichandar, A. S. Polydoros, S. Chernova, and A. Billard. Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems, 3:297–330, 2020.
[4] M. A. Toussaint, K. R. Allen, K. A. Smith, and J. B. Tenenbaum. Differentiable physics and stable modes for tool-use and manipulation planning. 2018.
[5] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[6] T. Zhang, Z. McCarthy, O. Jow, D. Lee, K. Goldberg, and P. Abbeel. Deep imitation learning for complex manipulation tasks from virtual reality teleoperation. arXiv preprint arXiv:1710.04615, 2017.
[7] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991–1002. PMLR, 2022.
[8] C. Lynch, M. Khansari, T. Xiao, V. Kumar, J. Tompson, S. Levine, and P. Sermanet. Learning latent plans from play. In Conference on Robot Learning, pages 1113–1132. PMLR, 2020.
[9] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence. Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407, 2022.
[10] A. Mandlekar, D. Xu, R. Martín-Martín, S. Savarese, and L. Fei-Fei.
Learning to generalize across long-horizon tasks from human demonstrations. arXiv preprint arXiv:2003.06085, 2020.
[11] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[12] R. Hoque, A. Balakrishna, C. Putterman, M. Luo, D. S. Brown, D. Seita, B. Thananjeyan, E. Novoseller, and K. Goldberg. LazyDAgger: Reducing context switching in interactive imitation learning. In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), pages 502–509. IEEE, 2021.
[13] R. Hoque, L. Y. Chen, S. Sharma, K. Dharmarajan, B. Thananjeyan, P. Abbeel, and K. Goldberg. Fleet-DAgger: Interactive robot fleet learning with scalable human supervision. In Conference on Robot Learning, pages 368–380. PMLR, 2023.
[14] J. Zhang and K. Cho. Query-efficient imitation learning for end-to-end autonomous driving. arXiv preprint arXiv:1605.06450, 2016.
[15] R. Hoque, A. Balakrishna, E. Novoseller, A. Wilcox, D. S. Brown, and K. Goldberg. ThriftyDAgger: Budget-aware novelty and risk gating for interactive imitation learning. arXiv preprint arXiv:2109.08273, 2021.
[16] S. Dass, K. Pertsch, H. Zhang, Y. Lee, J. J. Lim, and S. Nikolaidis. PATO: Policy assisted teleoperation for scalable robot data collection. arXiv preprint arXiv:2212.04708, 2022.
[17] T. Silver, K. Allen, J. Tenenbaum, and L. Kaelbling. Residual policy learning. arXiv preprint arXiv:1812.06298, 2018.
[18] T. Johannink, S. Bahl, A. Nair, J. Luo, A. Kumar, M. Loskyll, J. A. Ojea, E. Solowjow, and S. Levine. Residual reinforcement learning for robot control. In 2019 International Conference on Robotics and Automation (ICRA), pages 6023–6029. IEEE, 2019.
[19] A. Kurenkov, A. Mandlekar, R. Martin-Martin, S. Savarese, and A. Garg. AC-Teach: A Bayesian actor-critic method for policy learning with an ensemble of suboptimal teachers. arXiv preprint arXiv:1909.04121, 2019.
[20] O. Mees, J. Borja-Diaz, and W. Burgard. Grounding language with visual affordances over unstructured data. arXiv preprint arXiv:2210.01911, 2022.
[21] E. Valassakis, N. Di Palo, and E. Johns. Coarse-to-fine for sim-to-real: Sub-millimetre precision across wide task spaces. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5989–5996. IEEE, 2021.
[22] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In ICRA, 2011.
[23] C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling. PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In Proceedings of the International Conference on Automated Planning and Scheduling, volume 30, pages 440–448, 2020.
[24] G. Konidaris, L. P. Kaelbling, and T. Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research, 61:215–289, 2018.
[25] Z. Wang, C. R. Garrett, L. P. Kaelbling, and T. Lozano-Pérez. Learning compositional models of robot skills for task and motion planning. The International Journal of Robotics Research, 40(6-7):866–894, 2021.
[26] J. Liang, M. Sharma, A. LaGrassa, S. Vats, S. Saxena, and O. Kroemer. Search-based task planning with learned skill effect models for lifelong robotic manipulation. In 2022 International Conference on Robotics and Automation (ICRA), pages 6351–6357. IEEE, 2022.
[27] H. M. Pasula, L. S. Zettlemoyer, and L. P. Kaelbling.
Learning symbolic models of stochastic domains. Journal of Artificial Intelligence Research, 29:309–352, 2007.
[28] T. Silver, R. Chitnis, J. Tenenbaum, L. P. Kaelbling, and T. Lozano-Pérez. Learning symbolic operators for task and motion planning. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3182–3189. IEEE, 2021.
[29] R. Chitnis, D. Hadfield-Menell, A. Gupta, S. Srivastava, E. Groshev, C. Lin, and P. Abbeel. Guided search for task and motion plans using learned heuristics. In ICRA. IEEE, 2016.
[30] B. Kim, L. Shimanuki, L. P. Kaelbling, and T. Lozano-Pérez. Representation, learning, and planning algorithms for geometric task and motion planning. IJRR, 41(2), 2022.
[31] S. Cheng and D. Xu. Guided skill learning and abstraction for long-horizon manipulation. arXiv preprint arXiv:2210.12631, 2022.
[32] T. Silver, A. Athalye, J. B. Tenenbaum, T. Lozano-Perez, and L. P. Kaelbling. Learning neuro-symbolic skills for bilevel planning. arXiv preprint arXiv:2206.10680, 2022.
[33] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pages 305–313, 1989.
[34] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoF grasp generation in cluttered scenes. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13438–13444. IEEE, 2021.
[35] A. Curtis, X. Fang, L. P. Kaelbling, T. Lozano-Pérez, and C. R. Garrett. Long-horizon manipulation of unknown objects via task and motion planning with estimated affordances. In 2022 International Conference on Robotics and Automation (ICRA), pages 1940–1946. IEEE, 2022.
[36] A. Mandlekar, Y. Zhu, A. Garg, J. Booher, M. Spero, A. Tung, J. Gao, J. Emmons, A. Gupta, E. Orbay, S. Savarese, and L. Fei-Fei. RoboTurk: A crowdsourcing platform for robotic skill learning through imitation. In Conference on Robot Learning, 2018.
[37] A. Mandlekar, J. Booher, M. Spero, A. Tung, A. Gupta, Y. Zhu, A. Garg, S. Savarese, and L. Fei-Fei. Scaling robot supervision to hundreds of hours with RoboTurk: Robotic manipulation dataset through human reasoning and dexterity. arXiv preprint arXiv:1911.04052, 2019.
[38] O. Khatib. A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal on Robotics and Automation, 3(1):43–53, 1987.
[39] A. Mandlekar, F. Ramos, B. Boots, S. Savarese, L. Fei-Fei, A. Garg, and D. Fox. IRIS: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data. In IEEE International Conference on Robotics and Automation (ICRA), pages 4414–4420. IEEE, 2020.
[40] S. G. Hart and L. E. Staveland. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology, volume 52, pages 139–183. Elsevier, 1988.
[41] Y. Zhu, A. Joshi, P. Stone, and Y. Zhu. VIOLA: Imitation learning for vision-based manipulation with object proposal priors. 6th Annual Conference on Robot Learning, 2022.
[42] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. CALVIN: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics and Automation Letters, 7(3):7327–7334, 2022.
[43] A. Mandlekar, D. Xu, R. Martín-Martín, Y. Zhu, L. Fei-Fei, and S. Savarese. Human-in-the-loop imitation learning using remote teleoperation. arXiv preprint arXiv:2012.06733, 2020.
[44] K. Van Wyk, M. Culleton, J. Falco, and K. Kelly.
Comparative peg-in-hole testing of a force-based manipulation controlled robotic hand. IEEE Transactions on Robotics, 34(2):542–549, 2018.
[45] H. Park, J. Park, D.-H. Lee, J.-H. Park, and J.-H. Bae. Compliant peg-in-hole assembly using partial spiral force trajectory with tilted peg posture. IEEE Robotics and Automation Letters, 5(3):4447–4454, 2020.
[46] M. J. McDonald and D. Hadfield-Menell. Guided imitation of task and motion planning. In Conference on Robot Learning, pages 630–640. PMLR, 2022.
[47] M. Kelly, C. Sidrane, K. Driggs-Campbell, and M. J. Kochenderfer. HG-DAgger: Interactive imitation learning with human experts. In 2019 International Conference on Robotics and Automation (ICRA), pages 8077–8083. IEEE, 2019.
[48] J. Spencer, S. Choudhury, M. Barnes, M. Schmittle, M. Chiang, P. Ramadge, and S. Srinivasa. Learning from interventions: Human-robot interaction as both explicit and implicit feedback. In 16th Robotics: Science and Systems, RSS 2020. MIT Press Journals, 2020.
[49] Q. Li, Z. Peng, and B. Zhou. Efficient learning of safe driving policy via human-AI copilot optimization. In International Conference on Learning Representations, 2021.
[50] J. S. Warm, R. Parasuraman, and G. Matthews. Vigilance requires hard mental work and is stressful. Human Factors, 50(3):433–441, 2008.
[51] K. Menda, K. Driggs-Campbell, and M. J. Kochenderfer. EnsembleDAgger: A Bayesian approach to safe imitation learning. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5041–5048. IEEE, 2019.
[52] T. Mandel, Y.-E. Liu, E. Brunskill, and Z. Popović. Where to add actions in human-in-the-loop reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[53] A. Jonnavittula and D. P. Losey. Learning to share autonomy across repeated interaction. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1851–1858. IEEE, 2021.
[54] R. Tedrake, M. Fallon, S. Karumanchi, S. Kuindersma, M. Antone, T. Schneider, T. Howard, M. Walter, H. Dai, R. Deits, et al. A summary of Team MIT's approach to the Virtual Robotics Challenge. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 2087–2087. IEEE, 2014.
[55] R. Luo, C. Wang, E. Schwarm, C. Keil, E. Mendoza, P. Kaveti, S. Alt, H. Singh, T. Padir, and J. P. Whitney. Towards robot avatars: Systems and methods for teleinteraction at Avatar XPRIZE semi-finals. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7726–7733. IEEE, 2022.
[56] J. M. Marques, N. Patrick, Y. Zhu, N. Malhotra, and K. Hauser. Commodity telepresence with the AVATRINA nursebot in the ANA Avatar XPRIZE semifinals. In RSS 2022 Workshop on "Towards Robot Avatars: Perspectives on the ANA Avatar XPRIZE Competition", 2022.
[57] H. Le, N. Jiang, A. Agarwal, M. Dudík, Y. Yue, and H. Daumé III. Hierarchical imitation and reinforcement learning. In International Conference on Machine Learning, pages 2917–2926. PMLR, 2018.
[58] K. Shiarlis, M. Wulfmeier, S. Salter, S. Whiteson, and I. Posner. TACO: Learning task decomposition via temporal alignment for control. In International Conference on Machine Learning, pages 4654–4663. PMLR, 2018.
[59] Y. Zhu, J. Wong, A. Mandlekar, and R. Martín-Martín. robosuite: A modular simulation framework and benchmark for robot learning. In arXiv preprint arXiv:2009.12293, 2020.
[60] A. Mandlekar, S. Nasiriany, B. Wen, I. Akinola, Y. Narang, L. Fan, Y. Zhu, and D.
Fox. MimicGen: A data generation system for scalable robot learning using human demonstrations. In Conference on Robot Learning (CoRL), 2023.
[61] A. S. Morgan, B. Wen, J. Liang, A. Boularias, A. M. Dollar, and K. Bekris. Vision-driven compliant manipulation for reliable, high-precision assembly tasks. RSS, 2021.
[62] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, 1996.
[63] Y. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kramár, R. Hadsell, N. de Freitas, et al. Reinforcement and imitation learning for diverse visuomotor skills. arXiv preprint arXiv:1802.09564, 2018.
[64] C. Wang, R. Wang, A. Mandlekar, L. Fei-Fei, S. Savarese, and D. Xu. Generalization through hand-eye coordination: An action space for learning spatially-invariant visuomotor control. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8913–8920. IEEE, 2021.
[65] C. Wang, C. Pérez-D'Arpino, D. Xu, L. Fei-Fei, K. Liu, and S. Savarese. Co-GAIL: Learning diverse strategies for human-robot collaboration. In Conference on Robot Learning, pages 1279–1290. PMLR, 2022.
[66] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. MimicPlay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023.

Appendix

A Table of Contents

• FAQ (Appendix B): answers to some common questions
• Limitations (Appendix C): more thorough list and discussion of HITL-TAMP limitations
• Related Work (Appendix D): discussion on related work
• Tasks (Appendix E): full details on tasks and portions handled by TAMP
• Additional Data Throughput Comparisons (Appendix F): additional comparisons on data collection times between HITL-TAMP and conventional teleoperation
• Robustness to Pose Error (Appendix G): analysis of HITL-TAMP robustness to incorrect object pose estimates
• Demonstration Statistics (Appendix H): statistics for collected datasets
• Queueing System Analysis (Appendix I): analysis on how the size of the fleet influences data throughput
• Additional Details on TAMP-Gated Teleoperation (Appendix J): full details on how TAMP-gated teleoperation works
• Policy Training Details (Appendix K): details on how policies were trained from HITL-TAMP datasets with imitation learning
• Low-Dim Policy Training Results (Appendix L): full results for agents trained on low-dim observation spaces (image agents presented in main text)
• TAMP Success Analysis (Appendix M): analysis of TAMP success rates and whether policy evaluations could be biased
• Additional Details on Conventional Teleoperation System (Appendix N): additional details on the conventional teleoperation system and why it is a representative baseline
• Additional User Study Details (Appendix O): additional details on how the user study was conducted

B Frequently Asked Questions (FAQ)

1. How did you select those specific baselines and ablations in Sec. 6?
Our experiments showcase the capabilities of HITL-TAMP as (1) a scalable demonstration collection system and (2) an efficient learning and control framework. To show its value in collecting human demonstrations over an alternative, we compared it extensively against a widely-adopted conventional teleoperation paradigm used in prior works that collect and learn from human demonstrations [1, 6, 2, 7, 42, 20, 9, 36, 37, 43, 10, 15, 16, 11] (see Table 1 and Fig. 6).
To show its value in learning policies for manipulation tasks, we investigated the value of the core component, the TAMP-gated control mechanism (described in Appendix J). We showed that even policies trained on conventional teleoperation data benefit substantially from incorporating the TAMP-gated control mechanism (Fig. 6). Our TAMP-gated control is a novel control algorithm made possible by key technical components of HITL-TAMP (as described in Sec. 3).
There are other systems that are designed for specific contact-rich manipulation (such as peg insertion [44, 45]), but HITL-TAMP was not designed to be specialized for any specific task. Rather, it was meant to be a general-purpose system that can be applied to any contact-rich, long-horizon manipulation task, as long as the task can be demonstrated by a human operator and described in PDDLStream.

2. How does this work compare with other works that combine imitation learning and TAMP?
Prior works, such as [46], trained agents in simulation to imitate demonstration data provided by a TAMP supervisor in simulation. In this way, during deployment, an agent can operate without privileged information (such as object poses) required by TAMP. However, this setting makes a strong assumption that the TAMP system can already solve the target tasks. By contrast, our work extends a TAMP system's capabilities using an agent trained on human demonstration segments collected by HITL-TAMP (training details in Appendix K) in order to solve complex contact-rich tasks in the real world. Training an agent on the TAMP segments collected by HITL-TAMP in order to enable TAMP-free policy deployments is an exciting application for future work. However, it is orthogonal to the main contributions in this paper.

3. What are the trade-offs between the effort to provide demos and the effort to design models and controllers used in TAMP?
Collecting a large number of human demos can be labor- and time-intensive [11, 7, 37], but extensive modeling of a task for TAMP can similarly be time-consuming. Our system achieves a good tradeoff by lessening the modeling burden for TAMP (deferring difficult task segments to the human) and lessening the human operator burden (only asking them to operate small segments of a task). When deploying HITL-TAMP (especially in real-world settings), there is significant flexibility in deciding what information is available to the TAMP system in order to automate portions of a task, and which portions of a task should instead be deferred to a human operator (or trained agent).

4. How does the TAMP system determine which parts of a task plan require a human operator?
We formalize human-teleoperated TAMP skills in Sec. 3.1. While their discrete structure is provided by a human (e.g., which objects are involved), our novel action constraint learning technique (Sec. 3.2) characterizes their continuous action parameters. Human modelers have flexibility in deciding which skills should be teleoperated based on the contact-richness and required precision of the interaction. In our experiments, we used a prior understanding of the TAMP system and the limits of planners and perception to determine which parts would require human teleoperation. Other practical alternatives include using uncertainty estimates from perception, or directly applying TAMP to tasks of interest and observing sections of failure. Fig. E.1 (in Appendix E) showcases the parts of each task that are handled by the TAMP system and the parts that are handled by the human (or trained agent).
5. What assumptions are needed to apply HITL-TAMP to real-world settings, as opposed to simulation?
Typically, TAMP systems place a high burden on real-world perception, as accurate perception and dynamics models are often needed by TAMP for planning. Part of the motivation of our work was to reduce this requirement. While we do assume knowledge of crude object models and the ability to associate objects (see Sec. 6.3), we use a very simple perception pipeline in this work. We show that this simple pipeline suffices, even for the challenging Tool Hang task in the real world, since a human or an end-to-end trained policy handles the most challenging, contact-rich interactions. See Appendix G for additional validation that HITL-TAMP can tolerate noisy perception.

6. Why are some of the settings for the real-world Tool Hang task different from the other real-world tasks?
The data collection and policy learning methodology are identical to the other tasks, but there are a few minor differences. We used an increased resolution of 240x240 for the RGB images (instead of 120x120) due to the need for high-precision manipulation. We also excluded the wrist view in observations provided to the trained agent, since we found that it was completely occluded during the human portions of the task. Finally, we evaluated our agent over 25 episodes (instead of the 50 evaluation episodes used for the other tasks), because policy evaluation for this task is significantly more time-consuming, and obtained a task success rate of 64%, along with a frame insertion rate of 88%.

7. Why are TAMP plans carried out with a joint position controller, while human teleoperation and learned policies use an OSC controller?
Our TAMP system creates plans directly in joint space, so we are able to carry out and track motion plans with higher fidelity by using a joint position controller. On the other hand, human teleoperation requires an end-effector controller (we use OSC [38]) to provide an intuitive mapping between the user device and robot control. Consequently, we switch between these two controllers depending on whether the TAMP system or the human is operating the robot. See Appendix J for more information.

C Limitations

In this section, we discuss some limitations of HITL-TAMP, which future work can address.
1. Applicable tasks. Our general-purpose system can be deployed on any tasks that (1) can be described in PDDLStream and (2) human operators can demonstrate. We did not engineer the system for any specific task — our system greatly extends the set of tasks that can be solved when compared to TAMP alone.
2. Task variety. The tasks in this work are focused on tabletop domains, and there is limited object variety in each task. Scaling HITL-TAMP to work for more scenes and objects requires a richer set of assets and scenes (in simulation) and a more robust perception pipeline in the real world.
3. Prior information on what is difficult for TAMP. HITL-TAMP requires prior information (at a high level) on which task portions will be difficult for TAMP. Being able to automatically identify when human demonstrations are needed (e.g., based on uncertainty estimates from perception) is left for future work.
4. Perception for TAMP. We assume access to coarse object models and approximate pose estimation in order to conduct the TAMP segments.
Future work could relax this assumption by integrating TAMP methods that do not require object models [35].

D Related Work

D.1 Demonstration Collection Systems for Robot Manipulation

Recent studies have shown the effectiveness of teaching robots manipulation skills through human demonstration [6, 1, 2, 7, 8, 9]. High-quality, large-scale demonstrations are crucial to this success [2]. Although recent advancements have made demonstration collection systems more scalable and user-friendly [6, 36], collecting a substantial amount of high-quality, long-horizon demonstrations remains time-consuming and labor-intensive [2]. On the other hand, intervention-based systems [47, 43, 48, 49] allow the demonstrator to proactively correct for near-failure cases. However, such systems require users to constantly monitor robot task executions, which is equally time-consuming and sometimes more cognitively demanding than demonstrating a task [50]. Our system uses a TAMP-gated mechanism that automatically switches control between the robot and the demonstrator. The mechanism also enables a user to demonstrate for multiple sessions asynchronously, dramatically increasing the throughput of task demonstration.

A number of recent works have also investigated automatic control hand-offs in the context of online imitation learning [12, 13, 14, 15, 16, 51, 52, 53]. These works have largely focused on iteratively improving a single learned policy, and the gating mechanisms rely on predicting task performance and action uncertainty, which are often policy- and data-specific. Our work instead proposes to augment a TAMP system with imitation-learned policies. The symbolic abstractions of the TAMP system readily delineate TAMP's capabilities and can be used to determine the conditions for control hand-offs.

HITL-TAMP also acts as a TAMP-assisted teleoperation system. However, unlike most prior works in assisted robot teleoperation, in which humans provide high-level guidance for low-level autonomous control [54, 55, 56], HITL-TAMP focuses on allowing human teleoperators to "fill the gap" for a TAMP system to complete goal-directed tasks, and on enabling the system to become more autonomous by learning skills from the human demonstrations.

D.2 Learning for Task and Motion Planning

Task and Motion Planning (TAMP) is a powerful approach for solving challenging manipulation tasks by breaking them into smaller, easier-to-solve symbolic-continuous search problems [5, 22, 4, 23]. However, TAMP requires prior knowledge of skills and environment models, making it unsuitable for contact-rich tasks where hand-defining models is difficult. Recent works have proposed to learn environment dynamics models [24, 25, 26], skill operator models [27, 28], and skill samplers [29, 30]. However, these methods still require a complete set of hand-crafted skills. Closest to our work are LEAGUE [31] and Silver et al. [32], which learn TAMP-compatible skills. However, both works are limited in their real-world applicability. LEAGUE relies on a hand-defined TAMP plan sampler and expensive RL procedures to learn skills in simulation, while Silver et al. requires hard-coded demonstration policies that can already solve the target tasks.
Our work instead leverages human demonstrations to both train visuomotor skills and inform TAMP plan sampling. We empirically show that HITL-TAMP can efficiently solve challenging tasks such as making coffee in the real world.

D.3 Imitation Learning from Human Demonstrations

Imitation learning techniques based on deep neural networks have shown remarkable performance in solving real-world manipulation tasks [6, 1, 10, 2, 7, 11]. We take a data-centric view [8, 2, 11] to scaling up imitation learning — HITL-TAMP speeds up demonstration collection for a wide range of contact-rich manipulation tasks. A trained HITL-TAMP also acts as a hierarchical policy [57]. The key difference to pure data-driven approaches [10, 57, 39, 8, 58] is that in HITL-TAMP, the TAMP framework directly drives the hierarchy to ensure that the learned skills are modular and compatible. Similarly, our work builds on research in combining learned and predefined skills [17, 18, 19, 20, 21] and formalizes human demonstrations and learned skills within a TAMP framework.

E Tasks

[Figure E.1: Task Segments. We show the human and TAMP segments for each task: Square (sim), Three Piece Assembly (sim), Tool Hang (sim), Coffee (sim), Coffee Full Preparation (sim), Coffee (real), Stack Three (real), and Tool Hang (real).]

In this section, we present extended task descriptions for each task, including a breakdown of which segments the human controls and which TAMP handles (see Fig. E.1).

Stack Three (real). The robot must stack 3 randomly placed cubes. The task consists of 4 total segments — TAMP handles grasping each cube and approaching the stack, and the human handles the placement of the 2 cubes on top of the stack.

Square [59, 1] (sim). The robot must pick a nut and place it onto a peg. The nut is initialized in a small region and the peg never moves. This task consists of two segments — TAMP grasps the nut and approaches the peg, and the human inserts the nut onto the peg.

Square Broad (sim). The nut and peg are initialized anywhere on the table.

Coffee [43] (sim + real). The robot must pick a coffee pod, insert it into a coffee machine, and close the lid. The pod starts at a random location in a small, box-shaped region, and the machine is fixed. The task has two segments — TAMP grasps the pod and approaches the machine, and the human inserts the pod and closes the lid.

Coffee Broad (sim + real). The pod and the coffee machine have significantly larger initialization regions. With 50% probability, the pod is placed on the left of the table and the machine on the right side, or vice-versa. Once a side is chosen for each, the machine location and pod location are further randomized in a significant region.

Three Piece Assembly [60] (sim). The robot must assemble a structure by inserting one piece into a base and then placing a second piece on top of the first. The two pieces are placed around the base, but the base never moves. The task consists of four segments — TAMP grasps each piece and approaches the insertion point, while the human handles each insertion.

Three Piece Assembly Broad (sim). The pieces are placed anywhere in the workspace.

Tool Hang [1] (sim + real). The robot must insert an L-shaped hook into a base piece to assemble a frame, and then hang a wrench off of the frame. The L-shaped hook and wrench vary slightly in pose, and the base piece never moves.
The task has four segments — TAMP handles grasping the L-shaped hook and the wrench, and approaching the insertion/hang points, while the human handles the insertions.

Tool Hang Broad (sim). All three pieces move in larger regions of the workspace.

Coffee Full Preparation [60] (sim). The robot must place a mug onto a coffee machine, retrieve a coffee pod from a drawer, insert the pod into the machine, and close the lid. The task has 8 segments — first TAMP grasps the mug and approaches the placement location, then the human places the mug on the coffee machine (the placement requires precision due to the arm size and space constraints). Next, TAMP approaches the machine lid, and the human opens the lid (requires extended contact with an articulated mechanism). Then, TAMP approaches the drawer handle, and the human opens the drawer. Finally, TAMP grasps the pod from inside the drawer and approaches the machine, and the human inserts the pod and closes the machine lid.

F Additional Data Throughput Comparisons

Task | HITL-TAMP Time (min) | Conventional Time (min)
Square | 13.5 | 35.0
Square Broad | 14.0 | 48.0
Coffee | 22.6 | 46.4
Coffee Broad | 28.8 | 57.8
Tool Hang | 48.0 | 97.1
Tool Hang Broad | 51.5 | 109.8
Three Piece Assembly | 30.0 | 60.0
Three Piece Assembly Broad | 34.9 | 68.3
Coffee Preparation | 78.4 | 132.7
Total | 321.7 | 655.1

Table F.1: Collection time comparison to conventional teleoperation datasets. An extended comparison of data collection time for 200 demos across several tasks for both HITL-TAMP and the conventional teleoperation system. Some items were estimated using the time spent collecting 10 human demonstrations.

In this section, we compare how long it would have taken to collect our 2.1K+ HITL-TAMP demonstrations with a conventional teleoperation system. The results are shown in Table F.1. Several of the numbers were estimated by collecting 10 human demonstrations and multiplying by 20 (due to the time burden of collecting 200 human demonstrations across all tasks with a conventional teleoperation system). In most cases, HITL-TAMP takes more than 2x fewer minutes to collect 200 demos than the conventional system.

G Robustness to Pose Error

Dataset | L0 | L1 | L2
Square (L1) | 100.0±0.0 | 100.0±0.0 | 99.3±0.9
Square (L2) | 100.0±0.0 | 100.0±0.0 | 100.0±0.0
Coffee (L1) | 100.0±0.0 | 100.0±0.0 | 91.3±2.5
Coffee (L2) | 100.0±0.0 | 99.3±0.9 | 98.0±1.6

Table G.1: HITL-TAMP Robustness to Pose Noise. We added uniform pose noise to all object poses perceived by our TAMP system. We use two levels of uniformly sampled noise: L1 is 5 mm of position noise and 5 degrees of rotation noise, and L2 is 10 mm of position noise and 10 degrees of rotation noise. For each level of noise, we collected 200 demonstrations with our HITL-TAMP system, trained image-based agents on these datasets, and evaluated the agents in the L0 (no noise), L1 noise, and L2 noise settings. The agents only perceive camera images and robot proprioception (i.e., not object poses), and the TAMP system receives noisy object poses. The results show that HITL-TAMP agents retain strong performance.

Since TAMP plans to pre-contact poses (constraints learned from human demos), errors in the hand-off location to the human operator are completely tolerable, as the human can account for any differences during their demonstration. Our real-world experiments in Fig. 5 best demonstrate the robustness of our system to pose error. For Coffee, we used an extremely crude box model of the coffee machine without any fine-grained pose registration.
For Tool Hang, the stand is not accurately captured in the observed point cloud due to the thinness of the stand base and column. Consequently, pose registration is naturally noisy. Despite these problems with perception, we were able to achieve high success rates in both tasks with few demonstrations.

In this section, we conduct an additional experiment in simulation to obtain quantitative evidence of HITL-TAMP's robustness to object pose estimation error. We first describe our noise model. We added uniform pose noise to all object poses perceived by our TAMP system. We use two levels of uniformly sampled noise: L1 is 5 mm of position noise and 5 degrees of rotation noise, and L2 is 10 mm of position noise and 10 degrees of rotation noise [61]. For each level of noise, we collected 200 demonstrations with our HITL-TAMP system (to be consistent with Fig. 6) on the Square and Coffee tasks, resulting in 4 new datasets in total. We then trained image-based agents on these datasets, and evaluated the agents in the L0 (no noise), L1 noise, and L2 noise settings. We emphasize that the agents only perceive camera images and robot proprioception, not object poses, and the TAMP system receives noisy object poses.

The results are presented in Table G.1. Each row corresponds to agents trained on one of our new datasets and each column corresponds to a different level of noise applied to the TAMP system during policy evaluation. Recall that we report the success rates across 50 evaluations, where there are no TAMP failures, and 3 seeds (discussed further in Appendix K and Appendix M).

When evaluating the agents trained on the L1 and L2 datasets on the same levels, the results are near-perfect (100% success rate for all except Coffee L2, which gets 98% success), which aligns with the 100% success achieved by our agents on our noise-free datasets (see Fig. 6, left). We also found that training on higher amounts of noise gives our trained agents some level of robustness to lower amounts of noise (e.g., evaluating the L2 models on L0 and L2).

We also analyze the execution failure rate of the TAMP system itself (which corresponds to how often we terminate an episode due to a failed grasp of the nut / coffee pod, or dropping the object in hand). We found that the TAMP system failure rate increases by level: from 0% on L0 to 6% on L1 and 23% on L2 for Square, and from 0% on L0 to 6% on L1 and 24% on L2 for Coffee. This is to be expected, as erroneous poses can lead to a bad grasp. In settings where perception errors cause grasp failures, we could easily have the human teleoperate the grasping part of each trajectory as well during data collection and then have the trained agent learn that task segment too.
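To make the noise model above concrete, the following is a minimal sketch of how uniform pose noise at the two levels could be injected into an object pose estimate before it reaches the TAMP system. The function and constant names are ours for illustration (not from the HITL-TAMP codebase), and the exact sampling scheme (independent per-axis translation noise, random-axis rotation noise) is an assumption.

import numpy as np
from scipy.spatial.transform import Rotation

# Noise levels from Appendix G: (position bound in meters, rotation bound in degrees).
NOISE_LEVELS = {"L0": (0.0, 0.0), "L1": (0.005, 5.0), "L2": (0.010, 10.0)}

def perturb_pose(position, quaternion, level="L1", rng=None):
    """Apply uniform pose noise to one object pose estimate.

    position: (3,) array in meters; quaternion: (4,) xyzw rotation.
    Returns the noisy (position, quaternion) pair as perceived by TAMP.
    """
    rng = rng if rng is not None else np.random.default_rng()
    pos_mag, rot_mag_deg = NOISE_LEVELS[level]
    # Uniform translation noise in each axis, bounded by the level magnitude.
    noisy_position = position + rng.uniform(-pos_mag, pos_mag, size=3)
    # Uniform rotation noise: random axis, uniform angle within the level bound.
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(-rot_mag_deg, rot_mag_deg))
    delta = Rotation.from_rotvec(angle * axis)
    noisy_quaternion = (delta * Rotation.from_quat(quaternion)).as_quat()
    return noisy_position, noisy_quaternion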
H Demonstration Statistics

Task | Human | Trajectory (HT) | Trajectory (C)
Square | 19.8 | 582.2 | 150.8
Square Broad | 24.2 | 647.8 | 167.9
Coffee | 71.6 | 472.0 | 199.3
Coffee Broad | 90.6 | 663.7 | 273.8
Tool Hang | 70.4 | 1297.9 | 479.8
Tool Hang Broad | 71.3 | 1485.8 | 522.6
Three Piece Assembly | 35.3 | 897.9 | 260.1
Three Piece Assembly Broad | 39.6 | 1174.1 | 342.0
Coffee Preparation | 43.8 | 1328.6 | 593.2
Stack Three (real) | 60.9 | 499.2 | -
Coffee (real) | 295.3 | 494.9 | -
Coffee Broad (real) | 326.5 | 548.3 | -
Tool Hang (real) | 124.3 | 1144.5 | -

Table H.1: Demonstration Lengths. For each task, we report the average length (time steps) of the human segment, the average trajectory length of our HITL-TAMP datasets (HT), and as a point of comparison, the average trajectory length of the conventional system data (C). Note that if a trajectory contains multiple human segments, we average them.

In Table H.1, we present the average length (time steps) of the human-provided segment, the average trajectory length of our HITL-TAMP datasets (HT), and as a point of comparison, the average trajectory length of the conventional system data (C). Note that if a trajectory contains multiple human segments, we average across them, and that some of the conventional system lengths are estimates based on collecting 10 trajectories (the same ones used for the analysis in Appendix F). We see that the average human segment is small compared to the entire trajectory length — this might help explain the efficacy of our TAMP-gated policy, since the policy is only responsible for short-horizon, contact-rich behaviors.

I Queueing System Analysis

In Sec. 4 and Fig. 3, we discussed our queueing system, which enables scalable data collection with HITL-TAMP by allowing a single human operator to manage a fleet of $N_{robot}$ robot arms and ensuring that the human operator is always kept busy. In this section, we provide some additional derivations and analysis on how the choice of the number of robot arms influences data throughput.

Assuming that the human has an average queue consumption rate (number of task demonstrations completed per unit time) of $R_H$ and the TAMP system has an average queue production rate (number of task segments executed successfully per unit time) of $R_T$, we would like the effective rate of production to match or exceed the rate of consumption,
$R_T (N_{robot} - 1) \geq R_H.$
Here, the minus 1 is because 1 robot is controlled by the human. Rearranging, we obtain $N_{robot} \geq 1 + \frac{R_H}{R_T}$. Thus, the size of the fleet should be at least one more than the ratio between the human rate of producing demonstration segments and the TAMP rate of solving and executing segments.

This number is often limited by either the amount of system resources (in simulation) or the availability of hardware (in the real world). In practice, human operators also need to take breaks and have an effective "duty cycle" where they are kept busy X% of the time. HITL-TAMP can support this extension as well. Assume that the human is operating the system for $T_{on}$ and resting for $T_{off}$. The human consumes items in the queue during $T_{on}$ at an effective rate of $R_H - R_T (N_{robot} - 1)$, and has the queue filled up during $T_{off}$ at a rate of $R_T (N_{robot} - 1)$. Ensuring that the human consumption rate is less than or equal to the production rate, we have
$T_{on} \left( R_H - R_T (N_{robot} - 1) \right) \leq T_{off} \, R_T (N_{robot} - 1).$
After rearranging we arrive at
$N_{robot} \geq 1 + \frac{R_H}{R_T} \cdot \frac{X}{100},$
where $\frac{X}{100} = \frac{T_{on}}{T_{on} + T_{off}}$ is the human duty cycle ratio.
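As a quick sanity check of the derivation above, the minimum fleet size follows directly from the two rates and the duty cycle. A minimal sketch (variable names are ours, not the paper's):

import math

def min_fleet_size(rate_human, rate_tamp, duty_cycle=1.0):
    """Smallest N_robot satisfying N_robot >= 1 + (R_H / R_T) * (X / 100).

    rate_human: demos the human completes per unit time (R_H).
    rate_tamp:  segments one TAMP-driven robot completes per unit time (R_T).
    duty_cycle: fraction of time the human operates, T_on / (T_on + T_off).
    """
    # The +1 accounts for the robot currently controlled by the human.
    return 1 + math.ceil(rate_human / rate_tamp * duty_cycle)

# Example: the human finishes 6 segments/hour while each TAMP worker finishes 2.
print(min_fleet_size(6, 2))       # 4 robots at a 100% duty cycle
print(min_fleet_size(6, 2, 0.5))  # 3 robots if the human rests half the time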
J Additional Details on TAMP-Gated Teleoperation

We provide additional details on how TAMP-gated teleoperation works. The TAMP system decides when to execute portions of a task, and when a human operator should complete a portion. Each teleoperation episode consists of one or more handoffs where the TAMP system prompts a human operator to control a portion of a task, or where the TAMP system takes control back after it determines that the human has completed their segment.

Algorithm 1 displays the pseudocode of the HITL-TAMP system: TAMP-GATED-CONTROL. It takes as input a goal formula G. On each TAMP iteration, it observes the current state s. If s satisfies the goal, the episode terminates successfully. Otherwise, the TAMP system solves for a plan $\vec{a}$ using PLAN-TAMP from the current state s to the goal G. We implement PLAN-TAMP using the adaptive PDDLStream algorithm [23]. The TAMP system then deploys its controller EXECUTE-JOINT-COMMANDS and issues joint position commands to the robot to carry out planned motions until reaching an action a that requires the human. At this time, control switches into teleoperation mode, where the human has full 6-DoF control of the end effector. We use a smartphone interface and map phone pose displacements to end-effector displacements, similar to prior teleoperation systems [36, 37, 10]. The robot end effector is controlled using an Operational Space Controller [38]. As in [43], we apply phone pose differences as relative pose commands to the current end-effector pose. This allows control to be decoupled from the current configuration of the robot arm, which is important as the TAMP system can prompt the human to take over in diverse configurations. While the human is controlling the robot, the TAMP system monitors whether the state satisfies the planned action's postconditions a.effects. Once satisfied, control switches back to the TAMP system, which replans.

Algorithm 1 TAMP-Gated Teleoperation
1:  procedure TAMP-GATED-CONTROL(G)
2:    while True do
3:      s ← OBSERVE()                       ▷ Estimate or observe state
4:      if s ∈ G then                       ▷ State satisfies goal
5:        return True                       ▷ Success!
6:      $\vec{a}$ ← PLAN-TAMP(s, G)         ▷ Solve for a plan $\vec{a}$
7:      for a ∈ $\vec{a}$ do                ▷ Iterate over actions
8:        if not IS-HUMAN-ACTION(a) then
9:          EXECUTE-JOINT-COMMANDS(a)
10:       else
11:         while OBSERVE() ∉ a.eff do
12:           EXECUTE-TELEOP()              ▷ Teleoperation
13:         break                           ▷ Re-observe and re-plan

J.1 Example Plan

Consider a plan found by the TAMP system for the Tool Hang task on the first planning invocation:
$\vec{a}_1 = [\, \text{move}(q_0, \tau_1, q_1),\ \text{pick}(\text{frame}, g^f, p^f_0, q_1),\ \text{move}(q_1, \tau_2, q_2),\ \text{attach}(\text{frame}, g^f, p_2, q_2, \hat{p}^f_2, \hat{q}_2, \text{stand}),\ \text{move}(\hat{q}_2, \hat{\tau}_3, q_3),\ \text{pick}(\text{tool}, g^t, p^t_0, q_3),\ \text{move}(q_3, \tau_4, q_4),\ \text{attach}(\text{tool}, g^t, p_4, q_4, \hat{p}^t_4, \hat{q}_4, \text{frame}) \,].$
The values in bold represent constants present in the initial state; the non-bold values are parameter values selected by the planner. The learned preimages enable the TAMP system to plan not only a trajectory $\tau_1$ to the first manipulation but also to the second manipulation $\tau_2$. However, because the third trajectory $\hat{\tau}_3$ depends on the resultant configuration $\hat{q}_2$, planning for it is deferred. Upon successfully achieving Attached(frame, stand), replanning produces a new plan.
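Algorithm 1 translates almost line-for-line into code. Below is a minimal Python sketch of the gating loop; `env.observe`, `env.plan_tamp`, and the two executors are hypothetical placeholders standing in for the system's actual perception, PDDLStream planner, and controllers.

def tamp_gated_control(goal, env):
    """TAMP-gated teleoperation loop (a sketch of Algorithm 1)."""
    while True:
        state = env.observe()                       # estimate or observe state
        if goal.is_satisfied(state):
            return True                             # success!
        plan = env.plan_tamp(state, goal)           # solve for a plan with PDDLStream
        for action in plan:                         # iterate over actions
            if not action.is_human_action:
                env.execute_joint_commands(action)  # TAMP tracks the motion plan
            else:
                # Hand control to the human until the action's postconditions hold.
                while not action.postconditions_hold(env.observe()):
                    env.execute_teleop_step()       # human drives the end effector via OSC
                break                               # re-observe and re-plan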
K Policy Training Details

In this section, we detail how we train policies via imitation learning from the human segments of HITL-TAMP datasets. Many choices are mirrored from Mandlekar et al. [1].

[Figure K.1: Training and Testing Policies. The top row shows the HITL-TAMP policy at training time, where a human teleoperates certain segments, such as opening the coffee machine lid. The bottom row shows the HITL-TAMP policy at testing time, where the human segments are replaced with a learned policy trained with behavior cloning using the collected training data.]

K.1 Observation Spaces

In our experiments, policies are trained on either low-dim state observations or image observations — this kind of flexibility is advantageous as it eases the burden of perception for deploying TAMP systems in the real world. Low-dim observations include ground-truth object poses, while image observations consist of RGB images from a front-view camera and a wrist-mounted camera. Both observation types include proprioception (end-effector pose and gripper finger width). In simulation, the image resolution is 84x84, while in real-world tasks, we use a resolution of 120x160 for Stack Three, Coffee, and Coffee Broad, and a resolution of 240x240 for Tool Hang. Our real-world agents are all image-based, since we do not assume that objects can be tracked. The real-world Tool Hang agent did not use the wrist view in observations, since we found that it was completely occluded during the human portions of the task. The TAMP system only estimates poses at the start of each episode. We use a simple perception pipeline consisting of RANSAC plane estimation to segment the table from the point cloud, DBSCAN [62] to cluster objects, color-based statistics to associate objects, and Iterative Closest Point (ICP) to estimate object poses. For image-based agents, we apply pixel shift randomization (up to 10% of each image dimension) as a data augmentation technique (as in Mandlekar et al. [1]).
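The perception pipeline described above can be sketched with off-the-shelf components. This is a rough illustration using Open3D and scikit-learn under our own parameter choices (plane tolerance, cluster radius, ICP distance), not the authors' implementation; the color-based association step that matches clusters to object models is assumed to happen between the two functions.

import numpy as np
import open3d as o3d
from sklearn.cluster import DBSCAN

def segment_and_cluster(scene_pcd):
    """RANSAC table removal followed by DBSCAN object clustering."""
    # 1) Fit the dominant plane (the table) and drop its inliers.
    _, inliers = scene_pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3, num_iterations=1000)
    objects = scene_pcd.select_by_index(inliers, invert=True)
    # 2) Cluster the remaining points into object candidates.
    labels = DBSCAN(eps=0.02, min_samples=20).fit_predict(
        np.asarray(objects.points))
    clusters = [objects.select_by_index(np.where(labels == k)[0])
                for k in range(labels.max() + 1)]
    return clusters  # color-based association to known objects happens next

def register_model(model_pcd, cluster_pcd, init=np.eye(4)):
    """3) ICP refines a coarse object model's pose against its matched cluster."""
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, cluster_pcd, max_correspondence_distance=0.02, init=init)
    return result.transformation  # 4x4 object pose used to populate the TAMP state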
K.2 Action Space

As described in Sec. 3.3, we collect training data using teleoperation through 6-DoF end-effector control, where an Operational Space Controller [38] interprets delta end-effector actions and converts them to joint commands. Thus, the action space for all policy learning is also 6-DoF end-effector poses.

K.3 Training and Evaluation

We use BC-RNN with default hyperparameters from Mandlekar et al. [1], with the exception of an increased learning rate of $10^{-3}$ for policies trained on low-dim observations, to train policies from the human segments in each dataset. We follow the policy evaluation convention from Mandlekar et al. [1], and report the maximum Success Rate (SR) across all checkpoint evaluations over 3 seeds, which is evaluated over 50 rollouts. However, the TAMP system can fail during a rollout. To decouple TAMP failures from policy failures, we keep conducting rollouts for each checkpoint until 50 rollouts with no TAMP failures have been collected, and compute the policy success rate over those rollouts (discussion in Appendix M). In the real world, we take the final policy checkpoint from training and use it for evaluation. Fig. K.1 visualizes the difference between the HITL-TAMP policy at training time, where teleoperation is used, and at testing time, where teleoperation is substituted with a learned policy for fully autonomous control.

L Low-Dim Policy Training Results

Task | Time (min) | SR (low-dim) | TAMP-gated SR (low-dim)
Square (C) | 25.0 | 84.0±0.0 | 91.3±5.2
Square (HT) | 13.5 | 100.0±0.0 | 100.0±0.0
Square Broad (C) | 48.0 | 29.3±0.0 | 88.0±1.6
Square Broad (HT) | 14.0 | 100.0±0.0 | 100.0±0.0
Three Piece Assembly (C) | 60.0 | 55.3±0.0 | 96.0±2.8
Three Piece Assembly (HT) | 30.0 | 100.0±0.0 | 100.0±0.0
Tool Hang (C) | 80.0 | 29.3±0.0 | 60.0±19.6
Tool Hang (HT) | 48.0 | 80.7±1.9 | 80.7±1.9

Table L.1: Comparison to conventional teleoperation datasets (low-dim). We trained normal and TAMP-gated policies using conventional teleoperation (C) and compared them to HITL-TAMP (HT). TAMP-gating makes policies trained on the data comparable to HITL-TAMP data, but data collection still involves significantly higher operator time.

In Table 6 and Sec. 6.2, we only presented results with image policies. In this section, we show that HITL-TAMP still compares favorably to conventional teleoperation data when trained on low-dim observations. The results are presented in Table L.1.

M TAMP Success Analysis

Task | Time (min) | SR (low-dim) | SR (image) | TAMP SR (low-dim) | Raw SR (low-dim) | TAMP SR (image) | Raw SR (image)
Square | 13.5 | 100.0±0.0 | 100.0±0.0 | 77.7±1.5 | 77.7±1.5 | 82.0±1.9 | 82.0±1.9
Square Broad | 14.0 | 100.0±0.0 | 100.0±0.0 | 81.2±2.7 | 81.2±2.7 | 76.1±5.1 | 76.1±5.1
Coffee | 22.6 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0 | 100.0±0.0
Coffee Broad | 28.8 | 99.3±0.9 | 96.7±0.9 | 98.1±1.6 | 97.4±0.9 | 97.4±0.9 | 94.2±0.1
Tool Hang | 48.0 | 80.7±1.9 | 78.7±0.9 | 97.4±1.8 | 78.6±2.9 | 97.4±1.8 | 76.6±1.2
Tool Hang Broad | 51.5 | 49.3±1.9 | 40.7±0.9 | 88.8±1.9 | 43.8±0.8 | 93.8±0.8 | 38.1±1.1
Three Piece Assembly | 30.0 | 100.0±0.0 | 100.0±0.0 | 96.2±1.5 | 96.2±1.5 | 95.0±2.3 | 95.0±2.3
Three Piece Assembly Broad | 34.9 | 84.7±4.1 | 82.0±1.6 | 71.4±0.0 | 60.5±2.9 | 76.0±4.0 | 62.3±4.3
Coffee Preparation | 78.4 | 96.0±3.3 | 100.0±0.0 | 80.9±4.8 | 77.6±4.4 | 83.8±1.8 | 83.8±1.8

Table M.1: Analyzing TAMP Success Rates during Policy Evaluations. A more complete set of results from Table 6 on HITL-TAMP datasets, demonstrating that policy evaluations are not significantly biased by only evaluating in regions where TAMP is successful. All TAMP success rates are high (above 70%) and most are above 88%.

Recall that when evaluating a trained policy, to decouple TAMP failures from policy failures, we keep conducting rollouts for each checkpoint until 50 rollouts with no TAMP failures have been collected, and compute the policy success rate over those rollouts. In certain cases, this procedure could lead to biased evaluations — for example, if TAMP is only successful for an object in a limited region of the robot workspace. In this section, we present the TAMP success rates and raw success rates (including TAMP failures) for the policies in Table 6 (left), and demonstrate that it is unlikely that such bias exists in our evaluations. We present the results in Table M.1 — note that the Time and SR columns are reproduced from Table 6 (right) for ease of comparison. We see that all TAMP success rates are high (above 70%) and most are above 88%.

N Additional Details on Conventional Teleoperation System

In this section, we provide additional details on the conventional teleoperation system that we compared against in this work (e.g., in Table 1 and Fig. 6), as well as explain why it is a representative baseline. Prior works in imitation learning leveraged robot teleoperation systems to allow for full 6-DoF control of a robot manipulator. These systems typically map the state of a teleoperation device, such as a Virtual Reality controller [6], a 3D mouse [63], a smartphone [36, 37], or a point-and-click web interface [9], to a desired robot end-effector pose. They also use an end-effector controller to try to achieve the desired pose specified by the teleoperation device. The operator controls the robot arm in real time by using the teleoperation device.

This teleoperation paradigm has been used extensively in prior work that collects and learns from human demonstrations [1, 6, 2, 7, 42, 20, 9, 36, 37, 43, 10, 15, 16, 11]. In this work, we compared against the RoboTurk [36, 37] system and smartphone interface, which has been used in several prior imitation learning works [10, 39, 1, 64, 65, 66]. It was also used to collect datasets for the robomimic benchmark [1], whose results we also compare against (see Sec. 6.2). This makes it an appropriate baseline.
However, it is important to note that our HITL-TAMP system is not specific to a particular teleoperation interface — in fact, our system is also compatible with a 3D mouse interface [63].

O Additional User Study Details

In this section, we provide additional details on how the user study was conducted. We recruited 15 participants with varying levels of experience in robot teleoperation: 4 participants were unfamiliar, 6 were somewhat familiar, and 5 were very familiar with it. The purpose of the study was to compare our system (HITL-TAMP) to a conventional teleoperation system [36], where task demonstrations were collected without TAMP involvement. Participants underwent a brief tutorial (5-10 minutes) to familiarize themselves with the smartphone teleoperation interface and to practice collecting task demonstrations using both systems.

Each participant performed task demonstrations on 3 tasks (Coffee, Square (Broad), and Three Piece Assembly (Broad)) for 10 minutes on each system, totaling 60 minutes of data collection across the 3 tasks and 2 systems. To reduce bias, the order of systems was randomized for each task and user (while maintaining the task order). Participants filled out a post-study survey to rank their experience with both systems. Each participant's number of successful demonstrations was recorded to evaluate the data throughput of each system, and agents were trained on each participant's demonstrations and across all participants' demonstrations (Sec. 6.1). See Appendix K for full details on policy training.

All demonstrations were collected on a single workstation with an NVIDIA GeForce RTX 3090 GPU. We used 6 robot processes ($N_{robot} = 6$) to ensure that human operators were always kept busy (see Sec. 4).
Q9ezhChqnL

Towards Scalable Coverage-Based Testing of Autonomous Vehicles

James Tu (1,2), Simon Suo (1,2), Chris Zhang (1,2), Kelvin Wong (1,2), Raquel Urtasun (1,2)
(1) Waabi  (2) University of Toronto
{jtu, czhang, kwong, urtasun}@waabi.ai, suo@cs.toronto.edu

Abstract: To deploy autonomous vehicles (AVs) in the real world, developers must understand the conditions in which the system can operate safely. To do this in a scalable manner, AVs are often tested in simulation on parameterized scenarios. In this context, it's important to build a testing framework that partitions the scenario parameter space into safe, unsafe, and unknown regions [1]. Existing approaches rely on discretizing continuous parameter spaces into bins, which scales poorly to high-dimensional spaces and cannot describe regions with arbitrary shape. In this work, we introduce a problem formulation which avoids discretization — by modeling the probability of meeting safety requirements everywhere, the parameter space can be partitioned using a probability threshold. Based on our formulation, we propose GUARD as a testing framework which leverages Gaussian Processes to model probability and levelset algorithms to efficiently generate tests. Moreover, we introduce a set of novel evaluation metrics for coverage-based testing frameworks to capture the key objectives of testing. In our evaluation suite of diverse high-dimensional scenarios, GUARD significantly outperforms existing approaches. By proposing an efficient, accurate, and scalable testing framework, our work is a step towards safely deploying autonomous vehicles at scale.

Keywords: Testing, Coverage, Self-Driving

1 Introduction

Autonomous vehicles (AVs) will soon become a staple in ground transportation — interacting with billions of people every day. At this scale, AVs can drastically reduce accidents, relieve traffic congestion, and provide mobility for those who cannot drive. In order to realize this future, developers must first ask the question: "Is the AV safe enough to be deployed in the real world?" To understand how safe the AV is, it's important to identify in which scenarios the AV is safe or unsafe with respect to requirements defined by safety experts [1, 2]. Towards this goal, it's important to build a testing framework which covers the wide range of scenarios in the AV's operational domain and classifies them as safe, unsafe, or unknown.

To cover the wide range of real-world scenarios in a scalable manner, the AV industry often builds testing frameworks in simulation [3, 4, 5] where the traffic environment is fully controllable and long-tail events can be synthesized. A popular strategy to describe real-world events in simulation involves designing parameterized scenarios [6]. These scenarios describe the semantics of the environment (e.g., a truck merging from an on-ramp) and their parameters specify low-level characteristics (e.g., velocity of traffic participants). Each parameter configuration then corresponds to a concrete test which can be executed in simulation to determine if the AV complies with functional safety requirements [1]. This is typically captured as a binary pass or fail determined by regulatory demand [7]. For example, an AV could fail if it violates a safety distance threshold.

However, directly covering all variations in a scenario's parameter space can be infeasible. Continuous parameters imply infinitely many variations, and testing frameworks can only execute a finite number of concrete tests, which is limited by computation budget.
As a result, to cover the parameter space, the testing framework must leverage observed test results to estimate if the AV will pass or fail on unseen test configurations. Furthermore, the testing framework must also efficiently choose concrete tests to execute to maximize coverage and accurately estimate AV performance.

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

[Figure 1: Left: Parameterized scenario testing. Middle: Probabilistic model estimates pass/fail outcomes in parameter space. Right: Testing process — sample and execute tests, then update model.]

Existing approaches rely heavily on discretization to reduce the infinite continuous parameter space into discrete bins, making the assumption that tests in the same bin yield the same result. While effective for simple 2-dimensional scenarios [8, 9], they scale poorly to high-dimensional scenarios due to the exponentially growing number of bins. Alternatively, another common approach involves generating adversarial examples [10, 11, 12] to discover failures. While revealing failure cases is useful, these methods forgo covering the parameter space and cannot provide an understanding of AV performance across the parameter space.

In this work, we formulate the testing process as sequencing a finite number of tests to estimate the probability of passing or failing across the entire parameter space, thereby avoiding discretization altogether. To start, we can leverage observed test results to model the probability of passing or failing across the entire parameter space. Intuitively, if the AV easily passed a particular test, it can pass similar tests with high probability. Moreover, for an unseen test with little information from similar tests, the pass/fail outcome should be ambiguous. By modeling the probability everywhere, it follows that a probability threshold can partition the parameter space into 1) pass regions, 2) failure regions, and 3) uncertain regions, which is the output prescribed by standard safety guidelines [1, 2].

Based on our formulation, we propose our testing framework GUARD. Specifically, we leverage Gaussian Processes (GPs) to model the probability of passing and failing. To achieve high sample efficiency, we adopt levelset algorithms which adaptively generate tests that balance exploration (of high-uncertainty regions) and exploitation (by sampling near the pass/fail boundary). Moreover, GUARD leverages online GP kernel learning, allowing it to scale to different parameterized scenarios with minimal hyperparameter tuning.

To evaluate testing frameworks, we propose a suite of novel evaluation metrics which capture the key objectives of testing: achieving high coverage, evaluating safety of the AV, and exposing failure cases for autonomy development. Under these metrics, GUARD is able to outperform existing testing frameworks across a diverse set of complex and high-dimensional driving scenarios. Moreover, we find that by avoiding discretization, our approach scales much more efficiently as the dimensionality of test scenarios grows.
Finally, GUARD can also benchmark different iterations of autonomy models while identifying specific examples of scenarios where regression occurs.

2 Related Work

2.1 Coverage-Based Testing

AVs are often tested in simulation using parameterized scenarios [6, 13, 14] where the AV receives a pass or fail based on safety requirements [7]. Each parameter configuration corresponds to a concrete test, and testing frameworks execute concrete tests to cover the parameter space. Since continuous scenario parameters imply an infinite number of scenario variations, existing works discretize the parameter space into bins — assuming all pass/fail outcomes in a bin are identical. This discretization introduces error in understanding where the AV passes and fails, which grows with the bin size.

Exhaustively testing all parameter bins [14, 15] is a widely adopted approach in industry. However, this approach suffers from coarse discretization when testing high-dimensional scenarios. To increase sample efficiency, t-way testing [16] samples points such that for every subset of t parameters, every combination is covered. This affords finer discretization but leads to inaccuracies due to assuming that tests which share a subset of t parameters yield the same result. Similarly, levelset estimation [17, 18, 19] has also been applied to increase sample efficiency in discretized coverage-based testing [8]. On the other hand, [9] leverages adaptive discretization resolutions.

2.2 Adversarial Example Generation

Adversarial example generation is a widely studied approach to generating difficult examples [20, 21, 22, 23, 24, 25], which has been applied to image classification [26], natural language processing [27], and self-driving [10]. In parameterized scenario testing, this is done by optimizing parameters via black-box optimization algorithms such as Bayesian Optimization [10, 28, 29, 30, 31] or Reinforcement Learning [32, 33] to generate challenging safety-critical scenarios. These methods optimize for the most adversarial scenarios and do not cover the parameter space. On the other hand, [34] showed that BO with the GP-LCB acquisition function can validate that an AV passes all scenario variations if the LCB is positive everywhere and a failure isn't discovered. This achieves coverage, but only in the case where the AV is perfect. Acquisition functions for adversarial example generation can also be used to sample tests in our framework. However, they oversample severe failures, whereas oversampling on the pass/fail boundary is more efficient for coverage-based testing.

3 Scalable Coverage-Based Testing

In this section, we introduce our formulation which leverages probabilistic models to partition the parameter space into pass, fail, and unknown regions. Following our formulation, we introduce our testing framework GUARD, which leverages Gaussian Processes (GPs), levelset estimation, and learnable GP kernels.

Algorithm 1: Algorithm for GUARD
Input: pass/fail threshold τ, confidence threshold γ, budget N, initial budget M, kernel params ψ_0, update freq K
G ← GP(ψ_0)                                          // Initialize GP
for t in 1, ..., M do                                // Initial exploration batch
    θ_t ← argmax_θ σ(θ)
    G ← UpdateGPSamples(G, θ_t, f(θ_t))
ψ_M ← argmax_ψ P(f(θ_1) ... f(θ_t) | θ_1 ... θ_t, ψ)  // Fit kernel on initial batch
G ← UpdateGPKernel(G, ψ_M)
for t in M+1, ..., N do                              // Levelset sampling
    θ_t ← argmax_θ β σ(θ) − |μ(θ) − τ|                // Explore or close to boundary
    G ← UpdateGPSamples(G, θ_t, f(θ_t))
    if t ≡ 0 mod K then                              // Fit kernel every K steps
        ψ_t ← argmax_ψ P(f(θ_1) ... f(θ_t) | θ_1 ... θ_t, ψ)
        G ← UpdateGPKernel(G, ψ_t)
P̂ ← {θ ∈ Θ | Φ((μ(θ) − τ)/σ(θ)) ≥ γ}                // Pass region
F̂ ← {θ ∈ Θ | Φ((τ − μ(θ))/σ(θ)) ≥ γ}                // Fail region
U ← Θ \ P̂ \ F̂                                       // Unknown region
return P̂, F̂, U
An overview can be found in Figure 1 and the algorithm in Algorithm 1.

Algorithm 1: Algorithm for GUARD
Input: safety threshold γ, confidence threshold δ, budget N, initial budget M, initial kernel parameters ψ₀, kernel update frequency K
  G ← GP(ψ₀)                                          // Initialize GP
  for t in 1, ..., M do                                // Initial exploration batch
      θ_t ← argmax_θ σ(θ)
      G ← UpdateGPSamples(G, θ_t, f(θ_t))
  ψ_M ← argmax_ψ P(f(θ₁) ... f(θ_M) | θ₁ ... θ_M, ψ)   // Fit kernel on initial batch
  G ← UpdateGPKernel(G, ψ_M)
  for t in M+1, ..., N do                              // Levelset sampling
      θ_t ← argmax_θ βσ(θ) − |μ(θ) − γ|                // Explore or stay close to boundary
      G ← UpdateGPSamples(G, θ_t, f(θ_t))
      if t ≡ 0 (mod K) then                            // Fit kernel every K steps
          ψ_t ← argmax_ψ P(f(θ₁) ... f(θ_t) | θ₁ ... θ_t, ψ)
          G ← UpdateGPKernel(G, ψ_t)
  P̂ ← {θ ∈ Θ | Φ((μ(θ) − γ)/σ(θ)) ≥ δ}                // Pass region
  F̂ ← {θ ∈ Θ | Φ((γ − μ(θ))/σ(θ)) ≥ δ}                // Fail region
  Û ← Θ \ P̂ \ F̂                                       // Unknown region
  return P̂, F̂, Û

3.1 Problem Formulation

The AV software stack is often tested on parameterized scenarios commonly known as logical scenarios [13, 6], which are the standard in the self-driving industry. Each logical scenario features a set of d configurable parameters θ ∈ ℝᵈ, where each specific configuration of the parameters results in a concrete test. The scenario parameters are bounded [13, 15], i.e., θᵢ ∈ [aᵢ, bᵢ], making the entire parameter search space Θ a closed set in ℝᵈ. We assume that a simulation test outputs a scalar measure of safety (e.g., minimum distance to another agent). Mathematically, let f : Θ × A → ℝ be the test function which takes as input test parameters θ and autonomy system A, and outputs a real-valued scalar f(θ; A). We omit A and write f(θ) for brevity. In compliance with regulatory demand [7], a binary pass or fail y = 1[f(θ) ≥ γ] is computed using a threshold γ. It follows that we can denote regions in the parameter space where the system passes and fails as

P = {θ ∈ Θ : f(θ) ≥ γ},   (1)
F = {θ ∈ Θ : f(θ) < γ}.   (2)

Figure 2: GUARD significantly outperforms other testing frameworks. GUARD also offers a sweepable confidence threshold to trade off between coverage and the other metrics.

Then, the testing process can be thought of as sequencing a finite set of tests {θ₁, ..., θ_N}, observing their outcomes {f(θ₁), ..., f(θ_N)}, and estimating P̂ ≈ P, F̂ ≈ F. Estimating pass/fail across a continuous parameter domain using a finite set of test points inevitably leads to estimation errors, especially if the number of concrete tests is limited (e.g., due to testing resource constraints). At the same time, estimation errors, and false positive passes in particular, can be detrimental to safety. Thus, we design an uncertainty-aware formulation where the framework quantifies the confidence of its estimation. This allows the framework to say it is uncertain instead of predicting a pass/fail outcome with insufficient information. Specifically, we compute the probability that the system will pass or fail at any point in the parameter space. To achieve this, we model f(θ) with a random variable f̂(θ). Under this random variable, the estimated pass and fail regions can be defined as

P̂ = {θ ∈ Θ : P(f̂(θ) ≥ γ) ≥ δ},
F̂ = {θ ∈ Θ : P(f̂(θ) < γ) ≥ δ},   (3)

with respect to a confidence threshold δ. It follows that the unknown region can be obtained as Û = Θ \ (P̂ ∪ F̂), where more information is required to make a prediction. Note that the sizes of P̂ and F̂ increase monotonically as δ decreases. The configurable confidence threshold allows the testing framework to control the tradeoff between the quality and quantity of predictions.
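To make the partition in Equation 3 concrete, here is a minimal sketch, assuming a generic probabilistic model has already produced a pass probability for each sampled configuration; the function and variable names are illustrative, not part of GUARD's implementation.

```python
import numpy as np

def partition(thetas, p_pass, delta):
    """Partition sampled parameters into estimated pass / fail / unknown
    regions following Equation 3.

    thetas: (N, d) array of parameter configurations sampled from the space.
    p_pass: (N,) array with P(f(theta) >= gamma) for each configuration,
            produced by any probabilistic model (e.g. the GP of Section 3.2).
    delta:  confidence threshold in (0.5, 1].
    """
    pass_mask = p_pass >= delta              # P-hat: confidently passing
    fail_mask = (1.0 - p_pass) >= delta      # F-hat: confidently failing
    unknown_mask = ~(pass_mask | fail_mask)  # U-hat: needs more tests
    return thetas[pass_mask], thetas[fail_mask], thetas[unknown_mask]

# Toy usage: random points with random pass probabilities.
rng = np.random.default_rng(0)
thetas = rng.uniform(0.0, 1.0, size=(1000, 5))
p_pass = rng.uniform(0.0, 1.0, size=1000)
P_hat, F_hat, U_hat = partition(thetas, p_pass, delta=0.7)
```

Lowering `delta` grows both confident regions at the expense of prediction quality, which is exactly the coverage/accuracy trade-off discussed above.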
3.2 Gaussian Process

As outlined in Equation 3, our formulation requires a probabilistic estimate f̂(θ) for the ground-truth test function f(θ). We adopt GPs as they perform well in low-data regimes, making them suitable in our setting as the number of concrete tests is limited. Additionally, GPs are non-parametric and make fewer assumptions about the function being modeled, which is important in allowing our testing framework to scale to a wide variety of logical scenarios.

A GP is a collection of random variables in which every subset is assumed to be jointly distributed according to a multivariate Gaussian N(μ, Σ). Let X = [θ₁ ... θ_N] ∈ ℝ^{N×d} be the inputs of N test samples and Y = [f(θ₁) ... f(θ_N)] be the corresponding scalar outputs of f. To estimate test outcomes, we model the distribution over the real-valued output of the metric, P(f̂(θ)). Specifically, the value at an unseen datapoint θ* is estimated by the conditional posterior distribution

P(f̂(θ*) | θ*, X, Y) ∼ N(μ(θ*), σ²(θ*)),   (4)
μ(θ*) = k(θ*, X)ᵀ (K + σₙ² I)⁻¹ Y,   (5)
σ²(θ*) = k(θ*, θ*) − k(θ*, X)ᵀ (K + σₙ² I)⁻¹ k(θ*, X).   (6)

Here k(·, ·) is the kernel function which provides a similarity measure between two GP variables, k(θ*, X) = [k(θ*, θ₁), ..., k(θ*, θ_N)], K_ij = k(θᵢ, θⱼ), and σₙ models noise in the observed values Y. Under the posterior distribution of f̂(θ), the probability of observing a value above or below a certain threshold can be computed. Specifically, the probability of observing a value greater than the pass/fail threshold γ is

P(f̂(θ) ≥ γ) = Φ((μ(θ) − γ) / σ(θ)),   (7)

where Φ is the CDF of the standard normal distribution. The probability of observing a value under γ can be computed in a similar fashion. An illustration of this can be seen in Figure 1.

Method | Coverage(%) | Bal. Acc(%) | Pos Acc(%) | Neg Acc(%) | Err. Recall(%) | FPR(%)
Grid | 87.7±0.4 | 62.3±0.1 | 97.4±0.03 | 27.2±0.2 | 25.0±0.3 | 18.9±0.2
2-way [16] | 92.2±0.2 | 60.3±0.6 | 96.0±0.1 | 24.6±1.2 | 24.3±1.2 | 18.2±0.4
3-way [16] | 92.4±1.0 | 59.7±1.7 | 96.2±0.3 | 23.2±3.7 | 22.5±3.6 | 20.9±1.3
HiddenGems [8] | 72.8±0.6 | 66.2±2.2 | 98.3±0.1 | 33.7±4.4 | 14.8±2.4 | 16.0±0.4
Conformal Inference [9] | 89.6±0.4 | 61.8±0.8 | 97.0±0.7 | 26.7±1.4 | 16.6±1.5 | 23.8±0.9
GUARD | 94.3±0.9 | 84.2±5.3 | 99.2±0.2 | 70.3±10.6 | 58.9±5.3 | 5.89±3.1
Table 1: Benchmarking GUARD against baselines.

3.3 Test Generation

Note that precisely modeling f̂ when the output is far from γ adds little value, since it will not affect the pass or fail outcome. In contrast, it is important to accurately model regions near the γ-levelset. Hence, we leverage levelset algorithms to efficiently upsample points near the boundary. Specifically, we adopt Straddle [35] as it directly integrates GPs, whereas alternative GP-levelset algorithms operate on finite input domains [17, 8, 18]. Straddle iteratively queries test points based on an exploration incentive promoting points with high variance and an exploitation incentive promoting points close to the γ-levelset. Specifically, these two incentives are captured by the acquisition function

h_t(θ) = β σ_t(θ) − |μ_t(θ) − γ|,   (8)

where β is a weighting coefficient balancing the first (exploration) term and the second (exploitation) term. At each iteration, we solve a lightweight maximization problem to find the next query point θ_t = argmax_θ h_t(θ). We start by sampling P initial random points from Θ. Then, since h_t(θ) is differentiable with respect to θ, the top Q candidates are improved via gradient-based optimization. This optimization is inexpensive since the cost of running the GP model and performing backpropagation to evaluate μ(θ) and σ(θ) is negligible compared to running simulation to evaluate f(θ). Finally, the concrete test corresponding to θ_t is executed in simulation to obtain f(θ_t). The observation (θ_t, f(θ_t)) is then added to the GP model to update the GP posterior.
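The following sketch illustrates the posterior of Equations 5-6, the pass probability of Equation 7, and the Straddle acquisition of Equation 8. It is a simplified stand-in, not the paper's implementation: it uses a single RBF kernel instead of the product of per-dimension Matern32 kernels, omits the gradient-based refinement of the top candidates, and all hyperparameter values are placeholders.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Stand-in kernel; GUARD uses a product of per-dimension Matern32s."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def gp_posterior(theta_star, X, Y, noise=1e-3):
    """Posterior mean/std of f at query points (Equations 5 and 6)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    k_star = rbf_kernel(X, theta_star)            # k(theta*, X), shape (N, M)
    alpha = np.linalg.solve(K, Y)
    mu = k_star.T @ alpha                         # Equation 5
    v = np.linalg.solve(K, k_star)
    var = rbf_kernel(theta_star, theta_star).diagonal() - (k_star * v).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))    # Equation 6

def prob_pass(mu, sigma, gamma):
    """P(f(theta) >= gamma) under the posterior (Equation 7)."""
    return norm.cdf((mu - gamma) / sigma)

def straddle(mu, sigma, gamma, beta=0.5):
    """Straddle acquisition (Equation 8): high variance is rewarded,
    distance from the gamma-levelset is penalized."""
    return beta * sigma - np.abs(mu - gamma)

# Pick the next test among random candidates.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 5))                 # fake observed parameters
Y = rng.normal(1.0, 0.5, 20)                   # fake safety-metric outcomes
candidates = rng.uniform(0, 1, (600, 5))
mu, sigma = gp_posterior(candidates, X, Y)
next_test = candidates[np.argmax(straddle(mu, sigma, gamma=1.0))]
```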
3.4 Automatic Kernel Tuning

In addition to observed test points, the posterior distribution outlined in Equations 5 and 6 also depends heavily on the kernel function k(·, ·). Thus, the estimates P̂ and F̂ are sensitive to the kernel parameters, which we denote as ψ. At the same time, tuning kernel parameters can be tedious and time-consuming [8]. Moreover, in our task, each individual scenario has different parameters with varying scales and effects on the final output, requiring different kernels to accurately model different scenarios. This makes manually tuning kernels infeasible for large-scale testing.

To circumvent this issue, we first normalize the scenario parameter space to the unit hypercube [0, 1]ᵈ to address parameters with different scales. Furthermore, instead of manually tuned and fixed kernel parameters, we optimize the kernel parameters to maximize the marginal likelihood of the observations. In particular, at iteration t, we update the kernel parameters towards

ψ_t = argmax_ψ P(f(θ₁) ... f(θ_t) | θ₁ ... θ_t, ψ).   (9)

The marginal likelihood is differentiable with respect to the kernel parameters ψ_t, and we perform gradient-based optimization every K iterations. The kernel learning process can be prone to overfitting to a small initial batch of observations. Therefore, we perform an initial sampling step where we query M tests using only the variance term σ_t(θ) of the acquisition function outlined in Equation 8.

4 Experiments

We first introduce the test scenarios, evaluation metrics, and relevant baselines. Then, we demonstrate the effectiveness of GUARD and ablate our design choices. Finally, we show how GUARD can be used in practice to benchmark autonomy systems and catch regressions. Additional experiments are included in Appendix A.

4.1 Experimental Setup

Simulation Test Scenarios: We use a suite of 10 logical scenarios designed by expert testing engineers and based on guidelines [36] prescribed by the National Highway Traffic Safety Administration (NHTSA). The scenarios are executed in a high-fidelity closed-loop simulator and use a combination of scripted and reactive actors. The scripted actors execute parameterized maneuvers to stress test the AV, e.g., cut-ins. The reactive actors act as realistic background traffic and are controlled using the Intelligent Driver Model [37]. Each scenario has d = 5 parameters, and descriptions of the scenarios are provided in Appendix B.

Figure 3: 2D slice of the 5D parameter space. GUARD performs better by avoiding discretization.

We evaluate the PLT motion planner [38] using minimum distance (m) to collision as the safety metric, choosing the safety threshold as γ = 1.0. We use N = 1000 test samples for experiments unless otherwise specified. In the following experiments, we repeat 5 trials for each scenario to collect the mean and standard deviation of metrics. We report the average mean and standard deviation across the 10 scenarios. To support large-scale experimentation, we construct an offline dataset by evaluating each scenario space over a fine grid to create an approximation via linear interpolation.

Metrics: To evaluate performance, we propose several metrics which capture important aspects of testing for safety and autonomy development.
• Coverage is the percentage of the search space that is covered by the estimates P̂ and F̂:
coverage = |P̂ ∪ F̂| / |Θ|.   (10)
• Balanced Accuracy evaluates how accurate the estimates P̂ and F̂ are, and also addresses class imbalance due to the fact that the scenario space is dominated by passes:
balanced acc = ½ ( |P̂ ∩ P| / |P ∩ (P̂ ∪ F̂)| + |F̂ ∩ F| / |F ∩ (P̂ ∪ F̂)| ),   (11)
where the first term is the positive accuracy and the second the negative accuracy.
• Error Recall measures the percentage of the failures the testing framework can recall, which is very useful for autonomy development. This is computed as:
error recall = |F̂ ∩ F| / |F|.   (12)
• False Positive Rate is the percentage of predicted passes that are incorrect, which is crucial to safety as incorrectly predicting the system to be safe can lead to accidents. This is computed as:
false positive rate = |P̂ ∩ F| / |P̂|.   (13)
Note that some of these quantities, such as |P̂|, cannot be computed exactly. Therefore, we randomly sample 20,000 points in the scenario parameter space to estimate the terms in the above equations.
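A minimal sketch of this Monte Carlo evaluation is shown below, assuming access to a ground-truth oracle (in the paper, the interpolated offline dataset) and a prediction function; all interfaces here are hypothetical.

```python
import numpy as np

def evaluate_predictions(sample_fn, predict_fn, true_fn, gamma, n=20_000):
    """Monte Carlo estimates of the testing metrics (Equations 10-13).

    sample_fn(n)      -> (n, d) random parameters from the scenario space
    predict_fn(theta) -> label per point: 'pass', 'fail', or 'unknown'
    true_fn(theta)    -> ground-truth safety value f(theta) per point
    """
    thetas = sample_fn(n)
    pred = predict_fn(thetas)
    true_pass = true_fn(thetas) >= gamma

    in_P, in_F = pred == "pass", pred == "fail"
    covered = in_P | in_F

    coverage = covered.mean()                                   # Eq. 10
    pos_acc = (in_P & true_pass).sum() / max(1, (true_pass & covered).sum())
    neg_acc = (in_F & ~true_pass).sum() / max(1, (~true_pass & covered).sum())
    balanced_acc = 0.5 * (pos_acc + neg_acc)                    # Eq. 11
    error_recall = (in_F & ~true_pass).sum() / max(1, (~true_pass).sum())  # Eq. 12
    fpr = (in_P & ~true_pass).sum() / max(1, in_P.sum())        # Eq. 13
    return dict(coverage=coverage, balanced_acc=balanced_acc,
                error_recall=error_recall, false_positive_rate=fpr)
```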
Baselines: We compare GUARD with several baselines outlined in Section 2. These include traditional testing methodologies [16, 14, 15] and works proposed in the literature [8, 9].
• Grid Search [15, 14] discretizes continuous parameters and uses one discrete test point to represent each bin. Given a budget of N test samples, which cannot cover all bins, we sample a subset of the bins at random. In our scenarios with d = 5 dimensions, we use a discretization resolution of 4.
• T-way testing [16] leverages the assumption that most faults are caused by interactions of at most t parameters. This method draws test samples such that for every t-parameter subset, all possible combinations of parameter values can be found in at least one test. In our experiments, we consider t = 2, 3 and choose a discretization resolution of 30 for t = 2 and 12 for t = 3.
• HiddenGems [8] employs levelset estimation algorithms to more efficiently infer pass/fail outcomes of the bins. The authors use a 33×33 discretization for a 2-D parameter space. However, this resolution becomes infeasibly expensive with d = 5 parameters. We adopt a resolution of 6 and further analyze the exponentially increasing runtime in Appendix A.
• Conformal Inference [9] uses conformal inference on samples in each bin to determine the outcome. Bin sizes are not fixed, since this method chooses how the parameter space is partitioned.

Implementation Details: The initial exploration batch has size M = 200. Our GP kernel is a product composition of separate Matern32 kernels [39] for each parameter dimension. The kernel lengthscale is initialized to 0.02 and the variance is initialized to the variance of the initial exploration batch. The kernel is fitted every 50 samples using Adam [40] with learning rate 0.01 for 100 steps. Our acquisition function uses β = 0.5. During acquisition maximization, we sample P = 600 candidates and improve the top Q = 30 candidates using Adam with learning rate 0.01 for 30 steps.
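The product composition of per-dimension Matern32 kernels can be sketched as follows; the lengthscale value mirrors the initialization described above, while the function names and the NumPy implementation are illustrative, not the paper's code.

```python
import numpy as np

def matern32_1d(a, b, lengthscale):
    """Matern-3/2 kernel on a single (normalized) parameter dimension."""
    r = np.abs(a[:, None] - b[None, :]) / lengthscale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def product_matern32(A, B, lengthscales, variance):
    """Product composition over dimensions, so each scenario parameter
    gets its own learnable lengthscale."""
    K = np.full((A.shape[0], B.shape[0]), variance)
    for d, ell in enumerate(lengthscales):
        K *= matern32_1d(A[:, d], B[:, d], ell)
    return K

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (4, 5))
B = rng.uniform(0, 1, (6, 5))
K = product_matern32(A, B, lengthscales=np.full(5, 0.02), variance=1.0)
```

Because each dimension contributes its own factor, a parameter with little effect on the safety metric can be assigned a long lengthscale without washing out the others, which is what makes this composition suited to heterogeneous scenario parameters.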
4.2 Experiment Results

Testing Performance: We demonstrate the effectiveness of GUARD in Figure 2, where we plot test coverage versus balanced accuracy, error recall, and false positive rate (FPR). First, note that we can adjust the trade-off between coverage and the other metrics in GUARD by adjusting the confidence threshold δ as outlined in Equation 3. This is useful in controlling how conservative we are in testing. Along the trade-off curve, we can observe how reliable the predictions are at different probability thresholds. Compared to other methods, GUARD performs better across all metrics. We provide numerical results in Table 1, where we set δ = 0.7 for direct comparison.

To illustrate how GUARD achieves better metrics, we visualize the outputs in Figure 3, using the technique proposed in [41] to visualize a 2D slice of the 5D parameter search space. Specifically, we sample one boundary point and multiple slices on the boundary point, choosing the slice with the highest variance. On each slice we visualize the ground-truth and predicted pass/fail regions. Compared to other methods, GUARD is more accurate as it is not limited by discretization.

Figure 4: Scaling with scenario dimensionality.
Figure 5: Scaling with testing budget.

Scalability: We evaluate GUARD against baselines on lower-dimensional functions to analyze scalability with the number of scenario parameters. The 10 test scenarios have 5 − c parameters fixed to reduce the dimension to c. We increase the discretization resolution of baselines at lower dimensions, choosing the resolutions to maintain a similar number of bins as in the 5-D scenarios. Figure 4 shows that baselines perform well for 1-D and 2-D functions where they can afford a high resolution. As dimensionality increases, the baselines degrade significantly compared to GUARD.

Sample Efficiency: In Figure 5, we evaluate the models using a varying number of test samples N = [200, 400, 600, 800, 1000] and show that GUARD outperforms the alternative methods with fewer test samples as well. With an increasing testing budget, GUARD gradually increases in accuracy and coverage. Other approaches which rely on discretization grow in coverage; however, their accuracy is heavily limited by the discretization resolution and improves slowly or not at all.

Ablations: We ablate some design choices of GUARD in Table 2. First, without the initial exploration phase, there is a noticeable drop across the metrics. Next, we consider removing online kernel learning, parameter normalization, and both. Removing either leads to drops in performance, while removing both leads to degenerate solutions which predict pass everywhere. This is expected, as without normalization or learnable lengthscales, a fixed lengthscale cannot model parameters with different scales (e.g., velocity ∈ [20, 30] m/s, road curvature ∈ [−0.002, 0.002]). Finally, we ablate the choice of the GP kernel. As described in Section 4.1, GUARD uses a product of Matern32 kernels on each parameter because each parameter affects the function output differently. Using a single kernel which does not consider each parameter individually degrades performance, as expected.

Exp | Norm | Learn | Prod | Coverage(%) | Bal. Acc(%) | Pos. Acc(%) | Neg. Acc(%) | Err. Recall(%) | FPR(%)
X | X | X | X | 94.3±0.9 | 84.2±5.3 | 99.2±0.2 | 70.3±10.6 | 58.9±5.3 | 5.9±3.1
— | X | X | X | 92.2±1.9 | 82.4±4.9 | 99.4±0.2 | 65.4±10.0 | 52.1±8.8 | 7.5±3.3
X | — | X | X | 93.2±1.2 | 83.7±2.3 | 99.0±0.2 | 68.3±4.6 | 56.3±4.4 | 5.8±1.6
X | X | — | X | 92.3±0.8 | 84.1±1.9 | 98.7±0.2 | 69.7±3.8 | 48.6±3.9 | 6.2±1.0
X | — | — | X | 0.04±0.03 | – | – | – | – | –
X | X | X | — | 93.3±1.9 | 81.4±4.1 | 99.0±0.2 | 63.8±8.3 | 51.6±8.1 | 7.3±1.9
Table 2: Ablating design choices of GUARD.

4.3 Using GUARD In Practice

Different AV systems may have different failure modes and find different scenarios challenging. Since GUARD is agnostic to the system under test, it can evaluate any AV system to discover where it passes and fails across the parameter space. This is useful during development to identify regressions between two iterations of the AV. To demonstrate this, we use PLT as a reference planner and introduce a variation adjusted to make the planner more aggressive.

Planner | Pass Rate | Fail Rate
Ref | 69.8% | 18.5%
Agg | 66.9% | 24.0%
Table 3: Comparing planners.

First, we evaluate both planners on our test scenarios to measure the aggregate pass and fail rates. As shown in Table 3, GUARD correctly identifies the aggressive planner as less safe. Furthermore, we can triage the regressions by visualizing the pass/fail landscape, which we show in Figure 6 below.

Figure 6: Identifying regression regions - reference planner passes, aggressive planner fails.
Figure 7: Regression example. Top: reference planner. Bottom: aggressive planner collides.

To aid autonomy development, we can sample any point from the regression region to yield a concrete test. We show one example of such a test in Figure 7.
In this test, the ego truck attempts to change into a neighboring lane while actors from an on-ramp are merging. The aggressive planner does not lane change safely, which ultimately leads to a collision. Revealing all of these regressions in an automatic and scalable way is invaluable to autonomy development.

5 Limitations

The scope of testing is limited since we assume the pass/fail result is thresholded on a single measure of safety, whereas it can be a combination of multiple measures. We also did not consider discrete scenario parameters, which can be categorical (e.g., vehicle type) or numerical (e.g., number of lanes). Future work can incorporate these considerations to build a more complete testing framework. In addition, GPs rely on the assumption that the landscape of the safety measure is smooth enough to be modeled by the GP kernel. Investigating alternative probabilistic models such as neural GPs [42] and probabilistic SVMs [43] is another exciting direction. Finally, the impact of our work, and of simulation testing in general, is highly dependent on the fidelity of the simulator. Reducing the sim2real gap remains an open problem [44, 45, 46, 47].

6 Conclusion

This paper tackles coverage-based testing of AVs in parameterized scenarios. We formulate the problem as sequencing a finite set of tests to estimate the probability of passing or failing across the entire parameter space. Based on this formulation, we propose a testing framework GUARD that efficiently samples tests to cover the parameter space and accurately evaluate the AV's performance. This framework can be used in practice, with functional safety experts defining a comprehensive set of safety requirements and a parameterized operational design domain (ODD). GUARD can automatically generate concrete tests and validate that the AV meets these requirements across the ODD. Our work contributes to streamlined autonomy development and safety validation, and is ultimately a step towards safely deploying autonomous vehicles.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable feedback and suggestions to improve the paper. We would also like to thank Paul Spriesterbach and Andre Strobel for their input on how our work relates to functional safety guidelines. Finally, we would also like to thank Wenyuan Zeng and Sean Segal for their contributions in brainstorming and suggestions on improving the manuscript.

References

[1] ISO 21448:2022 Road vehicles - Safety of the intended functionality. https://www.iso.org/standard/77490.html, 2022.
[2] ISO 26262-1:2018 Road vehicles - Functional safety. https://www.iso.org/standard/68383.html, 2018.
[3] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. CARLA: An open urban driving simulator. In Conference on Robot Learning, pages 1–16. PMLR, 2017.
[4] P. Kaur, S. Taghavi, Z. Tian, and W. Shi. A survey on simulators for testing self-driving cars. In 2021 Fourth International Conference on Connected and Autonomous Driving (MetroCAD), pages 62–70. IEEE, 2021.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[6] H. Weber, J. Bock, J. Klimke, C. Roesener, J. Hiller, R. Krajewski, A. Zlocki, and L. Eckstein. A framework for definition of logical scenarios for safety assurance of automated driving. Traffic Injury Prevention, 20(sup1):S65–S70, 2019.
[7] (GRVA) Proposal for the 01 series of amendments to UN Regulation No.
157 (Automated Lane Keeping Systems).
[8] A. Petrov, C. Fang, K. M. Pham, Y. H. Eng, J. G. M. Fu, and S. D. Pendleton. HiddenGems: Efficient safety boundary detection with active learning. arXiv preprint arXiv:2210.13956, 2022.
[9] C. Fan, X. Qin, Y. Xia, A. Zutshi, and J. Deshmukh. Statistical verification of autonomous systems using surrogate models and conformal inference. arXiv preprint arXiv:2004.00279, 2020.
[10] J. Wang, A. Pun, J. Tu, S. Manivasagam, A. Sadat, S. Casas, M. Ren, and R. Urtasun. AdvSim: Generating safety-critical scenarios for self-driving vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9909–9918, 2021.
[11] A. Nonnengart, M. Klusch, and C. Müller. CriSGen: Constraint-based generation of critical scenarios for autonomous vehicles. In International Symposium on Formal Methods, pages 233–248. Springer, 2019.
[12] M. O'Kelly, A. Sinha, H. Namkoong, R. Tedrake, and J. C. Duchi. Scalable end-to-end autonomous vehicle testing via rare-event simulation. Advances in Neural Information Processing Systems, 31, 2018.
[13] T. Menzel, G. Bagschik, and M. Maurer. Scenarios for development, test and validation of automated vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1821–1827. IEEE, 2018.
[14] Coverage driven verification for ensuring AV and ADAS safety. https://www.foretellix.com/resources_post/coverage-driven-verification-for-ensuring-av-and-adas-safety-gtc-2020/. Accessed: 2022-07-31.
[15] T. Zhao, E. Yurtsever, J. Paulson, and G. Rizzoni. Formal certification methods for automated vehicle safety assessment. IEEE Transactions on Intelligent Vehicles, 2022.
[16] D. R. Kuhn, R. N. Kacker, Y. Lei, et al. Practical combinatorial testing. NIST Special Publication, 800(142):142, 2010.
[17] A. Gotovos. Active learning for level set estimation. Master's thesis, Eidgenössische Technische Hochschule Zürich, Department of Computer Science, 2013.
[18] A. Zanette, J. Zhang, and M. J. Kochenderfer. Robust super-level set estimation using Gaussian processes. In Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10–14, 2018, Proceedings, Part II 18, pages 276–291. Springer, 2019.
[19] N. Paragios and R. Deriche. Geodesic active regions and level set methods for motion estimation and tracking. Computer Vision and Image Understanding, 97(3):259–282, 2005.
[20] D. Rempe, J. Philion, L. J. Guibas, S. Fidler, and O. Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17305–17315, 2022.
[21] C. Yan, W. Xu, and J. Liu. Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicle. Def Con, 24(8):109, 2016.
[22] J. Tu, M. Ren, S. Manivasagam, M. Liang, B. Yang, R. Du, F. Cheng, and R. Urtasun. Physically realizable adversarial examples for lidar object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13716–13725, 2020.
[23] A. Boloor, K. Garimella, X. He, C. Gill, Y. Vorobeychik, and X. Zhang. Attacking vision-based perception in end-to-end autonomous driving models. Journal of Systems Architecture, 110:101766, 2020.
[24] R. Duan, X. Ma, Y. Wang, J. Bailey, A. K. Qin, and Y. Yang. Adversarial camouflage: Hiding physical-world attacks with natural styles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1000–1008, 2020.
[25] J. Tu, H. Li, X.
Yan, M. Ren, Y. Chen, M. Liang, E. Bitar, E. Yumer, and R. Urtasun. Exploring adversarial robustness of multi-sensor perception systems in self driving. arXiv preprint arXiv:2101.06784, 2021.
[26] S. N. Shukla, A. K. Sahu, D. Willmott, and Z. Kolter. Simple and efficient hard label black-box adversarial attacks in low query budget regimes. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1461–1469, 2021.
[27] D. Lee, S. Moon, J. Lee, and H. O. Song. Query-efficient and scalable black-box adversarial attacks on discrete sequential data via Bayesian optimization. In International Conference on Machine Learning, pages 12478–12497. PMLR, 2022.
[28] B. Gangopadhyay, S. Khastgir, S. Dey, P. Dasgupta, G. Montana, and P. Jennings. Identification of test cases for automated driving systems using Bayesian optimization. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pages 1961–1967. IEEE, 2019.
[29] Y. Abeysirigoonawardena, F. Shkurti, and G. Dudek. Generating adversarial driving scenarios in high-fidelity simulators. In 2019 International Conference on Robotics and Automation (ICRA), pages 8271–8277. IEEE, 2019.
[30] S. Silvetti, A. Policriti, and L. Bortolussi. An active learning approach to the falsification of black box cyber-physical systems. In Integrated Formal Methods: 13th International Conference, IFM 2017, Turin, Italy, September 20-22, 2017, Proceedings 13, pages 3–17. Springer, 2017.
[31] T. Dreossi, A. Donzé, and S. A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. Journal of Automated Reasoning, 63:1031–1053, 2019.
[32] R. Lee, O. J. Mengshoel, A. Saksena, R. W. Gardner, D. Genin, J. Silbermann, M. Owen, and M. J. Kochenderfer. Adaptive stress testing: Finding likely failure events with reinforcement learning. Journal of Artificial Intelligence Research, 69:1165–1201, 2020.
[33] M. Koren, S. Alsaif, R. Lee, and M. J. Kochenderfer. Adaptive stress testing for autonomous vehicles. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1–7. IEEE, 2018.
[34] S. Ghosh, F. Berkenkamp, G. Ranade, S. Qadeer, and A. Kapoor. Verifying controllers against adversarial examples with Bayesian optimization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7306–7313. IEEE, 2018.
[35] B. Bryan, R. C. Nichol, C. R. Genovese, J. Schneider, C. J. Miller, and L. Wasserman. Active learning for identifying function threshold boundaries. Advances in Neural Information Processing Systems, 18, 2005.
[36] E. Thorn, S. C. Kimmel, M. Chaka, B. A. Hamilton, et al. A framework for automated driving system testable cases and scenarios. Technical report, United States Department of Transportation, National Highway Traffic Safety Administration, 2018.
[37] A. Kesting, M. Treiber, and D. Helbing. Enhanced intelligent driver model to access the impact of driving strategies on traffic capacity. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368(1928):4585–4605, 2010.
[38] A. Sadat, M. Ren, A. Pokrovsky, Y.-C. Lin, E. Yumer, and R. Urtasun. Jointly learnable behavior and trajectory planning for self-driving vehicles. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3949–3956. IEEE, 2019.
[39] D. Duvenaud. Automatic model construction with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[40] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization.
arXiv preprint arXiv:1412.6980, 2014.
[41] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein. Visualizing the loss landscape of neural nets. Advances in Neural Information Processing Systems, 31, 2018.
[42] J. Lee, Y. Bahri, R. Novak, S. S. Schoenholz, J. Pennington, and J. Sohl-Dickstein. Deep neural networks as Gaussian processes. arXiv preprint arXiv:1711.00165, 2017.
[43] V. Franc, A. Zien, and B. Schölkopf. Support vector machines as probabilistic models. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 665–672, 2011.
[44] J. Wang, S. Manivasagam, Y. Chen, Z. Yang, I. A. Bârsan, A. J. Yang, W.-C. Ma, and R. Urtasun. CADSim: Robust and scalable in-the-wild 3D reconstruction for controllable sensor simulation. In 6th Annual Conference on Robot Learning, 2022.
[45] Z. Yang, Y. Chen, J. Wang, S. Manivasagam, W.-C. Ma, A. J. Yang, and R. Urtasun. UniSim: A neural closed-loop sensor simulator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1389–1399, 2023.
[46] W. Zhao, J. P. Queralta, L. Qingqing, and T. Westerlund. Towards closing the sim-to-real gap in collaborative multi-robot deep reinforcement learning. In 2020 5th International Conference on Robotics and Automation Engineering (ICRAE), pages 7–12. IEEE, 2020.
[47] S. Suo, K. Wong, J. Xu, J. Tu, A. Cui, S. Casas, and R. Urtasun. MixSim: A hierarchical framework for mixed reality traffic simulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9622–9631, 2023.

A Additional Experiments

Ablation on GP Kernels: While GUARD requires minimal tuning of GP kernel parameters due to online kernel learning, the type of kernel is still a design choice. In GUARD we employ Matern32 kernels to model each parameter and take the product of individual kernels to obtain the final kernel. In Table 4 we conduct an ablation that also considers 1) RBF kernels, 2) using sum instead of product, and 3) no composition, using one RBF or Matern32 kernel for all parameters. We first find that without modeling each parameter separately, the results are poor. Also, using a sum operation to aggregate kernels is ineffective as well. The best results are obtained from a product of RBF or Matern32 kernels. We choose the product of Matern32 kernels, as the lower false positive rate is critical for ensuring safety. Furthermore, this choice also exhibits much lower variance in the metrics, making it more reliable for testing.

BO Acquisition Functions: As mentioned in Section 2 of the main manuscript, many adversarial example generation methods use Bayesian Optimization (BO) and employ acquisition functions that minimize some adversarial objective. Since BO also leverages GPs and active learning similar to GUARD, we can adopt these acquisition functions. In particular, we use lower confidence bound (LCB), expected improvement (EI), and probability of improvement (PI). For EI and PI we adjust them to decrease the objective (expected decrease and probability of decrease). The results in Table 5 show that these alternatives can achieve strong results but are not as effective as using the Straddle acquisition function. This is because cost minimization oversamples very negative points while Straddle oversamples boundary points, and the latter is more useful for partitioning pass/fail regions. The performance is however very close, and we hypothesize this is because the parameter space is dominated by positives (passes).
With very small negative regions, severe negatives are close to boundary points.

Figure 8: Runtime (s) versus discretization resolution for GUARD and HiddenGems.

Runtime Comparison: A major implementation detail regarding our baselines is the discretization resolution. For grid search and t-way testing, there is a direct tradeoff between coverage and balanced accuracy, error recall, and false positive rate. Thus, we select a resolution which can cover most of the parameter space. For HiddenGems, the resolution does not affect coverage since it uses a levelset estimation algorithm to determine which bins are covered. The limitation on our choice of a resolution of 6 is due to how the computation budget increases exponentially with resolution. This is because the method requires running the GP on all discretized bins at every iteration. And while running the GP a single time is negligible compared to running simulation, the exponential scaling makes the GP queries a bottleneck at higher resolutions. As we show in Figure 8, at this resolution the runtime roughly equals GUARD's.

Kernel Type | Composition | Coverage(%) | Bal. Acc(%) | Pos Acc(%) | Neg Acc(%) | Err. Recall(%) | FPR(%)
RBF | - | 96.1±0.8 | 80.2±5.7 | 98.1±0.5 | 62.4±11.2 | 56.0±10.6 | 10.6±3.3
Matern32 | - | 87.3±1.5 | 78.5±4.2 | 99.3±0.2 | 57.7±8.4 | 36.8±7.7 | 6.15±1.1
RBF | Sum | 88.0±6.7 | 58.6±3.2 | 89.1±11.4 | 28.2±11.5 | 21.4±12.0 | 20.2±5.7
Matern32 | Sum | 90.4±3.3 | 58.4±3.3 | 88.0±10.7 | 28.7±13.7 | 23.6±12.7 | 21.0±3.1
RBF | Product | 96.8±0.7 | 83.3±7.0 | 97.9±0.6 | 68.7±13.9 | 62.6±13.1 | 7.83±4.0
Matern32 | Product | 94.3±0.9 | 84.2±5.3 | 99.2±0.2 | 70.3±10.6 | 58.9±5.3 | 5.89±3.1
Table 4: Ablation of different kernel choices for GUARD.

Acquisition Fn. | Coverage(%) | Bal. Acc(%) | Pos Acc(%) | Neg Acc(%) | Err. Recall(%) | FPR(%)
PI | 90.7±1.6 | 82.2±3.9 | 99.4±0.1 | 64.9±7.8 | 49.0±7.5 | 8.23±2.4
EI | 91.8±1.3 | 82.6±2.3 | 99.4±0.2 | 65.7±0.5 | 50.7±5.2 | 6.99±1.4
LCB | 91.5±1.5 | 83.4±1.8 | 99.6±0.1 | 67.3±3.6 | 52.4±4.7 | 5.99±1.4
Straddle | 94.3±0.9 | 84.2±5.3 | 99.2±0.2 | 70.3±10.6 | 58.9±5.3 | 5.89±3.1
Table 5: Comparison of cost minimization acquisition functions versus the Straddle acquisition function.

B Test Scenarios

We describe the 10 logical scenarios used in our experiments here:
1. Actor cut-in: An actor starts in an adjacent lane to the SDV and performs a cut-in maneuver.
2. Shoulder actor cut-in: The SDV drives next to the road shoulder. An actor on the shoulder cuts in front of the SDV.
3. Actor overtake cut-in: An actor starts behind the SDV, then rapidly accelerates to overtake the SDV and cut in front of it.
4. Actors merging: The SDV is driving on a merge lane and there are multiple actors merging in.
5. Lead actor braking: An actor starts in front of the SDV and then brakes.
6. Lane change: The SDV attempts to lane change when there is an actor in the target lane.
7. Lane change beside merge: The SDV attempts to lane change into a merge lane and actors are also merging into that lane.
8. Lane merge: The SDV is merging from an on-ramp, and there is an actor in the merge lane that is very close to the SDV.
9. Lane merge multiple actors: The SDV is merging from an on-ramp and there are multiple actors in the merge lane.
10. Lane merge parallel on-ramp: The SDV is merging from a parallel on-ramp and there is an actor in the merge lane that is very close to the SDV. |
9bK38pUBzU | Language-Conditioned Path Planning
Amber Xie¹, Youngwoon Lee¹, Pieter Abbeel¹, Stephen James²
¹University of California, Berkeley  ²Dyson Robot Learning Lab
https://amberxie88.github.io/lapp/

Abstract: Contact is at the core of robotic manipulation. At times, it is desired (e.g., manipulation and grasping), and at times, it is harmful (e.g., when avoiding obstacles). However, traditional path planning algorithms focus solely on collision-free paths, limiting their applicability in contact-rich tasks. To address this limitation, we propose the domain of Language-Conditioned Path Planning, where contact-awareness is incorporated into the path planning problem. As a first step in this domain, we propose Language-Conditioned Collision Functions (LACO), a novel approach that learns a collision function using only a single-view image, language prompt, and robot configuration. LACO predicts collisions between the robot and the environment, enabling flexible, conditional path planning without the need for manual object annotations, point cloud data, or ground-truth object meshes. In both simulation and the real world, we demonstrate that LACO can facilitate complex, nuanced path plans that allow for interaction with objects that are safe to collide with, rather than prohibiting any collision.

Keywords: Robotic Manipulation, Path Planning, Collision Avoidance, Learned Collision Function

1 Introduction

Figure 1: Language-conditioned path planning enables finding a path, e.g., toward a cup, with acceptable collisions, e.g., a plush toy (right), whereas typical path planning fails at finding a collision-free path (left).

Collision checking is a fundamental aspect of path planning in robotics [1, 2, 3, 4, 5, 6, 7], aiming to find a path between initial and target robot configurations that avoids collisions with the environment. However, traditional collision-free path planning approaches fall short in scenarios where contact with the environment is necessary, such as when manipulating objects or interacting with the surroundings. In such cases, the strict “collision-free” constraint becomes impractical and inhibits the robot’s ability to perform tasks effectively.

Traditional approaches for enabling contact in path planning [8, 9] often require manual adjustments, such as disabling collision checking for specific objects. However, these approaches rely on access to object state information, ground-truth object meshes, or extensive engineering efforts for each execution. This poses significant challenges, particularly in vision-based contact-rich robotic manipulation tasks.

To overcome this limitation, we propose the domain of Language-Conditioned Path Planning (LAPP), which integrates contact-awareness into the path planning problem. In this domain, path planning is not solely concerned with avoiding collisions but also incorporates the ability to make informed decisions about contact with the environment.
This enables robots to perform complex manipulation tasks that involve controlled interactions, such as holding a cup or opening a door. Figure 1 provides an illustration of a typical scenario where a robot encounters multiple obstacles and needs to interact with the environment.

To facilitate flexible and adaptive contact-aware path planning, we propose Language-Conditioned Collision Functions (LACO) as an initial step in the language-conditioned path planning domain. LACO learns to predict a collision based on a single-view image, language prompt, and robot configuration. By predicting collisions between the robot and the environment modulated by the language prompt, LACO enables the generation of path plans that can handle both desired and controlled collisions without requiring manual object annotations, point cloud data, or ground-truth object meshes. This approach empowers robots to interact with objects that are safe to collide with, rather than rigidly avoiding all collisions.

In summary, our main contributions are threefold:
• We propose a novel domain of Language-Conditioned Path Planning (LAPP) that integrates semantic language commands to enhance the robot's understanding of how and what to interact with in the environment. By fusing language instructions, we enable more intelligent and context-aware planning.
• To enable language-conditioned path planning, we introduce Language-Conditioned Collision Functions (LACO), a collision function that incorporates language prompts to modulate collision predictions. LACO utilizes only a single-view camera, eliminating the need for object states or point clouds. This allows for easier application in real-world scenarios and facilitates zero-shot generalization to new language commands.
• We provide comprehensive demonstrations of LACO's effectiveness in various path planning tasks. Our experiments include simulations and real-world scenarios, highlighting the practicality and robustness of our language-conditioned collision function.

2 Related Work

Path planning [1, 2, 3, 4, 5, 6, 7] finds a collision-free path between initial and target task configurations (representing robot states and potentially environment states) by querying a collision function with states along a path. However, these collision-free path planning methods struggle to handle scenarios where collision with the environment is desired. In this paper, we propose language-conditioned path planning, which finds a contact-aware path following a language prompt.

Semantic planning [10, 11, 12] builds a semantic map with a specifically designed perception pipeline consisting of, e.g., object detection, segmentation, and keypoint estimation modules. The semantic map is then used to find a collision-free path for navigation. In contrast, we propose an end-to-end language-conditioned collision function, which enables contact-aware path planning.

The recent success of large language models enables training multi-task robotic policies guided by a language instruction [13, 14, 15] and high-level planning using a large language model [16, 17]. These language-conditioned “task” planning / solving approaches differ from our proposed language-conditioned “path” planning domain in that they directly solve a specific set of tasks, while our language-conditioned path planning is agnostic to downstream tasks and can be used as a building block for many robotics tasks.

A collision function is a fundamental component of path planning in robotics.
However, the collision function is assumed to be manually modeled or computed using state estimation. Instead of hand-engineering a collision function, many recent works have learned end-to-end motion planners [18, 19, 20] or collision functions [21, 22, 23] from synthetically generated data in simulation. To understand the 3D configuration of environments, these approaches use point clouds to represent the scenes. In this paper, we propose to learn a collision function from only a single camera input, requiring neither depth sensing nor precise camera calibration, which makes our method easily applicable to many real-world settings. More importantly, our learned collision function conditions on language describing which objects to collide or not to collide with, allowing acceptable or desired collisions for path planning.

Figure 2: A language-conditioned collision function (LACO), C(o, s, l), predicts whether a robot in a state s collides with objects other than the collidable objects described in a language prompt l in a scene o, e.g., C(o, s, 'pringles') = 1. To find a language-conditioned path plan, a path planning algorithm asks LACO whether any waypoint sᵢ of a path collides with objects except for the ones described in l.

3 Language-Conditioned Path Planning

In a cluttered real-world environment, a collision-free path can be highly sub-optimal or impossible to find. We introduce the problem domain of language-conditioned path planning, which extends traditional path planning to allow safe or desired collisions in a path via language, in Section 3.1. Furthermore, we present a proof-of-concept framework for language-conditioned path planning that first learns a language-conditioned collision function (Section 3.2) and leverages this learned collision function for language-conditioned path planning (Section 3.3).

3.1 Language-Conditioned Path Planning (LAPP)

In robotics, a path planning problem is about finding a connected, collision-free path of robot configurations (i.e., waypoints), p = (s₀, s₁, ..., s_g), starting from an initial configuration s₀ to a final configuration s_g such that every configuration in the path has no collision with the environment: ∀ s_t ∈ p, C(o_t, s_t) = 0. Here, o_t and s_t denote the environment and robot configurations at time t, respectively, and C(o, s) denotes a collision function that outputs 1 if the robot configuration s has a collision with the environment o, and 0 otherwise.

In this paper, we propose a Language-Conditioned Path Planning (LAPP) problem, which relaxes the strict collision-free constraint in path planning so that path planning can manage safe or desired contacts with the world, especially guided by language. LAPP can be formulated as finding a path p between two configurations, s₀ and s_g, such that ∀ s_t ∈ p, C(o_t, s_t, l) = 0, where l is a language prompt modulating what needs to be considered as acceptable or desired collisions. For example, a language prompt l can be “a robot can collide with plush toys” to specify safe-to-collide objects, as illustrated in Figure 1, or “a robot can grasp a mug” to support contact-rich tasks.
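As a minimal illustration of this definition, a candidate path is feasible under LAPP exactly when every waypoint is predicted collision-free given the prompt. The sketch below assumes a hypothetical `collision_fn` returning a collision probability, thresholded at 0.5 as described later in Section 3.3.

```python
from typing import Callable, Sequence
import numpy as np

def is_valid_lapp_path(
    path: Sequence[np.ndarray],
    observation: np.ndarray,
    prompt: str,
    collision_fn: Callable[[np.ndarray, np.ndarray, str], float],
    threshold: float = 0.5,
) -> bool:
    """A path is valid iff C(o, s_t, l) = 0 for every waypoint s_t,
    i.e. no waypoint collides with anything except the objects the
    prompt marks as safe to touch."""
    return all(
        collision_fn(observation, s_t, prompt) < threshold for s_t in path
    )
```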
3.2 Language-Conditioned Collision Function (LACO)

For language-conditioned path planning, a path planning algorithm should understand which collisions are acceptable and which must be avoided, as described in language. In this paper, we address this problem by adapting a collision function C(o, s) into a Language-Conditioned Collision Function (LACO) C(o, s, l), which takes a language prompt l into account. Specifically, LACO learns a collision function C(o, s, l), where o is a single-view image observation of the environment, s is a queried robot joint state, and l is a language instruction corresponding to objects that allow for collisions. Note that o does not need to correspond to s and represents only the environment configuration. Thus, the same o can be used across multiple robot configurations s.

Figure 3: We use ShapeNet objects [27] for our simulated experiments. (a) The training dataset of ShapeNet classes includes: airplane, chair, pot, vessel, laptop, bus, cap, and bottle. (b) The held-out evaluation dataset includes: basket, mug, train, bag, and can. (c) We also perform real-world experiments with YCB objects [28]: Spam, Cheez-It, Pringles, Windex, mustard, and bleach.

We train C(o, s, l) on a dataset D = {(o, s, l, y_l, y)}, where y indicates whether the robot state s has any collision in the scene o, and y_l indicates whether there is an undesirable contact under the language instruction l. We optimize the cross-entropy losses for language-modulated collision prediction (target y_l) and collision prediction without language conditioning (target y) as an auxiliary task:

L = E_{(o, s, l, y_l, y) ∼ D} [ CE(y_l, C(o, s, l)) + CE(y, C_aux(o, s, l)) ],   (1)

where C_aux(o, s, l) is an additional MLP head attached to the last layer of C(o, s, l).

To take advantage of large vision-language models pretrained on a large corpus [24, 25], LACO uses the vision and language encoders of CLIP [26] as the backbone networks. As illustrated in Figure 2, we tokenize an input image o of size 256×256 with the frozen pretrained CLIP ViT encoder and a language prompt l with the frozen pretrained CLIP language model. We then get 197 visual tokens z_{v,0:196} and 197 language tokens z_{l,0:196}. For a robot state s, we use a 3-layer MLP to embed it into a single state token z_s. All these tokens are then fed into a 2-layer transformer, and the average of the transformer output tokens is used to predict the collision probabilities with two separate MLP heads, C(o, s, l) and C_aux(o, s, l). More hyperparameters are described in Appendix, Table 7.
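A minimal sketch of the training objective in Equation 1 is shown below, assuming a model that returns the logits of the two MLP heads and a batch with the corresponding label keys; the interface is hypothetical, and binary cross-entropy on logits stands in for the paper's cross-entropy losses.

```python
import torch
import torch.nn.functional as F

def laco_loss(model, batch):
    """Equation 1: cross-entropy on the language-conditioned label y_l,
    plus an auxiliary cross-entropy on the unconditional collision
    label y. `model` is assumed to return the two collision logits
    from its separate MLP heads."""
    logits_lang, logits_aux = model(
        batch["image"], batch["state"], batch["language"]
    )
    loss_lang = F.binary_cross_entropy_with_logits(
        logits_lang, batch["y_lang"].float()
    )
    loss_aux = F.binary_cross_entropy_with_logits(
        logits_aux, batch["y"].float()
    )
    return loss_lang + loss_aux
```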
3.3 Path Planning using LACO

Finally, language-conditioned path planning can be performed by simply replacing the collision checker in any path planning algorithm with LACO. In this paper, we implement LAPP using an optimization-based method, LAPP-TrajOpt [7]. Whenever LACO needs a binary output (collision or not), we apply a threshold of 0.5 to the collision probability output of LACO. LACO is flexible to various styles of path planning algorithms, such as sampling-based planning. We use a custom implementation of TrajOpt [7], and the hyperparameters for TrajOpt can be found in Appendix, Table 8.

4 Experiments

In this paper, we introduce language-conditioned path planning (LAPP) and present a framework that combines existing path planning algorithms and our proposed language-conditioned collision function (LACO) as an initial step. Our evaluations are twofold: (1) we present a thorough investigation of LACO's performance in object- and language-level generalization, and (2) we showcase the potential of LACO in the language-conditioned path planning domain.

4.1 Environment Setups

We use the UFACTORY xArm7, a low-cost 7-DOF robotic arm, and an Intel RealSense D435 camera. For the real world, we use 6 YCB objects [28]: Spam, Cheez-It, Pringles, Windex, mustard, and bleach. For simulation, we use ShapeNet v2 objects in CoppeliaSim [27, 29], which provides a taxonomy of diverse, realistic 3D meshes with their labels, as shown in Figure 3.

4.2 Data Collection

Figure 4: The real-world (left) and simulated (right) environments.

To train LACO, we first collect data in both the simulation and real-world environments.

Simulation dataset. We use PyRep [30], based on CoppeliaSim [31], to synthetically generate a diverse, language-annotated dataset in simulation. Each scene includes 2-5 randomly chosen objects in random poses on the table. Instead of randomly initializing a robot pose, a set of robot poses is generated by the built-in RRT* motion planner [6] for each scene. These smooth trajectories bias the dataset toward joint states likely to be queried by a path planner. We use the built-in collision checker for the ground-truth collision label y. Next, we sample combinations of 0, ..., N−1 objects in a scene with N objects to generate language annotations (a list of ShapeNet names of the sampled objects) and compute language-conditioned collision labels y_l. We generated 5000 unique scenes, which consist of unique combinations and positions of objects. Each scene contains about 40 joint states and 10 language annotations.

Real-world dataset. Learning a collision function from real-world data poses additional challenges: the increased visual complexity and the difficulty of collecting collision data. To address these issues, we train LACO on a dataset collected from a domain-randomized twin simulator environment (Figure 4) and then finetune on a small real-world dataset. We collect data from 20 real-world scenes and 500 domain-randomized simulation scenes. For each scene, we extract 20-30 images with domain randomization in simulation and camera perturbations in the real world. Similar to our simulation dataset, we vary the number of objects in each scene from 3 to 5 and vary the positions of objects.
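The language-conditioned label can be derived from per-object collision flags, as in the sketch below: y records any contact at all, while y_l records only contacts with objects not named in the prompt. This is a hedged reading of the data generation described above; the function and its interface are illustrative.

```python
from typing import Dict, Set

def make_labels(contacts: Dict[str, bool], allowed: Set[str]) -> Dict[str, int]:
    """Derive the two training labels for one (scene, state, language)
    tuple. `contacts` maps each object name to whether the queried robot
    state collides with it (from the simulator's collision checker);
    `allowed` is the set of object names listed in the language prompt."""
    y = int(any(contacts.values()))  # any collision at all
    y_lang = int(any(hit and name not in allowed
                     for name, hit in contacts.items()))  # undesired collision
    return {"y": y, "y_lang": y_lang}

# One state colliding only with a pot; the prompt allows touching the pot.
labels = make_labels({"pot": True, "chair": False}, allowed={"pot"})
assert labels == {"y": 1, "y_lang": 0}
```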
4.3 Collision Prediction Results Across Language Conditioning

Table 1: We evaluate the language-conditioned collision prediction accuracy of LACO in simulation. For reference, we also evaluate SceneCollisionNet [21], which does not support language conditioning; “Built-In Collision” refers to the ground-truth collision checker.

Accuracy per # conditioned objects (%)
Method | 0 | 1 | 2 | 3 | 4
Built-In Collision | 100.0 | - | - | - | -
SceneCollisionNet | 67.0 | - | - | - | -
LACO | 82.9 | 78.9 | 77.18 | 82.6 | 72.2

LACO possesses the ability to be modulated by language. In particular, any number of objects in the scene can be included in the language condition, allowing for flexibility in path planning. This ability is not found in built-in collision checkers or learned collision checkers, such as SceneCollisionNet [21], which are agnostic to desired and undesired collisions.

We evaluate the performance of LACO across different numbers of objects included in the language condition in Table 1. To measure collision prediction accuracy, we sample 10 trajectories with in total N ≈ 2000 states, and evaluate (1/N) Σᵢ₌₁ᴺ 1[y_l⁽ⁱ⁾ = 1_{C(oᵢ, sᵢ, lᵢ) > 0.5}]. We find that LACO is robust to different numbers of conditioned objects, though it performs best when not conditioned on any objects. In this special case, LACO becomes a typical collision checker, without the need to understand the semantics of the environment objects.

Even in the unconditional case, our method outperforms SceneCollisionNet [21], a point-cloud-based learned collision function. We use the official implementation of SceneCollisionNet, which is trained on a different simulator dataset. This distribution shift may be a reason for its poor performance in our environment. While our primary contribution is presenting a new paradigm of collision checking and path planning, the accuracy of our RGB-only method shows the promise of collision checking without extensive camera setups and point clouds.

4.4 Generalization Experiments

One advantage of LACO is its ability to be modulated by language, which is flexible, abstract, and simple. We may expect that, with its pretrained vision-language backbone, LACO may generalize to objects and language that are unseen in the training dataset.

Generalization to unseen language. We first evaluate generalization to unseen instructions for the seen objects. We compare the language-conditioned collision prediction accuracy with the original object name (Default), unseen synonyms of the object (Synonym), complex phrases describing the object (Description), and correct and incorrect colors (Color). For example, a collision with “hat” (Default) is evaluated with “beanie” for Synonym, “head-covering accessory” for Description, and “blue hat” for Color. For Color, we also use incorrect colors, e.g., “white hat”, which ask LACO to ignore the objects. The exhaustive list of such variations can be found in Appendix, Table 9.

Table 2: We evaluate generalization of LACO to instructions with unseen language.
Default | Synonym | Description | Color
78.9 | 63.9 | 71.4 | 77.0

In Table 2, we find strong generalization for Color, where the language references the seen class name. LACO also generalizes to Description, while Synonym leads to a 15.0% accuracy drop. One hypothesis is that the descriptions, which are typically longer and contain many keywords relating to the object, may be more informative than just a synonym. Furthermore, short language conditions, like “cap”, may even be ambiguous, as “cap” may refer to a bottle cap, a hat, or more. This suggests promise in future work of exploring stronger and more descriptive annotations, as we limit our language conditions to ShapeNet class names.

Generalization to unseen objects. We evaluate generalization to unseen objects across seen and unseen classes by measuring the collision prediction accuracy for language conditioning on a single unseen object.

Table 3: Evaluation of LACO's generalization to new objects.
Seen Class, Seen Object | Seen Class, Unseen Object | Unseen Class, Unseen Object
78.9 | 70.6 | 54.7

LACO achieves comparable collision prediction accuracy for unseen objects from seen classes, showing its strong generalization due to the pretrained vision encoder.
However, LACO shows 24.2% lower accuracy for objects from unseen classes. Unlike the generalization to unseen language alone in Table 2, generalization to unseen classes is more challenging, as it involves both class names and objects unseen during training.

4.5 Ablation Studies

Table 4: Ablation on single- and multi-view encoders.
Accuracy per # conditioned objects (%)
Method | 0 | 1 | 2 | 3 | 4
LACO | 82.9 | 78.9 | 77.2 | 82.6 | 72.2
Finetuning | 71.5 | 64.9 | 65.14 | 72.9 | 68.9
From scratch | 82.1 | 72.7 | 77.7 | 75.3 | 71.1
LACO + MV | 68.8 | 69.4 | 76.3 | 73.4 | 68.9
Finetuning + MV | 74.2 | 66.4 | 62.6 | 75.6 | 68.9
From scratch + MV | 66.0 | 67.9 | 67.1 | 78.6 | 72.7

Pretrained observation encoder. We investigate the benefit of using a pretrained encoder by comparing a CLIP pretrained vision encoder and a CLIP vision encoder trained from scratch. In Table 4, we find that a pretrained CLIP encoder consistently outperforms the one trained from scratch.

Multiview camera inputs. To extend to multi-camera RGB observations, we train a multi-view MAE to replace the CLIP vision encoder. The multi-view MAE is trained end-to-end to predict image reconstructions of two fixed camera views. Unlike MV-MAE [32] and Multi-MAE [33], we keep the original MAE masking ratio of 80% per view. When objects are entirely masked out from one view, we find that they can be reconstructed if present in the second view. Sample reconstructions and hyperparameters are included in Appendix C. In Table 4, we find that using a pretrained MV-MAE, whether frozen or with finetuning, outperforms training from scratch. However, multi-view feature extraction remains an open problem, as single-view features lead to stronger predictions.

Table 5: The dataset size ablation examines how performance varies as we alter the size of the dataset.
Accuracy per # conditioned objects (%)
Dataset Size | 0 | 1 | 2 | 3 | 4
50% | 82.9 | 74.2 | 73.4 | 78.0 | 73.6
80% | 81.2 | 70.0 | 72.6 | 72.2 | 65.8
100% | 82.9 | 78.9 | 77.2 | 82.6 | 72.2

Dataset size. Although Section 4.4 shows the generalization capability of LACO, its generalization to unseen classes of objects is limited. This may arise because we are training on a limited dataset of objects. As the quality and quantity of 3D assets increase, we may expect improved performance by training on more than a limited set of classes. We verify our hypothesis by varying the dataset size in Table 5. The results show that there is an improvement with the increased size of the dataset, but the improvement is marginal.

4.6 Language-Conditioned Path Planning Demonstrations

In this section, we showcase three tasks using language-conditioned path planning with LACO:
• Reach Target (No Lang): This task resembles a traditional path planning task, which aims to reach a target joint pose while avoiding obstacles. We do not condition on language.
• Reach Target (1 Lang): The objective is likewise to reach a target; however, 1 object is specified as collidable, allowing for more flexibility in plans.
• Push Object (1 Lang): The objective is to push an object forward. For this task, collisions are in fact desired, showcasing the usefulness of LAPP.

Table 6: We evaluate the success rates of LAPP-TrajOpt on three path planning tasks.
Reach Target (No Lang) | Reach Target (1 Lang) | Push Object (1 Lang)
7/10 | 8/10 | 9/10

We report the success rates of LAPP in Table 6. Each path plan is considered successful if a valid path is found and the path reaches the target or pushes the object without undesirable collisions. The push object task is particularly well-suited for language-conditioned path planning: TrajOpt can be initialized with a trajectory passing through the object and optimize collision constraints with other objects. Reaching targets and pushing objects can also be composed to perform more complex tasks, such as avoiding all obstacles before reaching the object the arm needs to push.
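Beyond TrajOpt, the thresholded LACO output can serve as a drop-in state-validity checker for sampling-based planners, as sketched below; the `laco` callable and its signature are assumptions for illustration.

```python
import numpy as np

class LacoValidityChecker:
    """Drop-in state-validity test for a sampling-based planner: a joint
    configuration is valid when the predicted collision probability,
    conditioned on the current image and prompt, stays below 0.5.
    `laco` is assumed to map (image, state, prompt) to a probability."""

    def __init__(self, laco, image: np.ndarray, prompt: str):
        self.laco, self.image, self.prompt = laco, image, prompt

    def __call__(self, state: np.ndarray) -> bool:
        return self.laco(self.image, state, self.prompt) < 0.5

# A planner (e.g. RRT) would call checker(state) on sampled configurations
# and on interpolated states along candidate edges.
```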
4.7 Real-World Experiments

We show real-world trajectories with LAPP-TrajOpt in Figure 5. LAPP-TrajOpt with LACO, pretrained on simulator data and finetuned with real-world data, is able to find plans in cluttered environments, using the language condition to discover a path that would otherwise be regarded as a trajectory with collisions.

Figure 5: We demonstrate successful execution of the LAPP-TrajOpt path planning algorithm on the real-robot system. A path to the goal, specified by the blue jar, is blocked by diverse objects on the table, so the robot needs to collide with some object. We inform it which object is safe and desired to collide with using the language prompts "pringles, cheezit", "pringles", and "spam", respectively.

5 Limitations

While LAPP with LACO offers promising advancements in contact-aware path planning, there are several limitations that should be acknowledged:

Lack of environment dynamics. LACO does not explicitly consider environment dynamics. Once an object is hit, it may react by being pushed or knocked down, potentially affecting the configuration and positions of other nearby objects. This restricts the ability of LAPP to handle dynamic environments and may lead to suboptimal or unsafe path plans when objects move significantly.

Limited language prompt scope. In our experiments, the language prompt is limited to specifying objects that are desirable or safe to collide with. While this provides valuable control over contact conditions, the current scope of language prompts may not cover the full range of instructions or interactions that a user may desire. Including a wider variety of language instructions and specifications could enhance the versatility and adaptability of LAPP and LACO.

Data generation efforts. The training process for LACO relies on a combination of synthetic simulation data and manually collected real-world data. Both require significant human engineering and labeling effort. Exploring advances in 3D asset availability [34] and simulation-to-real techniques [35, 36, 37, 38] could alleviate this limitation and enable more efficient training of LACO.

6 Conclusion

In conclusion, our proposed domain of Language-Conditioned Path Planning (LAPP) addresses the limitations of traditional collision-free path planning in contact-rich robotic manipulation tasks.
By integrating contact awareness into path planning, LAPP allows robots to make informed decisions about contact with the environment, enabling them to perform complex manipulation tasks effectively. As a first step towards LAPP, we propose Language-Conditioned Collision Functions (LACO), which learn to predict collisions based on visual inputs, language prompts, and robot configurations. This learned collision function eliminates the need for manual object annotations, point cloud data, or ground-truth object meshes, enabling flexible and adaptive path planning that incorporates both desired and controlled collisions.

Acknowledgments

We would like to thank Justin Kerr, Chung Min Kim, Younggyo Seo, and Hao Liu for their insightful advice on leveraging pretrained visual-language models and training transformer models. In addition, we thank Philipp Wu for assistance with the real-world experiments and Oleh Rybkin for feedback. This work was funded in part by DARPA RACER, Komatsu, and the BAIR Industrial Consortium.

References

[1] M. Overmars. A random approach to motion planning. Technical Report RUU-CS-92-32, Department of Computer Science, Utrecht University, 1992.
[2] L. Kavraki and J.-C. Latombe. Randomized preprocessing of configuration space for fast path planning. In IEEE International Conference on Robotics and Automation, 1994.
[3] N. M. Amato and Y. Wu. A randomized roadmap method for path and manipulation planning. In IEEE International Conference on Robotics and Automation, 1996.
[4] S. M. LaValle. Rapidly-exploring random trees: A new tool for path planning. Technical report, Iowa State University, 1998.
[5] J. J. Kuffner and S. M. LaValle. RRT-Connect: An efficient approach to single-query path planning. In IEEE International Conference on Robotics and Automation, volume 2, pages 995–1001, 2000.
[6] S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research, 30(7):846–894, 2011.
[7] J. Schulman, J. Ho, A. X. Lee, I. Awwal, H. Bradlow, and P. Abbeel. Finding locally optimal, collision-free trajectories with sequential convex optimization. In Robotics: Science and Systems, 2013.
[8] F. Burget, A. Hornung, and M. Bennewitz. Whole-body motion planning for manipulation of articulated objects. In IEEE International Conference on Robotics and Automation, pages 1656–1662, 2013.
[9] S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, and P. Abbeel. Combined task and motion planning through an extensible planner-independent interface layer. In IEEE International Conference on Robotics and Automation, pages 639–646, 2014.
[10] Y. Kantaros, S. Kalluraya, Q. Jin, and G. J. Pappas. Perception-based temporal logic planning in uncertain semantic maps. IEEE Transactions on Robotics, 38(4):2536–2556, 2022.
[11] V. Vasilopoulos, G. Pavlakos, S. L. Bowman, J. D. Caporale, K. Daniilidis, G. J. Pappas, and D. E. Koditschek. Reactive semantic planning in unexplored semantic environments using deep perceptual feedback. IEEE Robotics and Automation Letters, 5(3):4455–4462, 2020.
[12] Y. Liang, B. Chen, and S. Song. SSCNav: Confidence-aware semantic scene completion for visual semantic navigation. In IEEE International Conference on Robotics and Automation, pages 13194–13200. IEEE, 2021.
[13] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, 2021.
[14] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn.
Bc-z:Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning ,pages 991–1002, 2021.[15] A. Brohan, N. Brown, J. Carbajal, Y . Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus-man, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Julian,D. Kalashnikov, Y . Kuang, I. Leal, K.-H. Lee, S. Levine, Y . Lu, U. Malla, D. Manjunath,I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao,M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran,V . Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. Rt-1:Robotics transformer for real-world control at scale. In Robotics: Science and Systems , 2023.9[16] A. Brohan, Y . Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang,R. Julian, et al. Do as i can, not as i say: Grounding language in robotic affordances. InConference on Robot Learning , 2022.[17] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code aspolicies: Language model programs for embodied control. In IEEE International Conferenceon Robotics and Automation , 2023.[18] A. H. Qureshi, A. Simeonov, M. J. Bency, and M. C. Yip. Motion planning networks. In IEEEInternational Conference on Robotics and Automation , pages 2118–2124, 2019.[19] A. H. Qureshi, Y . Miao, A. Simeonov, and M. C. Yip. Motion planning networks: Bridging thegap between learning-based and classical motion planners. IEEE Transactions on Robotics , 37(1):48–66, 2020.[20] H. Ha, J. Xu, and S. Song. Learning a decentralized multi-arm motion planner. In Conferenceon Robot Learning , pages 103–114, 2021.[21] M. Danielczuk, A. Mousavian, C. Eppner, and D. Fox. Object rearrangement using learnedimplicit collision functions. In IEEE International Conference on Robotics and Automation ,pages 6010–6017, 2021.[22] A. Murali, A. Mousavian, C. Eppner, A. Fishman, and D. Fox. Cabinet: Scaling neuralcollision detection for object rearrangement with procedural scene generation. arXiv preprintarXiv:2304.09302 , 2023.[23] D. Son and B. Kim. Local object crop collision network for efficient simulation of non-convexobjects in gpu-based simulators. In Robotics: Science and Systems , 2023.[24] H. Liu, L. Lee, K. Lee, and P. Abbeel. Instruction-following agents with multimodal transformer.arXiv preprint arXiv:2210.13431 , 2022.[25] C. Wang, M. Chai, M. He, D. Chen, and J. Liao. Clip-nerf: Text-and-image driven manipulationof neural radiance fields. In IEEE Conference on Computer Vision and Pattern Recognition ,pages 3835–3844, 2022.[26] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell,P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision.InInternational Conference on Machine Learning , pages 8748–8763, 2021.[27] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva,S. Song, H. Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprintarXiv:1512.03012 , 2015.[28] B. Calli, A. Singh, A. Walsman, S. Srinivasa, P. Abbeel, and A. M. Dollar. The ycb object andmodel set: Towards common benchmarks for manipulation research. In 2015 internationalconference on advanced robotics (ICAR) , pages 510–517. IEEE, 2015.[29] A. Adeniji, A. Xie, C. Sferrazza, Y . Seo, S. James, and P. Abbeel. Language reward modulationfor pretraining reinforcement learning. 
arXiv preprint arXiv:2308.12270 , 2023.[30] S. James, M. Freese, and A. J. Davison. Pyrep: Bringing v-rep to deep robot learning. arXivpreprint arXiv:1906.11176 , 2019.[31] E. Rohmer, S. P. Singh, and M. Freese. V-rep: A versatile and scalable robot simulationframework. In IEEE/RSJ International Conference on Intelligent Robots and Systems , pages1321–1326, 2013.[32] Y . Seo, J. Kim, S. James, K. Lee, J. Shin, and P. Abbeel. Multi-view masked world models forvisual robotic manipulation. In International Conference on Machine Learning , 2023.10[33] R. Bachmann, D. Mizrahi, A. Atanov, and A. Zamir. MultiMAE: Multi-modal multi-taskmasked autoencoders. In European Conference on Computer Vision , 2022.[34] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani,A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects, 2022.[35] S. James, A. J. Davison, and E. Johns. Transferring end-to-end visuomotor control fromsimulation to real world for a multi-stage task. In Conference on Robot Learning , pages334–343. PMLR, 2017.[36] J. Matas, S. James, and A. J. Davison. Sim-to-real reinforcement learning for deformable objectmanipulation. In Conference on Robot Learning , pages 734–743. PMLR, 2018.[37] S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell,and K. Bousmalis. Sim-to-real via sim-to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks. In Proceedings of the IEEE/CVF Conference on ComputerVision and Pattern Recognition , pages 12627–12637, 2019.[38] J. So, A. Xie, S. Jung, J. Edlund, R. Thakker, A. Agha-mohammadi, P. Abbeel, and S. James.Sim-to-real via sim-to-seg: End-to-end off-road autonomous driving without real data. InConference on Robot Learning , 2022.11A Implementation DetailsTable 7: LACO hyperparameters.Hyperparameter ValueLearning rate 3e-5Learning rate scheduler cosine decay to 1e-7# Mini-batches 32Training steps 300000State tokenizer hidden units (4096, 4096, 4096)Prediction net hidden units (512, 256)Observation tokenizer/encoder CLIP B/16Language tokenizer/encoder CLIP B/16# Attention layers 4# Attention heads 16# Token dimension 768Table 8: TrajOpt hyperparameters.Hyperparameter Value# Steps 10Velocity constraint [-0.4, 0.4]μ0 2s0 0.01c 0.75τ+1.1τ−0.5k 10ftol 0.0001xtol 0.0001ctol 0.01Solver ECOS# Penalty iterations 5# Convexify iterations 5# Trust iterations 2Min. trust box size 0.0001B Language Prompts for EvaluationLanguage prompts are included in Table 9.Table 9: Language prompts for evaluation.Original Noun Synonym Descriptionplanter plant stand bin for plantscap hat, snapback head-covering accessory, head-covering article of clothingboat sailboat, cruise ship oceanic vehicleyachting cap hat, sailor hat head-covering accessoryairplane aircraft, airline aerial vehicle, object that takes flightomnibus vehicle long vehicle for travel, toy with wheelsbottle water bottle, bottle container container for fluids, travel-sized water containerflat cap hat, beanie head-covering accessorylaptop computer electronic device, laptop device for accessing internet, typing device for workColors red, yellow, blue, purple, pink, gray, black, white12C Multi-View MAEIn addition to single-view observations, we also experiment with multi-view observations. Instead ofusing the pre-trained CLIP encoder, we pretrain a multi-view MAE from scratch on the simulatorimages. 
Then, we use multi-view features from the frozen multi-view MAE model for collision prediction.

In particular, we do not apply any special masking strategies. Though recent works on multi-view MAEs [33, 32] have applied such strategies, we find that even the basic MAE strategy leads to good reconstruction, even of parts occluded in one of the views. For instance, in Figure 6, the blue object is completely masked out of the second view, yet the second view is able to successfully reconstruct the object.

We use an encoder with 2 layers and 16 heads, a decoder with 2 layers and 16 heads, a token dimension of 768, a patch size of 16, and a masking ratio of 80%. Our images are preprocessed to (224, 224), following convention. The learning rate is 3e-5. We add learned embeddings to the tokens of each view.

The results in Table 4 show that multi-view LACO is worse than single-view LACO. We hypothesize that the pretrained CLIP encoder is crucial for extracting useful features for collision prediction. However, we believe that if the multi-view MAE were trained on large web-scale data, it could outperform single-view LACO.

Figure 6: We pretrain a multi-view MAE from scratch on simulator images. k is the number of unmasked tokens passed into each view. Note that the blue object is completely masked out of the bottom view, yet it is able to be reconstructed.
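A minimal sketch of the independent per-view masking described in this appendix, assuming a standard ViT-style patch tokenization; the tensor layout and helper are illustrative, not the paper's code.

```python
import torch

def mask_per_view(tokens, mask_ratio=0.8):
    """Independently drop 80% of patch tokens in each view before the encoder.

    tokens: (batch, num_views, num_patches, dim) patch embeddings.
    Returns the kept tokens plus the shuffle indices needed at decode time.
    (Illustrative layout; the paper's implementation may differ.)
    """
    b, v, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, v, n)               # independent noise per view
    ids_shuffle = noise.argsort(dim=-1)       # random permutation of patches
    ids_keep = ids_shuffle[..., :n_keep]
    kept = torch.gather(
        tokens, 2, ids_keep.unsqueeze(-1).expand(-1, -1, -1, d))
    return kept, ids_shuffle
```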
DATT: Deep Adaptive Trajectory Tracking for Quadrotor Control

Kevin Huang, University of Washington, kehuang@cs.washington.edu
Rwik Rana, University of Washington, rwik2000@uw.edu
Alexander Spitzer, University of Washington, spitzer@cs.washington.edu
Guanya Shi, Carnegie Mellon University, guanyas@andrew.cmu.edu
Byron Boots, University of Washington, bboots@cs.washington.edu

Abstract: Precise arbitrary trajectory tracking for quadrotors is challenging due to unknown nonlinear dynamics, trajectory infeasibility, and actuation limits. To tackle these challenges, we present Deep Adaptive Trajectory Tracking (DATT), a learning-based approach that can precisely track arbitrary, potentially infeasible trajectories in the presence of large disturbances in the real world. DATT builds on a novel feedforward-feedback-adaptive control structure trained in simulation using reinforcement learning. When deployed on real hardware, DATT is augmented with a disturbance estimator using L1 adaptive control in closed loop, without any fine-tuning. DATT significantly outperforms competitive adaptive nonlinear and model predictive controllers for both feasible smooth and infeasible trajectories in unsteady wind fields, including challenging scenarios where the baselines completely fail. Moreover, DATT can efficiently run online with an inference time of less than 3.2 ms, less than 1/4 that of the adaptive nonlinear model predictive control baseline.¹

Keywords: Quadrotor, Reinforcement Learning, Adaptive Control

1 Introduction

Executing precise and agile flight maneuvers is important for the ongoing commoditization of unmanned aerial vehicles (UAVs), in applications such as drone delivery, search and rescue, and urban air mobility. In particular, accurately following arbitrary trajectories with quadrotors is among the most notable challenges in precise flight control, for the following reasons. First, quadrotor dynamics are highly nonlinear and underactuated, and often hard to model due to unknown system parameters (e.g., motor characteristics) and uncertain environments (e.g., complex aerodynamics from unknown wind gusts). Second, aggressive trajectories demand operating at the limits of system performance, requiring awareness and proper handling of actuation constraints, especially for quadrotors with small thrust-to-weight ratios. Finally, the arbitrary desired trajectory might not be dynamically feasible (i.e., it may be impossible to stay on such a trajectory), which necessitates long-horizon reasoning and optimization in real time. For instance, to stay close to the five-pointed-star trajectory in Fig. 1, which is infeasible due to the sharp changes of direction, the quadrotor must predict, plan, and react online before the sharp turns.

Traditionally, there are two commonly deployed control strategies for accurate trajectory following with quadrotors: nonlinear control based on differential flatness and model predictive control (MPC).

¹ Videos and demonstrations at https://sites.google.com/view/deep-adaptive-traj-tracking and code at https://github.com/KevinHuang8/DATT.

Figure 1: Trajectory visualizations for example infeasible trajectories. (a-c) Long-exposure photos of different methods for an equilateral triangle reference trajectory. (d) Long-exposure photo of our method for a five-pointed star reference trajectory. (e) Quantitative comparisons between our approach and baselines for the five-pointed star. Numbers indicate the tracking error in meters.
However, nonlinear control methods, despite their proven stability and efficiency, are constrained to differentially flat trajectories (i.e., smooth trajectories with bounded velocity, acceleration, jerk, and snap) satisfying actuation constraints [1, 2, 3]. On the other hand, MPC approaches can potentially incorporate constraints and non-smooth arbitrary trajectories [4, 5], but their performance relies heavily on the accuracy of the model and the optimality of the solver for the underlying nonconvex optimization problems, which can also be expensive to run online.

Reinforcement learning (RL) has shown potential flexibility and efficiency in trajectory tracking problems [6, 7, 8]. However, most existing works focus on tracking smooth trajectories in stationary environments. In this work, we aim to design an RL-based flight controller that can (1) follow feasible trajectories as accurately as traditional nonlinear controllers and MPC approaches; (2) accurately follow arbitrary infeasible and dynamic trajectories to the limits of the hardware platform; and (3) adapt to unknown system parameters and uncertain environments online. Our contributions are:

• We propose DATT, a novel feedforward-feedback-adaptive policy architecture and training pipeline for RL-based controllers to track arbitrary trajectories. In training, this policy is conditioned on the ground-truth translational disturbance in a simulator, and this disturbance is estimated on the real system using L1 adaptive control in closed loop;
• On a real, commercially available, lightweight, and open-sourced quadrotor platform (Crazyflie 2.1 with upgraded motors), we show that our approach can track feasible smooth trajectories with 27%-38% smaller errors than adaptive nonlinear or adaptive MPC baselines. Moreover, our approach can effectively track infeasible trajectories where the nonlinear baseline completely fails, with a 39% smaller error than MPC at 1/4 the computation time;
• On the real quadrotor platform, we show that our approach can adapt zero-shot to unseen turbulent wind fields with an extra cardboard drag plate, for both smooth desired trajectories and infeasible trajectories. Specifically, for smooth trajectories, our method achieves up to 22% smaller errors than the state-of-the-art adaptive nonlinear control method. In the most challenging scenario (infeasible trajectories with wind and a drag plate), our method significantly outperforms the adaptive MPC approach with 15% less error at 1/4 the computation time.

2 Problem Statement and Related Work

2.1 Problem Statement

In this paper, we let ẋ denote the derivative of a continuous variable x with respect to time. We consider the following quadrotor dynamics:

ṗ = v,    m v̇ = m g + R e₃ f_Σ + d,    (1a)
Ṙ = R S(ω),    J ω̇ = J ω × ω + τ,    (1b)

where p, v, g ∈ R³ are the position, velocity, and gravity vectors in the world frame, R ∈ SO(3) is the attitude rotation matrix, ω ∈ R³ is the angular velocity in the body frame, m and J are the mass and inertia matrix, e₃ = [0; 0; 1], and S(·) : R³ → so(3) maps a vector to its skew-symmetric matrix form. Moreover, d is the time-varying translational disturbance, which includes parameter mismatch (e.g., mass error) and environmental perturbation (e.g., wind perturbation) [9, 10, 11, 12]. The control input is the total thrust f_Σ and the torque τ in the body frame. For quadrotors, there is a linear invertible actuation matrix between [f_Σ; τ] and the four motor speeds.

We let x_t denote the temporal discretization of x at time step t ∈ Z₊. A minimal simulation step implementing (1) is sketched below.
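As a sanity check on the dynamics above, here is a minimal explicit-Euler step of (1). The rotation update and the parameter values are illustrative simplifications and are not the authors' simulator, which integrates on the SO(3) manifold.

```python
import numpy as np

def skew(w):
    """S(w): map a 3-vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def step(p, v, R, w, f_sigma, tau, d, m=0.04, J=np.eye(3) * 1e-5,
         g=np.array([0.0, 0.0, -9.81]), dt=0.02):
    """One Euler step of Eq. (1); mass/inertia are rough Crazyflie-scale guesses."""
    e3 = np.array([0.0, 0.0, 1.0])
    p = p + dt * v
    v = v + dt * (g + R @ e3 * (f_sigma / m) + d / m)   # m v' = m g + R e3 f + d
    R = R @ (np.eye(3) + dt * skew(w))                  # crude attitude update
    w = w + dt * np.linalg.solve(J, np.cross(J @ w, w) + tau)
    return p, v, R, w
```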
In this work, we focus on the 3-D trajectory tracking problem with a desired trajectory p^d_1, p^d_2, ..., p^d_T, using the average tracking error (1/T) Σ_{t=1}^{T} ∥p_t − p^d_t∥ as the performance metric. We make no assumptions on the desired trajectory p^d. In particular, p^d is not necessarily differentiable or smooth.

2.2 Differential Flatness

The differential flatness property of quadrotors allows efficient generation of control inputs to follow smooth trajectories [1, 5]. Differential flatness has been extended to account for unknown linear disturbances [3] and learned nonlinear disturbances [13], and to deal with the singularities associated with pitching and rolling past 90 degrees [14]. While differential-flatness-based methods can show impressive performance for smooth and aggressive trajectories, they struggle with non-differentiable trajectories or trajectories that require reasoning about actuation constraints.

2.3 Model Predictive Control (MPC)

MPC is a widely used optimal control approach that optimizes control inputs online over a finite time horizon, considering system dynamics and constraints [15, 16].

Model Predictive Path Integral control (MPPI) [4, 17] is a sampling-based MPC incorporating a path integral control formulation and stochastic sampling. Unlike deterministic optimization, MPPI employs a stochastic optimization approach where control sequences are sampled from a distribution. These samples are then evaluated based on a cost function, and the distribution is iteratively updated to improve control performance. Recently, MPPI has been applied to quadrotor control [18, 19].

Gradient-based nonlinear MPC techniques have been widely used for rotary-wing flying robots. Hanover et al. [12] and Sun et al. [5] have shown good performance of nonlinear MPC in agile trajectory tracking of drones and adaptation to external perturbations. Moreover, these techniques are being used for vision-based agile maneuvers of drones [20, 7].

However, for either sampling-based or gradient-based MPC, the control performance relies heavily on the optimality of the optimizer for the underlying nonconvex problems. Generally speaking, MPC-based approaches require much more compute than differential-flatness-based methods [5]. Moreover, MPC's robustness and adaptability for infeasible trajectories remain unclear, since existing works consider smooth trajectory tracking. In this paper, we implement MPPI [4] and L1-augmented MPPI [18] as our baselines.

2.4 Adaptive Control and Disturbance Estimation

Adaptive controllers aim to improve control performance through online estimation of unknown system parameters in closed loop. For quadrotors, adaptive controllers typically estimate a three-dimensional force disturbance d [21, 10, 22, 23, 18]. Most recently, L1 adaptive control for quadrotors [11] has been shown to improve trajectory tracking performance in the presence of complex and time-varying disturbances such as sloshing payloads and mismatched propellers. Recently, deep-learning-based adaptive flight controllers have also emerged [10, 24, 25].

Figure 2: Algorithm overview. Blue, yellow, and green blocks represent feedforward, feedback, and adaptation modules, respectively. In training, the policy has access to the true disturbance d, whereas on the real system we use L1 adaptive control to obtain the disturbance estimate d̂ in closed loop.
Learning dynamical models is a common technique to improve quadrotor trajectory tracking performance [9, 26, 27, 28] and can provide more accurate disturbance estimates than purely reactive adaptive control, since the disturbance is modeled over the state and control space. In this work, we use the disturbance estimate from L1 adaptive control, but we note that our method can leverage any disturbance estimation or model learning technique.

In particular, Rapid Motor Adaptation (RMA) is a supervised-learning-based approach that aims to predict environmental parameters using a history of state-action pairs, which are then input to the controller [29]. This approach has been shown to work on real legged robots, but we find that it can be susceptible to domain shift during sim2real transfer on drones.

2.5 Reinforcement Learning for Quadrotor Control

Reinforcement learning for quadrotor stabilization is studied in [6, 30, 24]. Molchanov et al. [30] use domain randomization to show policy transfer between multiple quadrotors. Kaufmann et al. [31] compare three different policy formulations for quadrotor trajectory tracking and find that outputting body thrust and body rates outperforms outputting desired linear velocities or individual rotor thrusts. [31] only focuses on feasible trajectories, while in this work we aim to track infeasible trajectories as accurately as possible. Simulation-based learning with imitation of an expert MPC controller is used to generate acrobatic maneuvers in [7]. In this work, we focus on trajectories and environments for which obtaining an accurate expert even in simulation is difficult or expensive, and thus use reinforcement learning to learn the controller.

3 Methods

3.1 Algorithm Overview

A high-level overview of DATT is given in Fig. 2. Using model-free RL, DATT learns a neural network quadrotor controller π capable of tracking arbitrary reference trajectories, including infeasible trajectories, while being able to adapt to various environmental disturbances, even those unseen during training. We condition our policy on a learned feedforward embedding h, which encodes the desired reference trajectory, in the body frame, over a fixed time horizon, as well as the force disturbance d in Eq. (1).

The state x_t consists of the position p, the velocity v, and the orientation R, represented as a quaternion q. We convert p and v to the body frame and input them to π. Our policy outputs u, which includes the desired total thrust f_{Σ,des} and the desired body rates ω_des. In summary, our controller functions as follows:

h_t = φ( R_t^⊤(p_t − p^d_t), ..., R_t^⊤(p_t − p^d_{t+H}) )    (2a)
u_t = π( R_t^⊤ p_t, R_t^⊤ v_t, q_t, h_t, R_t^⊤(p_t − p^d_t), d_t )    (2b)

We define the expected reward for our policy conditioned on the reference trajectory as follows:

J(π | p^d_{t:t+H}) = E_{(x,u)∼π} [ Σ_{t=0}^{∞} r(x_t, u_t | p^d_{t:t+H}) ]    (3a)
r(x_t, u_t | p^d_{t:t+H}) = ∥p_t − p^d_t∥ + 0.5 ∥ψ_t∥ + 0.1 ∥v_t∥    (3b)

ψ_t denotes the yaw of the drone. The reward function optimizes for accurate position and yaw tracking, with a small velocity regularization penalty.
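As printed, r in (3b) sums error norms, so it reads as a cost to be minimized. A minimal sketch, under our assumption (not stated in the text) that the quantity is negated to obtain an RL reward:

```python
import numpy as np

def tracking_reward(p, p_ref, yaw, v):
    """Per-step reward from Eq. (3b), negated so that PPO maximizes it.

    p, p_ref, v: 3-vectors (position, desired position, velocity);
    yaw: scalar yaw angle. The weights 0.5 and 0.1 follow the paper.
    """
    cost = (np.linalg.norm(p - p_ref)
            + 0.5 * abs(yaw)
            + 0.1 * np.linalg.norm(v))
    return -cost  # assumption: the printed r is a cost, so reward = -r
```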
π and φ are jointly optimized with respect to J using the Proximal Policy Optimization (PPO) algorithm [32].

3.2 Arbitrary Trajectory Tracking

Classical controllers, such as differential-flatness controllers, rely on higher-order position derivatives of the reference trajectory for accurate tracking (velocity, acceleration, jerk, and snap), which are needed for incorporating future information about the reference, i.e., feedforward control. However, arbitrary trajectories can have undefined higher-order derivatives, and exact tracking may not be feasible. With RL, a controller can be learned to optimally track an arbitrary reference trajectory given just the desired future positions p^d_t. Thus, we input only the desired positions, in the body frame, into a feedforward encoder φ, which learns the feedforward embedding that contains the information of the desired future reference positions. For simplicity, we assume the desired yaw for all trajectories is zero. The reference positions are provided evenly spaced from the current time t to the feedforward horizon t + H, and are transformed into the body frame.

3.3 Adaptation to Disturbance

During training in simulation, we add a random time-varying force perturbation d to the environment. We use L1 adaptive control [11, 33] to estimate d, which is directly passed into our policy network during both training and inference. L1 adaptive control first builds a closed-loop estimator to compute the difference between the predicted and true disturbance, and then uses a low-pass filter to update the prediction. The adaptation law is given by:

dv̂/dt = g + R e₃ f_Σ/m + d̂/m + A_s(v̂ − v)    (4a)
d̂_new = −(e^{A_s dt} − I)^{−1} A_s e^{A_s dt} (v̂ − v)    (4b)
d̂ ← low_pass_filter(d̂, d̂_new)    (4c)

where A_s is a Hurwitz matrix, dt is the discretization step length, and v̂ is the velocity prediction. Generally speaking, (4a) is a velocity predictor using the estimated disturbance d̂, while (4b) and (4c) update and filter d̂. Compared to other sim-to-real techniques such as domain randomization [30] and student-teacher adaptation [24], the adaptive-control-based disturbance adaptation in DATT tends to be more reactive and robust, thanks to the closed-loop nature and the provable stability and convergence of L1 adaptive control. A minimal sketch of this adaptation law is given below.

We note that DATT provides a general framework for adaptive control. Other methods to estimate d̂, for example RMA, can easily be used instead, but we found them to be less robust than L1 adaptive control. We compare against an RMA baseline in our experiments.
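A minimal sketch of the L1 update in (4), assuming a scalar-gain Hurwitz matrix A_s = −a·I and a first-order exponential low-pass filter; the gains are illustrative choices, not the paper's values.

```python
import numpy as np

class L1Estimator:
    """Closed-loop disturbance estimator implementing Eq. (4)."""

    def __init__(self, m, a=5.0, alpha=0.9, dt=0.02,
                 g=np.array([0.0, 0.0, -9.81])):
        self.m, self.a, self.alpha, self.dt, self.g = m, a, alpha, dt, g
        self.v_hat = np.zeros(3)
        self.d_hat = np.zeros(3)

    def update(self, v, R, f_sigma):
        e3 = np.array([0.0, 0.0, 1.0])
        # (4a): velocity predictor driven by the current disturbance estimate
        v_hat_dot = (self.g + R @ e3 * (f_sigma / self.m)
                     + self.d_hat / self.m - self.a * (self.v_hat - v))
        self.v_hat = self.v_hat + self.dt * v_hat_dot
        # (4b): with A_s = -a*I, -(e^{A_s dt} - I)^{-1} A_s e^{A_s dt}
        # reduces to the scalar gain a*phi / (1 - phi)
        phi = np.exp(-self.a * self.dt)
        d_new = -(self.a * phi / (1.0 - phi)) * (self.v_hat - v)
        # (4c): low-pass filter the estimate
        self.d_hat = self.alpha * self.d_hat + (1.0 - self.alpha) * d_new
        return self.d_hat
```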
4 Experiments

4.1 Simulation and Training

Training is done in a custom quadrotor simulator that implements (1) using on-manifold integration, with body thrust and angular velocity as the inputs to the system. To convert the desired body thrust f_{Σ,des} and body rate ω_des output by the controller into the actual thrust and body rate of the drone in simulation, we use a first-order time-delay model:

ω_t = ω_{t−1} + k(ω_des − ω_{t−1})    (5a)
f_{Σ,t} = f_{Σ,t−1} + k(f_{Σ,des} − f_{Σ,t−1})    (5b)

We set k to a fixed value of 0.4, which we found worked well on the real drone. In practice, the algorithm generalizes well to a large range of k, even when training on a fixed k. Our simulator effectively runs at 50 Hz, with dt = 0.02 for each simulation step.

We train across a series of xy-planar smooth and infeasible reference trajectories. The smooth trajectories are randomized degree-five polynomials and series of degree-five polynomials chained together. The infeasible trajectories are what we refer to as zigzag trajectories: trajectories that linearly connect a series of random waypoints and have either zero or undefined acceleration. The average speed of the infeasible trajectories is approximately 2 m/s. See Appendix C for more details on the reference trajectories.

At the start of each episode, we apply a force perturbation d with randomized direction and strength in the range [−3.5 m/s², 3.5 m/s²], representing translational disturbances. We then model the time-varying disturbance as Brownian motion; at each time step, we update d ← d + ε, with ε ∈ R³, ε ∼ N(0, Σ dt). We chose Σ = 0.01 I. This is meant to model potentially complex time- and state-dependent disturbances at inference time, while having few modeling parameters, as we wish to demonstrate zero-shot generalization to complex target domains without prior knowledge. We run each episode for a total of 500 steps, corresponding to 10 seconds. By default, we set H to 0.6 s with 10 feedforward reference terms. In Appendix A, we show ablation results for various horizons.

We also note that stable training and best performance require fixing an initial trajectory for the first 2.5M steps of training (see Appendix A for more details). Only after that initial period do we begin randomizing the trajectory. We train the policy using PPO for a total of 20M steps. Training takes slightly over 3 hours on an NVIDIA 3080 GPU.

4.2 Hardware Setup and the Low-Level Attitude Rate Controller

We conduct hardware experiments with the Bitcraze Crazyflie 2.1 equipped with the longer 20 mm motors from the thrust upgrade bundle for more agility. The quadrotor as tested weighs 40 g and has a thrust-to-weight ratio of slightly under 2.

Position and velocity state estimation feedback is provided by the OptiTrack motion capture system at 50 Hz to an offboard computer that runs the controller. The Crazyflie quadrotor provides orientation estimates via a 2.4 GHz radio, and control commands are sent to the quadrotor over the same radio at 50 Hz. Communication with the drone is handled using the Crazyswarm API [34]. Body rate commands ω_des received by the drone are converted to torque commands τ using a custom low-level PI attitude rate controller on the firmware: τ = −K_{ωP}(ω − ω_des) − K_{ωI} ∫(ω − ω_des). Finally, this torque command and the desired total thrust f_{Σ,des} from the RL policy are converted to motor thrusts using the invertible actuation matrix.

4.3 Baselines

We compare our reinforcement learning approach against two nonlinear baselines: differential-flatness-based feedback control and sampling-based model predictive control (MPC) [4]. We also compare using L1 adaptive control, which we propose, against RMA.

Nonlinear Tracking Controller and L1 Adaptive Control. The differential-flatness-based controller baseline consists of a PID position controller, which computes a desired acceleration vector, and a tilt-prioritized nonlinear attitude controller, which computes the body thrust f_Σ and desired body angular velocity ω_des:

a_fb = −K_P(p − p^d) − K_D(v − v^d) − K_I ∫(p − p^d) + a^d − g − d̂/m,    (6a)
z_fb = a_fb / ∥a_fb∥,   z = R e₃,   f_Σ = a_fb^⊤ z,    (6b)
ω_des = −K_R (z_fb × z) + ψ_fb z,   ψ_fb = −K_yaw(ψ ⊖ ψ_ref),    (6c)

where d̂ is the disturbance estimate. For the nonlinear baseline, we set d̂ = 0, and for L1 adaptive control we use (4) to compute d̂ in real time [11]. For our experiments, we set K_P = diag([6 6 6]), K_I = diag([1.5 1.5 1.5]), K_D = diag([4 4 4]), K_R = diag([120 120 0]), and K_yaw = 13.75. PID gains were empirically tuned on the hardware platform to track both smooth and infeasible trajectories while minimizing crashes. A sketch of this baseline controller follows.
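A minimal sketch of the baseline in (6), using the gains listed above; the integral-term handling and the yaw-difference operator are simplified assumptions.

```python
import numpy as np

KP, KI, KD = np.diag([6.0] * 3), np.diag([1.5] * 3), np.diag([4.0] * 3)
KR, KYAW = np.diag([120.0, 120.0, 0.0]), 13.75
G = np.array([0.0, 0.0, -9.81])

def flatness_baseline(p, v, R, p_d, v_d, a_d, yaw, yaw_ref, p_err_int, d_hat, m):
    """Tilt-prioritized tracking controller from Eq. (6).

    p_err_int: running integral of (p - p_d), maintained by the caller.
    Returns the (mass-normalized, an assumption here) thrust and body rates.
    """
    a_fb = (-KP @ (p - p_d) - KD @ (v - v_d) - KI @ p_err_int
            + a_d - G - d_hat / m)                       # (6a)
    z_fb = a_fb / np.linalg.norm(a_fb)                   # desired thrust axis
    z = R @ np.array([0.0, 0.0, 1.0])                    # current thrust axis
    f_sigma = a_fb @ z                                   # (6b)
    yaw_err = np.arctan2(np.sin(yaw - yaw_ref),
                         np.cos(yaw - yaw_ref))          # wrapped ψ ⊖ ψ_ref
    w_des = -KR @ np.cross(z_fb, z) + (-KYAW * yaw_err) * z   # (6c)
    return f_sigma, w_des
```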
Nonlinear MPC and Adaptive Nonlinear MPC. We use Model Predictive Path Integral (MPPI) control [4] as our second nonlinear baseline. MPPI is a sampling-based nonlinear optimal control technique that computes the optimal control sequence with respect to a known dynamics model and a specified cost function. In our implementation, we use (1) (with d = 0) as the dynamics model, with the body thrust f_Σ and angular velocity ω as the control input. The cost function is the sum of the position error norms along k = 40 horizon steps. We use 8192 samples, dt = 0.02, and a temperature of 0.05 for the softmax (one update step is sketched at the end of this subsection). For adaptive MPC, similar to prior works [18, 12], we augment the standard MPPI with the disturbance estimate d̂ from L1 adaptive control, which we refer to as L1-MPC.

RMA. We compare against RMA as our adaptive control baseline. Instead of using L1 to estimate d̂, we train an adaptation neural network ψ that predicts d̂ from a history of state-action pairs using the RMA method, similar to prior work [29]. We first train our policy π in sim using PPO as usual, but conditioned on the ground-truth d. To train ψ, we then roll out π with d̂ predicted by a randomly initialized ψ for 500 timesteps. ψ is then trained with supervised learning to minimize the loss ∥d̂ − d∥. We repeat this process for 10000 iterations, at which point the loss converges. Our adaptation network ψ takes as input the previous 50 state-action pairs, and its architecture consists of three 1D convolutional layers with 64 channels and a kernel size of 8 each, followed by three fully connected layers of size 32 with ReLU activations.

4.4 Arbitrary Trajectory Tracking

We first evaluate the trajectory tracking performance of DATT compared to the baselines in the absence of disturbances. We test on both infeasible zigzag trajectories and smooth polynomial trajectories. Each controller is run twice on the same bank of 10 random zigzag trajectories and 10 random polynomials. Results are shown in Table 1. For completeness, we also compare with the tracking performance of the adaptive controllers in the absence of any disturbances, and we compare our method against a version without adaptation, i.e., with d̂ = 0 enforced.

Table 1: Tracking error (in m) of DATT vs. baselines, without any environmental disturbances (no wind or plate). "crash" indicates a crash on all ten trajectory seeds.

Method                       Smooth trajectory   Infeasible trajectory   Inference time (ms)
Nonlinear tracking control   0.098 ± 0.012       crash                   0.21
L1 adaptive control          0.091 ± 0.009       crash                   0.93
MPC                          0.104 ± 0.009       0.183 ± 0.027           12.62
L1-MPC                       0.088 ± 0.010       0.181 ± 0.031           13.10
DATT (w/ d̂ = 0)              0.054 ± 0.013       0.089 ± 0.026           2.41
DATT                         0.049 ± 0.017       0.083 ± 0.023           3.17

We see that DATT achieves the most accurate tracking, at a fraction of the compute cost of MPC. With our current gains, the nonlinear and L1 adaptive control baselines are unable to track the infeasible trajectories. With reduced controller gains, it is possible that these controllers would not crash when tracking the infeasible trajectories, but doing so would greatly degrade their performance on smooth trajectories.
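For reference, one MPPI update with the settings above (8192 samples, horizon 40, temperature 0.05) looks roughly as follows; the dynamics and cost interfaces and the noise scale are our assumptions, and a practical implementation would vectorize the rollouts.

```python
import numpy as np

def mppi_update(u_nominal, x0, dynamics, cost, n_samples=8192,
                horizon=40, temperature=0.05, noise_std=0.5):
    """One MPPI iteration: sample control noise, score rollouts, softmax-average.

    u_nominal: (horizon, udim) current control sequence.
    dynamics(x, u) -> next state; cost(x) -> scalar position-error cost.
    """
    udim = u_nominal.shape[1]
    noise = noise_std * np.random.randn(n_samples, horizon, udim)
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, u_nominal[t] + noise[i, t])
            costs[i] += cost(x)
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    return u_nominal + np.einsum('i,itk->tk', w, noise)
```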
4.5 Adaptation Performance in Unknown Wind Fields with a Drag Plate

To evaluate the ability of DATT to compensate for unknown disturbances, we test the Crazyflie in a high-wind scenario with three fans and a soft cardboard plate hanging below the vehicle body. Figure 3 shows this experimental setup. We note that this setup differs significantly from simulation: the placement of the fans and the soft cardboard plate creates highly dynamic and state-dependent force disturbances, as well as torque disturbances, yet in simulation we model only a force disturbance as a simple random walk. Nevertheless, our policy generalizes well zero-shot to this domain, as shown in Table 2.

Figure 3: Left: Crazyflie 2.1 with a swinging cardboard drag plate in an unsteady wind field. Right: Comparison between our methods with and without adaptation with the drag plate on a zigzag trajectory. With wind added, adaptation is needed; otherwise the drone crashes.

Table 2: Tracking error (in m) of DATT vs. baselines, with an attached plate and/or wind. Results are effectively for zero-shot generalization, as we do not model a plate, torque disturbances, or exact force disturbances in simulation.

Method                Smooth traj.     Smooth traj.       Infeasible traj.   Infeasible traj.
                      w/ plate         w/ plate & wind    w/ plate           w/ plate & wind
L1 adaptive control   0.163 ± 0.013    0.184 ± 0.020      crash              crash
L1-MPC                0.121 ± 0.010    0.181 ± 0.04       0.216 ± 0.028      0.243 ± 0.026
DATT (w/ d̂ = 0)       0.091 ± 0.040    0.118 ± 0.054      0.143 ± 0.031      crash
DATT-RMA              0.091 ± 0.049    0.115 ± 0.071      0.164 ± 0.051      0.193 ± 0.075
DATT                  0.063 ± 0.052    0.095 ± 0.053      0.122 ± 0.041      0.161 ± 0.056

In Table 2, we see that the baseline nonlinear adaptive controller is unable to track the infeasible trajectories, similar to the experiment without adaptation. Our method with adaptation enabled is able to track all the trajectories tested, with the lowest tracking error. We also verify that using L1 adaptive control results in better performance than using RMA. We note that this is due to a large sim2real gap in the adaptation network for RMA, which we discuss in the Appendix. Figure 3 shows the difference in tracking performance between our method with adaptive control and our method without, on an example zigzag trajectory with a drag plate. We see that our approach of integrating L1 adaptive control with our policy controller is effective in correcting the error introduced by the turbulent wind field and the plate. Our method performs better than L1-MPC without any knowledge of the target domain, and at a fraction of the compute cost. Figures 5 and 6 in the Appendix visualize the tracking performance of DATT vs. L1-MPC on a smooth and an infeasible trajectory, respectively.

5 Limitations and Future Work

Our choice of hardware presents some inherent limitations. The relatively low thrust-to-weight ratio of the Crazyflie (less than 2) means that we are unable to fly very agile or aggressive trajectories on the real drone or perform complex maneuvers such as a mid-trajectory flip. For this reason, we focused on xy-planar trajectories in this paper and did not vary the z direction. However, our method provides the framework for accurate tracking of any trajectory, and we note that we are able to perform a much larger range of agile maneuvers in simulation, including flips.

Our simulator is only an approximation of the true dynamics. For example, we model the lower-level angular velocity controller with a simplified first-order time-delay model, which limits sim2real generalization for very agile tasks.
Furthermore, our force disturbance model is highly simplified insim, which only approximates the highly time- and state-dependent force and torque disturbancesthe drone can encounter in reality. However, we show that we can already achieve good zero-shotgeneralization to a highly dynamic environment and challenging tasks.We also note that our training process has fairly high variance and can be sensitive to the hyperpa-rameters of the PPO algorithm, typical of RL. As seen in Appendix A, we use a few tricks for stablelearning, including fixing the reference trajectory for the first 2.5M training steps. Future work isneeded to understand the role of these architectural and training features and help inform the bestalgorithm design and training setup.8AcknowledgmentsWe would like to acknowledge the Robot Learning Lab at the University of Washington for pro-viding the resources for this paper. We would also like to thank the reviewers for their helpful andinsightful comments.References[1] D. Mellinger and V . Kumar. Minimum snap trajectory generation and control for quadrotors. In2011 IEEE International Conference on Robotics and Automation (ICRA) , pages 2520–2525.IEEE, 2011. URL http://ieeexplore.ieee.org/abstract/document/5980409/ .[2] T. Lee, M. Leok, and N. H. McClamroch. Geometric tracking control of a quadrotor uav on se(3). In 49th IEEE conference on decision and control (CDC) , pages 5420–5425. IEEE, 2010.[3] M. Faessler, A. Franchi, and D. Scaramuzza. Differential Flatness of Quadrotor DynamicsSubject to Rotor Drag for Accurate Tracking of High-Speed Trajectories. IEEE Robotics andAutomation Letters , 3(2):620–626, Apr. 2018. ISSN 2377-3766, 2377-3774. doi:10.1109/LRA.2017.2776353. URL http://arxiv.org/abs/1712.02402 . arXiv: 1712.02402.[4] G. Williams, N. Wagener, B. Goldfain, P. Drews, J. M. Rehg, B. Boots, and E. A. Theodorou.Information theoretic mpc for model-based reinforcement learning. In 2017 IEEE Interna-tional Conference on Robotics and Automation (ICRA) , pages 1714–1721. IEEE, 2017.[5] S. Sun, A. Romero, P. Foehn, E. Kaufmann, and D. Scaramuzza. A comparative study of non-linear mpc and differential-flatness-based control for quadrotor agile flight. IEEE Transactionson Robotics , 38(6):3357–3373, 2022. doi:10.1109/TRO.2022.3177279.[6] J. Hwangbo, I. Sa, R. Siegwart, and M. Hutter. Control of a Quadrotor with ReinforcementLearning. IEEE Robotics and Automation Letters , 2(4):2096–2103, Oct. 2017. ISSN 2377-3766, 2377-3774. doi:10.1109/LRA.2017.2720851. URL http://arxiv.org/abs/1707.05110 . arXiv:1707.05110 [cs].[7] E. Kaufmann, A. Loquercio, R. Ranftl, M. M ̈uller, V . Koltun, and D. Scaramuzza. DeepDrone Acrobatics. In Robotics: Science and Systems XVI . Robotics: Science and SystemsFoundation, July 2020. ISBN 978-0-9923747-6-1. doi:10.15607/RSS.2020.XVI.040. URLhttp://www.roboticsproceedings.org/rss16/p040.pdf .[8] B. Kiumarsi, K. G. Vamvoudakis, H. Modares, and F. L. Lewis. Optimal and autonomouscontrol using reinforcement learning: A survey. IEEE transactions on neural networks andlearning systems , 29(6):2042–2062, 2017.[9] G. Shi, X. Shi, M. O’Connell, R. Yu, K. Azizzadenesheli, A. Anandkumar, Y . Yue, and S.-J.Chung. Neural Lander: Stable Drone Landing Control using Learned Dynamics. 2019 In-ternational Conference on Robotics and Automation (ICRA) , pages 9784–9790, May 2019.doi:10.1109/ICRA.2019.8794351. URL http://arxiv.org/abs/1811.08027 . arXiv:1811.08027.[10] M. O’Connell, G. Shi, X. Shi, K. Azizzadenesheli, A. Anandkumar, Y . Yue, and S.-J. 
Chung.Neural-fly enables rapid learning for agile flight in strong winds. Science Robotics , 7(66):eabm6597, 2022.[11] Z. Wu, S. Cheng, P. Zhao, A. Gahlawat, K. A. Ackerman, A. Lakshmanan, C. Yang, J. Yu, andN. Hovakimyan. L1quad:L1adaptive augmentation of geometric control for agile quadrotorswith performance guarantees. arXiv preprint arXiv:2302.07208 , 2023.[12] D. Hanover, P. Foehn, S. Sun, E. Kaufmann, and D. Scaramuzza. Performance, precision, andpayloads: Adaptive nonlinear mpc for quadrotors. IEEE Robotics and Automation Letters , 7(2):690–697, 2022. doi:10.1109/LRA.2021.3131690.9[13] A. Spitzer and N. Michael. Inverting Learned Dynamics Models for Aggressive MultirotorControl. In Robotics: Science and Systems XV . Robotics: Science and Systems Foundation,June 2019. ISBN 978-0-9923747-5-4. doi:10.15607/RSS.2019.XV .065. URL http://www.roboticsproceedings.org/rss15/p65.pdf . arXiv: 1905.13441.[14] B. Morrell, M. Rigter, G. Merewether, R. Reid, R. Thakker, T. Tzanetos, V . Rajur, andG. Chamitoff. Differential Flatness Transformations for Aggressive Quadrotor Flight. In 2018IEEE International Conference on Robotics and Automation (ICRA) , pages 5204–5210, Bris-bane, QLD, May 2018. IEEE. ISBN 978-1-5386-3081-5. doi:10.1109/ICRA.2018.8460838.URL https://ieeexplore.ieee.org/document/8460838/ .[15] E. F. Camacho and C. B. Alba. Model predictive control . Springer science & business media,2013.[16] C. Yu, G. Shi, S.-J. Chung, Y . Yue, and A. Wierman. The power of predictions in onlinecontrol. Advances in Neural Information Processing Systems , 33:1994–2004, 2020.[17] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving withmodel predictive path integral control. In 2016 IEEE International Conference on Robotics andAutomation (ICRA) , pages 1433–1440. IEEE, 2016.[18] J. Pravitra, K. A. Ackerman, C. Cao, N. Hovakimyan, and E. A. Theodorou. L1-adaptivemppi architecture for robust and agile control of multirotors. In 2020 IEEE/RSJ InternationalConference on Intelligent Robots and Systems (IROS) , pages 7661–7666, 2020. doi:10.1109/IROS45743.2020.9341154.[19] K. Lee, J. Gibson, and E. A. Theodorou. Aggressive perception-aware navigation using deepoptical flow dynamics and pixelmpc. IEEE Robotics and Automation Letters , 5(2):1207–1214,2020. doi:10.1109/LRA.2020.2965911.[20] Y . Zhang, W. Wang, P. Huang, and Z. Jiang. Monocular vision-based sense and avoid of uavusing nonlinear model predictive control. Robotica , 37(9):1582–1594, 2019. doi:10.1017/S0263574719000158.[21] B. Michini and J. How. L1 Adaptive Control for Indoor Autonomous Vehicles: Design Pro-cess and Flight Testing. In Proceeding of AIAA Guidance, Navigation, and Control Con-ference , pages 5754–5768, 2009. URL https://arc.aiaa.org/doi/pdf/10.2514/6.2009-5754 .[22] C. D. McKinnon and A. P. Schoellig. Unscented external force and torque estimation forquadrotors. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) , pages 5651–5657, Daejeon, South Korea, Oct. 2016. IEEE. ISBN 978-1-5090-3762-9. doi:10.1109/IROS.2016.7759831. URL http://ieeexplore.ieee.org/document/7759831/ .[23] E. Tal and S. Karaman. Accurate Tracking of Aggressive Quadrotor Trajectories using Incre-mental Nonlinear Dynamic Inversion and Differential Flatness. In 2018 IEEE Conferenceon Decision and Control (CDC) , pages 4282–4288, Miami Beach, FL, Dec. 2018. IEEE.ISBN 978-1-5386-1395-5. doi:10.1109/CDC.2018.8619621. URL https://arxiv.org/abs/1809.04048 . ISSN: 0743-1546.[24] D. Zhang, A. Loquercio, X. 
Wu, A. Kumar, J. Malik, and M. W. Mueller. Learninga single near-hover position controller for vastly different quadcopters. arXiv preprintarXiv:2209.09232 , 2022.[25] C. K. Verginis, Z. Xu, and U. Topcu. Non-parametric neuro-adaptive coordination of multi-agent systems. In Proceedings of the 21st International Conference on Autonomous Agentsand Multiagent Systems , AAMAS ’22, page 1747–1749, Richland, SC, 2022. InternationalFoundation for Autonomous Agents and Multiagent Systems. ISBN 9781450392136.10[26] G. Torrente, E. Kaufmann, P. Foehn, and D. Scaramuzza. Data-Driven MPC for Quadrotors.IEEE Robotics and Automation Letters , 2021. ISSN 2377-3766, 2377-3774. doi:10.1109/LRA.2021.3061307. URL http://arxiv.org/abs/2102.05773 . arXiv: 2102.05773.[27] A. Spitzer and N. Michael. Feedback Linearization for Quadrotors with a Learned Accel-eration Error Model. In 2021 IEEE International Conference on Robotics and Automa-tion (ICRA) , pages 6042–6048, May 2021. doi:10.1109/ICRA48506.2021.9561708. URLhttps://ieeexplore.ieee.org/document/9561708 . ISSN: 2577-087X.[28] G. Shi, W. H ̈onig, X. Shi, Y . Yue, and S.-J. Chung. Neural-swarm2: Planning and control ofheterogeneous multirotor swarms using learned interactions. IEEE Transactions on Robotics ,38(2):1063–1079, 2021.[29] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid Motor Adaptation for Legged Robots,July 2021. URL http://arxiv.org/abs/2107.04034 . arXiv:2107.04034 [cs].[30] A. Molchanov, T. Chen, W. H ̈onig, J. A. Preiss, N. Ayanian, and G. S. Sukhatme. Sim-to-(Multi)-Real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors.arXiv:1903.04628 [cs] , Apr. 2019. URL http://arxiv.org/abs/1903.04628 . arXiv:1903.04628.[31] E. Kaufmann, L. Bauersfeld, and D. Scaramuzza. A Benchmark Comparison of Learned Con-trol Policies for Agile Quadrotor Flight, Feb. 2022. URL http://arxiv.org/abs/2202.10796 . arXiv:2202.10796 [cs].[32] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimizationalgorithms. CoRR , abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347 .[33] N. Hovakimyan and C. Cao. L1 Adaptive Control Theory: Guaranteed Robustness with FastAdaptation . Society for Industrial and Applied Mathematics, 2010.[34] J. A. Preiss, W. Honig, G. S. Sukhatme, and N. Ayanian. Crazyswarm: A large nano-quadcopter swarm. In 2017 IEEE International Conference on Robotics and Automation(ICRA) , pages 3299–3304, 2017. doi:10.1109/ICRA.2017.7989376.[35] A. Raffin, A. Hill, A. Gleave, A. Kanervisto, M. Ernestus, and N. Dormann. Stable-baselines3:Reliable reinforcement learning implementations. Journal of Machine Learning Research , 22(268):1–8, 2021. URL http://jmlr.org/papers/v22/20-1364.html .11A AblationsAblation Tracking error (sim) ( m)No body frame failedNo fixed intial reference 0.437±0.08No feedback term 0.077±0.011Feedforward horizon 1 ( H= 0.02s) failedFeedforward horizon 5 ( H= 0.3s) 0.240±0.008Feedforward horizon 10 ( H= 0.6s) (used in main experiments) 0.055±0.007Feedforward horizon 15 ( H= 0.9s) 0.073±0.010Feedforward horizon 20 ( H= 1.2s) 0.101±0.018Base policy (no ablation) 0.046Table 3: Tracking error (in m), in simulation, of various ablations after 15M training steps. Failedindicates the drone diverges from the reference trajectory. Tracking error is with respect to infeasiblezigzag trajectories. The ablations are done without adaptation, and with no disturbances in theenvironment. 
Five runs were attempted for each ablation.

We test various ablations of our primary method, with results shown in Table 3. In particular, we test:

• No body frame: With our training setup, we found that transforming all state inputs (except the orientation) into the body frame was necessary for accurate trajectory tracking. This ablation tests our method with the position p, velocity v, and reference positions in the world frame instead of the body frame.
• No fixed initial reference: This ablation removes the initial 2.5M training steps during which we do not randomize the reference trajectory. We see that PPO converges to much worse tracking performance. We note that the choice of the initial fixed reference does not have much impact on the variance of training, only the existence of the fixed reference.
• No feedback term: We remove the feedback term R^⊤(p_t − p^d_t) from our controller inputs. This term might appear redundant with the reference trajectory, but we find that explicitly conditioning on the feedback error consistently results in slightly more accurate tracking.
• Feedforward horizon: We test varying sizes of our feedforward horizon. In Table 3, feedforward horizon N refers to passing in N future reference positions. As described in Section 3.2, we linearly space the N reference positions across time from t to t + H.
• Base policy: For comparison, we list the tracking error in sim of the main policy used in our experiments section.

Adaptive Control in Simulation. As seen in Table 4, in simulation, using RMA as the adaptive control strategy actually yields slightly better performance than L1 adaptive control. However, on the real drone, as reported in Table 2, RMA performs significantly worse than L1 adaptive control, indicating a significant sim2real gap. This is likely because the adaptation network in DATT-RMA is highly susceptible to the domain shift in its state-action-pair inputs on the real drone, while the closed-loop nature of L1 guarantees fast disturbance estimation for any state-action pairs.

Table 4: Tracking error (in m), in simulation, of standard DATT (using L1 adaptive control) and DATT-RMA with random force disturbances.

Method                       Tracking error (sim) (m)
DATT (w/ disturbances)       0.062 ± 0.011
DATT-RMA (w/ disturbances)   0.055 ± 0.009

B Training Details and Network Architecture

Training is done with the PPO implementation in the Stable Baselines3 library [35]. All PPO parameters are left at their defaults.

The feedforward encoder architecture consists of three 1-D convolution layers with ReLU activations that project the reference positions into a 32-dimensional representation for input to the main policy. Each 1-D convolution has 16 filters with a kernel size of 3. The main policy network is a 3-layer MLP with 64 neurons per layer and ReLU activations, and the value network shares this structure. A sketch of this architecture is given below.
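A minimal PyTorch sketch of the encoder and policy described above; the input/output shapes (10 reference positions, a 48-dimensional policy input) and the final linear projection are inferred from the text and should be treated as assumptions.

```python
import torch.nn as nn

class FeedforwardEncoder(nn.Module):
    """φ: three Conv1d layers (16 filters, kernel 3) -> 32-dim embedding."""

    def __init__(self, n_ref=10, out_dim=32):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Assumed projection to the 32-dim representation; padding choice is ours.
        self.proj = nn.Linear(16 * n_ref, out_dim)

    def forward(self, ref):             # ref: (batch, 3, n_ref) body-frame errors
        return self.proj(self.convs(ref).flatten(1))

policy = nn.Sequential(                 # 3-layer MLP, 64 units per layer
    nn.Linear(32 + 13 + 3, 64), nn.ReLU(),  # embedding + state (p, v, q, err) + d̂
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),                       # outputs f_Σ,des and ω_des
)
```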
C Reference Trajectory Details

Figure 4: Left: Example of a random zigzag trajectory (infeasible). Right: Example of a random chained polynomial trajectory (smooth). (Plots of x and y position over a 10 s window.)

C.1 Smooth Trajectory

For smooth trajectories, we include a mix of degree-5 polynomials and chained polynomials. Polynomials start at x = 0 and y = 0 and return to the origin after 10 s, corresponding to our episode length. They are randomly generated by randomly selecting initial and end conditions. Chained polynomials are a series of random polynomials. We generate these trajectories by randomly selecting "nodes" at x = 0 and y = 0 at random times between 0 s and 10 s, and fitting degree-5 polynomials between the nodes, ensuring that first-, second-, and third-order derivatives are continuous at each node. Note that these trajectories are not guaranteed to be feasible, although in practice they are easy to track as they are highly smooth.

C.2 Infeasible Trajectory

We use a class of what we refer to as zigzag trajectories. We generate these trajectories by randomly selecting time intervals between 0.5 and 1.5 seconds, randomly generating waypoints after each time interval, and linearly connecting the waypoints. The waypoints can vary from −1 m to 1 m in both the x and y directions. By training on these zigzags, we are able to generalize well to a wide variety of trajectories, including the polygons and stars seen in Figure 1, which are similar to random zigzags. A minimal generation sketch is given at the end of this appendix.

C.3 Additional Figures of Results

We show additional figures for the results in Table 2. Figure 7 shows the values of the predicted d̂ over time in an environment with wind versus one without wind. Figures 5 and 6 show our tracking performance against L1-MPC on a smooth and an infeasible trajectory, respectively.

Figure 5: Performance of DATT against L1-MPC on a smooth trajectory with both wind and a plate attached.

Figure 6: Performance of DATT against L1-MPC on an infeasible trajectory with both wind and a plate attached.

Figure 7: Predicted d̂ terms on two infeasible trajectories, one with wind, one without wind but with an air drag plate.
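As referenced in C.2, a minimal sketch of the zigzag generation procedure; starting at the origin and the exact interpolation details are our assumptions.

```python
import numpy as np

def random_zigzag(duration=10.0, dt=0.02):
    """Sample a zigzag reference: waypoints in [-1, 1] m (x and y), reached at
    random 0.5-1.5 s intervals, linearly interpolated at the 50 Hz control rate.
    """
    times, points = [0.0], [np.zeros(2)]      # assume the trajectory starts at 0
    while times[-1] < duration:
        times.append(times[-1] + np.random.uniform(0.5, 1.5))
        points.append(np.random.uniform(-1.0, 1.0, size=2))
    t = np.arange(0.0, duration, dt)
    times, points = np.array(times), np.array(points)
    x = np.interp(t, times, points[:, 0])
    y = np.interp(t, times, points[:, 1])
    return np.stack([x, y], axis=1)           # desired positions p_d over time
```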
RcZMI8MSyE | Large Language Models as General Pattern Machines

Suvir Mirchandani1, Fei Xia2, Pete Florence2, Brian Ichter2, Danny Driess2,3, Montserrat Gonzalez Arenas2, Kanishka Rao2, Dorsa Sadigh1,2, Andy Zeng2
1 Stanford University, 2 Google DeepMind, 3 TU Berlin
https://general-pattern-machines.github.io

Abstract: We observe that pre-trained large language models (LLMs) are capable of autoregressively completing complex token sequences—from arbitrary ones procedurally generated by probabilistic context-free grammars (PCFG), to richer spatial patterns found in the Abstraction and Reasoning Corpus (ARC), a general AI benchmark, prompted in the style of ASCII art. Surprisingly, pattern completion proficiency can be partially retained even when the sequences are expressed using tokens randomly sampled from the vocabulary. These results suggest that without any additional training, LLMs can serve as general sequence modelers, driven by in-context learning. In this work, we investigate how these zero-shot capabilities may be applied to problems in robotics—from extrapolating sequences of numbers that represent states over time to complete simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to drive low-level control may provide an exciting glimpse into how the patterns among words could be transferred to actions.

Keywords: large language models, in-context learning, language for robotics

1 Introduction

Large language models (LLMs) are trained to absorb the myriad of patterns that are woven into the structure of language. They not only exhibit various out-of-the-box capabilities such as generating chains of reasoning [1,2], solving logic problems [3,4], and completing math puzzles [5], but also have been applied in robotics, where they can serve as high-level planners for instruction following tasks [6,7,8,9,10,11,12], synthesize programs representing robot policies [13,14], design reward functions [15,16], and generalize user preferences [17]. These settings rely on few-shot in-context examples in text prompts that specify the domain and input-output format for their tasks [18,19], and remain highly semantic in their inputs and outputs.

Fig. 1: LLMs out-of-the-box can complete (highlighted) complex ARC patterns [20] expressed in arbitrary tokens. (The figure shows three input/output grid pairs, the third expressed with arbitrary tokens such as +#, B, @, and 慶.)

A key observation of our work—and perhaps contrary to the predominant intuition—is that an LLM's ability to represent, manipulate, and extrapolate more abstract, nonlinguistic patterns may allow LLMs to serve as basic versions of general pattern machines. To illustrate this idea, consider the Abstraction and Reasoning Corpus [20], a general AI benchmark that contains collections of 2D grids with patterns that evoke abstract concepts (e.g., infilling, counting, and rotating shapes). Each problem provides a small number of input-output examples, followed by test input(s) for which the objective is to predict the corresponding output.
Most methods (based on program synthesis) are manually engineered with domain-specific languages [21,22,23,24] or evaluated on simplified extensions or subsets of the benchmark [25,26,27]. End-to-end machine learning methods only solve a handful of test problems [28]; however, our experiments indicate that LLMs in-context prompted in the style of ASCII art (see Fig. 1) can correctly predict solutions for up to 85 (out of 800) problems—exceeding some existing recent systems [21,22,24]—without additional model training or fine-tuning. Surprisingly, we find this extends beyond ASCII numbers, and that when they are replaced with a mapping to randomly sampled tokens in the vocabulary, LLMs can still generate some valid solutions.

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Fig. 2: Pre-trained LLMs out-of-the-box may serve as basic versions of general pattern machines that can recognize and complete sequences of numeric or arbitrary (symbolic) tokens expressing abstract problems in robotics and sequential decision-making. Experiments show that to an extent, LLMs can in-context learn (i) sequence transformations (e.g., to reason over spatial rearrangements of symbols, for dynamics modeling and next state prediction on downsampled images), (ii) completion of simple functions (e.g., to extrapolate kinesthetic demonstrations), or (iii) meta-patterns to improve return-conditioned policies (e.g., to discover oscillatory behaviors to stabilize a CartPole).

These results suggest an intriguing insight: that LLMs may exhibit more general capabilities of representing and extrapolating symbolic patterns, invariant to the specific tokens involved. This is in line with—and complementary to—recent observations that using random or abstract label mappings for in-context classification retains some performance compared to ground-truth labels [29,30].

We hypothesize that the capabilities that drive pattern reasoning on the ARC may allow general pattern manipulation at various levels of abstraction useful for robotics and sequential decision making [31,32], wherein a diverse array of problems involve patterns that may be difficult to reason about precisely in words. For example, a procedure for spatially rearranging tabletop objects could be represented using arbitrary tokens (see Fig. 2). As another example, optimizing a trajectory with respect to a reward function can be framed as extrapolating a sequence consisting of state and action tokens with increasing returns. Orthogonal and complementary to efforts that develop multi-task policies by pre-training on large amounts of robot data [33], or robotics foundation models [34] that can be fine-tuned for downstream tasks [35,36,37], our goal is instead to (i) assess the zero-shot capabilities that LLMs may already contain to perform some degree of general pattern manipulation, and (ii) investigate how these abilities can be used in robotics. These capabilities are certainly not sufficient to replace specialized algorithms; nonetheless, they are useful to characterize, and doing so may help inform priorities for training generalist models in robotics.

We assess LLMs as pattern machines categorized into three areas: sequence transformation, sequence completion, and sequence improvement (Fig. 2). First, we show that LLMs are capable of generalizing certain sequence transformations of increasing complexity with a degree of token invariance, and posit that this can carry over to spatial reasoning capabilities in robotic tasks.
Next, we assess LLMs' ability to complete patterns from simple functions (e.g., sinusoids) and show this can be applied to robotic tasks like extending a wiping motion from kinesthetic demonstrations, or drawing patterns on a whiteboard. The combination of in-context sequence transformation and extrapolation further enables LLMs to do basic forms of sequence improvement. We show that providing reward-labeled trajectories as context, coupled with online interaction, can enable an LLM-based agent to learn to navigate through a small grid, discover a simple CartPole controller, and optimize simple trajectories via human-in-the-loop "clicker" reward training. Code, benchmarks, and videos are made available at https://general-pattern-machines.github.io.

2 Related Work

In-Context Learning. Pattern reasoning by prompting pre-trained LLMs with few-shot input-output examples is driven by in-context learning [38,39]. The examples serve as a form of task specification, where the model is expected to complete further instances of the task by predicting what comes next. In-context learning extends the concept of "task prefixes" (predefined token sequences, e.g., [40]), but with examples swapped in instead. Brown et al. [39] observe that it improves (in particular, out-of-distribution generalization) from scaling model size. This is in contrast to scaling models for pre-training + fine-tuning, which has been shown to not necessarily improve OOD generalization on language tasks [41]. Nonetheless, despite compelling OOD generalization abilities, in-context learning still comes at a cost, as it continues to lag behind in terms of absolute performance on benchmarks compared to task-specific fine-tuning [38,42].

Explanations of In-Context Learning. In-context learning is explicitly trained for by packing examples from the same task and dataset into a context buffer that is fed as input to an LLM with an unsupervised autoregressive objective [39], sometimes referred to as meta-training. However, it can also emerge implicitly from training on datasets where tokens exhibit a Zipfian distribution [43] on Transformer architectures, but not necessarily with recurrent architectures (e.g., vanilla RNNs) [43]. Other works have shown that in-context learning with Transformers can learn simple function classes on par with least squares [44,45,46], and can generalize to a seemingly unbounded number of tasks (when trained on tasks from the same task family) better than multitask MLPs [47], with Bayesian interpretations of this phenomenon [48,49].

In-Context vs. In-Weights Learning. In-context learning occurs during inference without gradient updates to the model weights, and can be differentiated from in-weights learning, which relies on information stored in the model weights during LLM training [50] (and can be useful for completion tasks such as "Abraham Lincoln was born in"). Chan et al. [50] observe that generalization of in-context learning can be characterized as more "exemplar-based" (on the basis of similarity to in-context examples [51]), as opposed to generalization of in-weights learning, which tends to be more "rule-based" (on the basis of minimal features that support category boundaries in the training data [52]). The vast capabilities of LLMs [39,53,54,55,56] have been driven by a combination of both forms of learning.
In this work, we are particularly interested in in-context learning, and (depending on the task) using the semantic priors of numeric tokens to drive capabilities such as sequence completion (Section 5) and improvement (Section 6).

LLMs and Robotics. LLMs have been applied across several areas in robotics—such as decomposing high-level task descriptions to mid-level plans [6,7,57,58,59,60], robot code [13,17,14,61], and planning domain definition languages [10]. These methods leverage semantic priors stored in LLMs to compose plans or parameterize primitive APIs, but whether LLMs can directly influence control (e.g., at the level of trajectories) in a zero-shot manner remains an open problem. We explore how pattern reasoning capabilities of LLMs may drive various control tasks, to extend or optimize low-level sequences. While it is possible to explicitly train models for these capabilities [62,63,64,65], this work focuses on the inherent abilities of LLMs out-of-the-box, which may have implications for the role of language pre-training for building embodied AI systems. Related to our work are [42], which studies how LLMs perform on non-language classification and regression tasks; [66], which examines analogical reasoning in various text tasks; and [67], which studies how LLMs can represent a rollout policy and world model in-context and then uses Q-learning to drive policy improvement across a collection of toy environments with linguistic representations. Our use of LLMs for sequence improvement can be seen as a simplification of in-context policy iteration that supports learning from demonstrations and in-context RL, driven by the generality of LLMs as pattern machines.

3 Language Models as General Pattern Machines

The capacity of LLMs to act as general pattern machines is driven by their ability to perform in-context learning on sequences of numeric or arbitrary tokens. An LLM typically represents sequence modeling autoregressively, with a decoder-only Transformer [68], by factorizing the probability of a sequence x, which is a sequence of symbols (s_1, ..., s_n), into the product of conditional probabilities p(x) = ∏_{i=1}^{n} p(s_i | s_1, ..., s_{i−1}). To perform in-context learning, the model can be conditioned with a prompt that provides the initial tokens in the sequence s_{1:k} = (s_1, ..., s_k) and uses the model to complete s_{k+1:n}.

The adaptability of in-context learning lies in the amount of flexibility that can be packed into s_{1:k}—this prompt sequence can itself contain many sequences, each an input-output pair, and perhaps additional task conditioning [38,29]. Specifically, a model can in-context learn to complete a prompt which is a set of N examples s_{1:k} = (x^1, x^2, ..., x^N), where each x^i is a variable-length sequence (s^i_1, s^i_2, ..., s^i_{m_i}).

Rather than investigating in-context learning with natural language tasks [39], in this work we are interested in investigating more abstract notions of non-linguistic patterns. The following sections evaluate these capabilities across LLMs, and show how they can be used in robotics.
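As a concrete illustration of this setup, the minimal sketch below (ours, in Python) packs N input-output examples plus a query input into a single prompt string s_{1:k} for the LLM to complete; the separators and the digit-sequence task anticipate the example used in Section 4:

def build_prompt(examples, query_input):
    # Each example x^i = (input tokens, output tokens). Tokens are joined
    # with spaces so common BPE tokenizers keep one token per element (see
    # the tokenization observation in Section 4); "," separates an input
    # from its output, and ";" delineates examples.
    pairs = [" ".join(x_in) + ", " + " ".join(x_out) for x_in, x_out in examples]
    return "; ".join(pairs) + "; " + " ".join(query_input) + ", "

examples = [("5 3 0".split(), "3 5".split()),
            ("7 6 1".split(), "6 7".split()),
            ("9 2 3".split(), "2 9".split())]
prompt = build_prompt(examples, "4 8 5".split())
# prompt == "5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5, "
# A general pattern machine should complete it with "8 4".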
By varying the notion of what each x^i should be, we can characterize in-context pattern learning capabilities into the following 3 categories.

• Sequence Transformation (Section 4): each x^1, ..., x^{N−1} is a sequence-to-sequence input-output pair; i.e., x^i = (x^i_input, x^i_output), each subsequence of variable length, and x^N is the query input (x^N_input).

• Sequence Completion (Section 5): rather than containing input-output pairs, and rather than containing many examples of different sequences, the prompt x = (s_1, ..., s_k) corresponds to discrete samples from a single function, e.g., of the form s_i = a · sin(bi), which can be extrapolated.

• Sequence Improvement (Section 6): each x^1, ..., x^{N−1} is a collection of trajectories (potentially labeled with corresponding total rewards), and x^N prompts the model to "improve" the sequences by inferring a better one, e.g., with least-to-most prompting [69]—this process can be iterative and applied to a variety of formulations, e.g., offline trajectory optimization or online in-context reinforcement learning.

4 Sequence Transformation

LLMs are capable of in-context learning the distribution of functions that represent sequence transformations by completing abstract patterns observed among examples of input-output sequences x^i = (x^i_input, x^i_output) of arbitrary tokens, each drawn from a fixed alphabet A. For example, suppose that we are given a string of input-output examples such as "5 3 0, 3 5; 7 6 1, 6 7; 9 2 3, 2 9; 4 8 5, ". Here A consists of tokens that represent space-prefixed digits 0–9, a comma token to separate inputs from outputs, and a semicolon token to delineate examples from each other. A general pattern machine should infer the completion "8 4" by recognizing that the pattern is to swap the first 2 tokens, then remove the 3rd.

Method | Total (of 800)
(g4) gpt-4-0613 | 77
(d3) text-davinci-003 | 85
(d3) w/ random A† | 44 ± 6
(d2) text-davinci-002 [53] | 64
(p) PaLM [55,56] | 42
(d1) text-davinci-001 [39] | 11
(d1) finetuned | 9
Ainooson et al., 2023 [23]∗ | 130
Kaggle 1st Place, 2022 [70]# | 164
Xu et al., 2022 [22]†† | 57
Alford et al., 2021 [24]∗∗ | 22
Ferré et al., 2021 [21] | 32

† Numbers averaged across 5 randomly sampled alphabets. ∗ Based on brute-force search over a hand-designed DSL. # Reported out of 400 train tasks, among 3 candidates. †† Reported out of a subset of 160 object-oriented problems. ∗∗ Based on program synthesis, out of 36 symmetry tasks.

Tab. 1: LLMs out-of-the-box can solve a non-trivial number of ARC problems.

We use the ARC [20] to evaluate LLMs on such sequence transformations that are substantially more complex, covering a range of abstract spatial tasks: infilling, counting, rotating shapes, etc. Each task has input-output examples (3.3 on average), and 1-3 test inputs which can be represented as 2D grids. Input and output sizes may differ. LLMs can be used for the ARC by flattening grids and predicting output grid items in row-major order, which naturally supports variable-length outputs. While LLMs are not specifically trained for rasterizing spatial outputs, we hypothesize that a general pattern machine would be capable of implicitly recognizing long-range dependencies between rows (using positional encoding as a bias [71]) to pick up patterns that extend across the 2nd dimension.

Result: ARC benchmark. Table 1 shows that LLMs (PaLM, InstructGPT series in acronyms d1-d3) prompted with input grids represented as tokens drawn from an alphabet of digits can correctly infer solutions for up to 85 problems.
Surprisingly, this outperforms some recent systems [21,22] based on program synthesis that use manually engineered domain-specific languages (DSLs). While LLMs have yet to surpass brute-force search [23] to compose functions from a handcrafted API of grid operators, they do exhibit non-trivial performance. (We address the important caveat that parts of the ARC may be present in the training data of LLMs later below.) Note that while we are concerned with LLM performance over raw patterns, concurrent work finds improvements via object representations [72] and hypothesis search [73].

Observation: consistent tokenization matters. The ARC can be found among the suite of tasks in BIG-Bench [74], but has often been overlooked since many language models appear to perform poorly (near or at zero performance). We observe this occurs due to the formatting of the benchmark, where grid elements are represented as neighboring characters, i.e., "8686" (instead of "8 6 8 6"). While subtle, this difference is enough for certain Byte-Pair Encoding (or SentencePiece) tokenizers [75,76] (that do not tokenize per digit) to group multiple grid elements ("8" and "6") into a single token ("86") which maps to a different token embedding. This causes inconsistencies with how patterns are expressed at the token level. For example, given a task expressed as "8686, 6868; 7979, ", if the tokenizer groups pairs of digits 86, 68, 79, respectively, the sequential inductive patterns of the task (to swap and repeat individual digits) are lost. A simple work-around is to directly pass token indices or embeddings to the language model, or use token alphabets unlikely to be grouped together (which involves some knowledge about the tokenizer). Even beyond the ARC, we observe it is beneficial to tokenize consistently with the pattern being represented.

Observation: token mapping invariance. The hypothesis that LLMs can serve as general pattern machines stems from the observation that they can still solve a non-trivial number of ARC problems using alphabets A sampled randomly from the LLM's token vocabulary. For instance, given a particular alphabet: {8 ↦ falls, 6 ↦ +#, 7 ↦ Ul, 9 ↦ Chev, 3 ↦ 慶, 2 ↦ 2010}, a pattern machine at sufficient proficiency can be expected to complete the prompt "falls +# falls +#, +# falls +# falls; Ul Chev Ul Chev, Chev Ul Chev Ul; 慶 2010 慶 2010, " by predicting "2010 慶 2010 慶". For example, text-davinci-003 [53,39] with the following mapping A = {0 ↦ offence, 1 ↦ Subject, 2 ↦ Lub, 3 ↦ Fail, 4 ↦ Chev, 5 ↦ symb, 6 ↦ swung, 7 ↦ Ul, 8 ↦ escalate, 9 ↦ Chromebook} solves 52 ARC problems, and across 5 random alphabets solves an average of 43.6 problems. Interestingly, we find that token mapping invariance holds to an extent on patterns over randomly sampled embeddings as well (not associated with any token in the vocabulary; see Appendix A.3).

The implications of token mapping invariance are two-fold. First, note that it is possible that parts of the ARC are present in the LLM's training data (i.e., due to contamination). Thus, measuring the performance of LLMs under random alphabets may provide a closer estimate of their underlying sequence transformation capabilities. (As further evidence that these abilities are not simply due to memorization, we provide a new procedurally-generated pattern transformation benchmark described below.) Second, we hypothesize that the pattern manipulation capabilities implied by token invariance could help drive positive transfer from patterns learned across Internet-scale language data to new modalities or symbolic representations for robot reasoning. As an example, (i) Fig. 10 (top) in the Appendix shows a grasp (Skittles) detector which outputs target coordinates within a downsampled image (with 6 in-context examples), and (ii) Fig. 10 (bottom) shows spatial rearrangement via predicting simple forward dynamics where the red bowl moves to the green plate (with 9 in-context examples of downsampled images as inputs and outputs). The generality of what the arbitrary tokens could represent may allow pattern transformation capabilities—especially as LLMs improve—to be leveraged at various levels of abstraction in robotics (e.g., pixels or joint positions).
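A quick way to check the tokenization point above is sketched below (ours; we use tiktoken's GPT-2 encoding as one example tokenizer, and the exact merges depend on the encoding used):

import tiktoken

def serialize(elements):
    # One space-separated token per grid element, e.g. [8, 6, 8, 6] -> "8 6 8 6".
    return " ".join(str(e) for e in elements)

enc = tiktoken.get_encoding("gpt2")
fused = [enc.decode([t]) for t in enc.encode("8686")]
split = [enc.decode([t]) for t in enc.encode(serialize([8, 6, 8, 6]))]
# With a BPE of this kind, "8686" may merge adjacent digits (e.g., into "86"
# chunks), while "8 6 8 6" keeps one digit per token, so per-element
# patterns survive at the token level.
print(fused, split)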
Method | Accuracy (%)
(d3) text-davinci-003 | 75
(d3) w/ random A† | 58 ± 1
(p) PaLM [55,56] | 74
(d2) text-davinci-002 [53] | 69
(d1) text-davinci-001 [39] | 60
(c1) text-curie-001 | 54
(b1) text-babbage-001 | 50
(a1) text-ada-001 | 39

† Numbers averaged across 5 randomly sampled alphabets.

Tab. 2: LLMs of varying sizes are capable of completing patterns procedurally generated with PCFG, averaged over a range of k and w.

Result: PCFG benchmark. The ARC is a difficult benchmark, and the performance falloff can be steep (and relatively uninformative) across LLMs with decreasing model size and data scale, making it difficult to measure incremental progress towards pattern machines that could be used for sequence transformation in robotics. Therefore, we introduce an adjustable-difficulty benchmark, where the transformations are procedurally generated using the probabilistic context-free grammar (PCFG) in Hupkes et al. [77]. These transformations include a collection of lexical rules that may be composed (e.g., reverse, shift, swap, repeat, echo, etc.) over the tokens in the input sequence x^i_input to generate x^i_output. Example transformations are given in Table 4 in the Appendix. The complexity of these transformations can be controlled by varying the number of tokens k used to express sequences x^i = (s_1, ..., s_k), and the number of lexical rules w used to define the transformation. This is simply the identity function when w = 0, and progressively appears more complex as w → ∞. Table 2 aggregates PCFG pattern completion accuracy across different LLMs over sequence length k = [1, 2, 4, 8, 16, 32] and complexity w = [0, 1, 3, 7, 15, 31], each with 100 runs (see Appendix A.4 for ablations of k, w). This benchmark provides a more unbiased evaluation of pattern reasoning capabilities in LLMs; PCFG completion accuracy improves with model scale, and correlates with ARC performance. We use PCFG for evaluation only (rather than for training [77,78]) so that one can measure how pre-training regimes or modalities may improve general pattern capabilities across sequence transformations. We have released the PCFG benchmark.
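A minimal sketch of how such tasks can be generated (ours, for illustration only; the released benchmark follows Hupkes et al. [77], and the rule semantics below are simplified stand-ins for the real operators):

import random

# A few of the lexical rules by name (see Tab. 3 in the Appendix for the
# full operator set).
RULES = {
    "reverse": lambda s: s[::-1],
    "shift":   lambda s: s[1:] + s[:1],                    # rotate by one
    "swap":    lambda s: s if len(s) < 2 else s[-1:] + s[1:-1] + s[:1],
    "repeat":  lambda s: s + s,
    "echo":    lambda s: s + s[-1:],                       # repeat last token
}

def random_pcfg_task(k, w, n_examples=3, seed=0):
    # Compose w randomly chosen rules into one transformation over k-token
    # sequences; w = 0 is the identity, and difficulty grows with k and w.
    rng = random.Random(seed)
    ops = [rng.choice(sorted(RULES)) for _ in range(w)]
    def transform(seq):
        for op in ops:
            seq = RULES[op](seq)
        return seq
    inputs = [[str(rng.randrange(10)) for _ in range(k)] for _ in range(n_examples)]
    return [(x, transform(x)) for x in inputs]

Feeding each (input, output) pair through a prompt builder like the one sketched in Section 3 yields the evaluation context for a task instance.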
5 Sequence Completion

Completion of sinusoids. We start with a simple example where LLMs extrapolate a function of the form f(x) = a · sin(bx). As in Section 4, tokenization matters; we found it effective to discretize outputs among integers 0–100, as these integers are represented by single tokens in the tokenizers of the LLMs we tested.

Fig. 3: LLMs (d3 shown) can extrapolate various functions y = a · sin(bx) (top row), y = ax · sin(bx) (middle row), and y = a2x · sin(bx) (bottom row) given varying amounts of context. Overall, larger models make better predictions with lower error rates (right column). More context also helps prediction accuracy (light vs. dark). (Error panels cover models a1, b1, c1, d1, d2, d3.)

Fig. 3 shows completions of the sine wave by text-davinci-003 over 11 trials given 3 and 5 periods as context, as well as average distance (computed by Dynamic Time Warping) of the generated predictions to the ground truth function values across several LLMs. Multiple LLMs produce near-perfect continuations of the sine wave, especially with more context (i.e., more periods of the sine wave). We additionally test the function family ax · sin(bx)—in which the amplitude of the oscillations increases with x-values. Here, the LLM must extrapolate to new values unseen in the context, which highlights the utility of using a metric space for the outputs (0–100) where the LLM has priors over the scale of the different tokens. These functions also contain a "meta-pattern": the y-values increase, decrease, and then increase in a single period—and the amplitude of the function also increases over time. This is a form of least-to-most prompting [69], an ability we find useful later for sequence improvement in Section 6. We also test the function a2x · sin(bx). Across these three functions, we observe that greater context and larger scale LLMs yield higher quality predictions.

Completion of periodic motions. We emphasize that the sequence completion capability above is domain-agnostic—i.e., we do not use any specialized prompts explaining what function should be completed, nor do we provide any linguistic grounding for the metric tokens. We can therefore operationalize this zero-shot capability of LLMs to simple open-loop motion extrapolation problems in robotics, e.g., by encoding a series of positions sampled from a demonstration, and predicting future positions. We test two simple tasks on a mobile robot manipulator: Table Sweeping and Whiteboard Drawing (both shown in Fig. 2).

In Table Sweeping, the goal is to continue a human-provided kinesthetic demonstration of sweeping a portion of a table (see middle of Fig. 2). We encode the demonstration as a series of end-effector poses at approximately 3 Hz. Each demonstration lasts roughly 20-30 seconds. We represent the 7-dim end-effector pose as a concatenation of Cartesian position and the quaternion, where each value is binned to an integer between 0 and 100, and the dimensions are delimited by spaces. We collect 30 demonstrations that demonstrate the sweeping motion. Note that demonstrations in this task are noisier and higher dimensional than the stylized sinusoid functions above. For each demonstration, we construct a context to consist of the first two-thirds of the provided demonstration, and treat the last one-third as the ground truth for the LLM to predict. Larger models quantitatively perform better with generally lower variance (see Appendix).

In Whiteboard Drawing, the goal is to continue a scripted demonstration of drawing loops on a whiteboard (see Fig. 2). Loops are defined by parametric equations of the form x = a_x cos(bt) + d_x and y = a_y sin(bt) + c_y t + d_y. We execute the motions using position control and record the end-effector positions at 5 Hz, then discretize states between 0 and 300, as finer motion is needed for this task. We provide part of the loop pattern in-context, and assess the ability to extrapolate from 2 loops to do a third loop. LLMs, e.g., text-davinci-003, perform well—we show more completions with different loop styles in the Appendix.
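A minimal sketch of the discretize-and-prompt recipe used throughout this section (ours; the bin counts follow the text, while the exact prompt delimiters are an assumption):

import numpy as np

def discretize(values, lo, hi, bins=101):
    # Map continuous values to integer tokens 0..bins-1 (0-100 here), which
    # are single tokens in the tokenizers used in the paper.
    q = np.clip((np.asarray(values, float) - lo) / (hi - lo), 0.0, 1.0)
    return np.round(q * (bins - 1)).astype(int)

def completion_prompt(ys, lo, hi):
    # Serialize discretized samples of a single function or trajectory; the
    # LLM is then asked to continue the sequence, with no task description
    # and no linguistic grounding for the tokens.
    return ", ".join(str(t) for t in discretize(ys, lo, hi)) + ", "

xs = np.linspace(0, 6 * np.pi, 120)  # roughly 3 periods of context
print(completion_prompt(np.sin(xs), -1.0, 1.0)[:40], "...")

For multi-dimensional poses, each dimension is binned the same way and the dimensions are delimited by spaces, per the Table Sweeping description above.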
6 Sequence Improvement

In this section, we explore the synergies between sequence transformation and completion—and investigate improving a sequence, such as trajectories in a sequential decision process, along some metric, such as a reward function. Here, we use an LLM to generate new sequences x^N conditioned on previous sequences (x^1, ..., x^{N−1}), which can represent previous iterations of the same sequence (or the policy it represents). The improvement can also be return-conditioned, given a reward function r(·). By inserting as the first token(s) of each sequence its corresponding total reward x = (r(x), s_1, ..., s_k), we can prompt the model to conditionally "improve" by "just asking" [79] for a higher reward than those seen in-context (i.e., prompting LLMs to act as Decision Transformers [80]). New "rollouts" can yield new reward labels that then replace the original desired rewards with actual rewards. Iteratively performing this inference and accumulating trajectories may jointly use the model's general notion of pattern transformation and extrapolation to perform improvement of sequences, which can be represented by numeric or symbolic tokens. Note that there are practical considerations, e.g., depending on the task or model, not all sequences can fit in context, so options could be to keep the most recent, or the ones with the highest rewards if available (see Appendix for more discussion). In this section, we perform a series of targeted experiments on simple tasks, aiming to explore the possibility of using LLMs for sequence improvement in trajectory and policy optimization.

Fig. 4: LLM agents can generate new trajectories with increasing returns for a Marker in Cup task (right). Performance varies with different ways of building the context (left).

Extrapolating simple meta-patterns among trajectories. Sequence improvement with LLMs enables a simple form of trajectory optimization for a Marker in Cup task on a Franka Panda robot, where we define the prefixed reward of a trajectory to be the negative distance between the final end-effector position and the cup (normalized between 0–100), and initialize the context with a collection of trajectories (stopping at 20%, 40%, 60%, and 80% of the way to the cup), delimited by newlines and prefixed by rewards (ranging roughly from 70-90; see Appendix). For this task, we represent trajectories as sequences of Cartesian positions, each dimension normalized between 0–100. We find that text-davinci-003, to an extent, is able to generalize the pattern and generate a trajectory that achieves a reward > 90. For this extrapolation to occur, we observe that meta-patterns in the context are crucial: in Fig. 4 (left), we compare the average reward achieved by text-davinci-003 over 11 trials (each with a different goal position) given contexts with different trajectory orderings (sorted by reward, randomly permuted, or with/without reward annotations).
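A sketch of this reward-prefixed context construction (ours; the newline delimiters and reward prefixes mirror the description above, while the colon separator and the target bonus are assumptions):

def improvement_context(trajectories, rewards, target_bonus=5):
    # Each line: "<total reward>: <trajectory tokens>", sorted so rewards
    # increase down the context (a least-to-most meta-pattern), followed by
    # a higher-than-seen reward that the LLM is asked to realize.
    pairs = sorted(zip(rewards, trajectories), key=lambda p: p[0])
    lines = [f"{r}: {' '.join(map(str, traj))}" for r, traj in pairs]
    return "\n".join(lines) + f"\n{max(rewards) + target_bonus}: "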
Fig. 5: Average max return for LLM agents a1-d3 on Grid compared to random exploration (r).

Sampling higher-reward trajectories online. While LLMs can extrapolate from trajectories that exhibit clear meta-patterns among them, we find that this ability is more limited for less trivial setups. Consider a simple 9×9 Grid navigation environment with a random goal position and a fixed starting position at the grid center. Episodes terminate after 20 timesteps, and the return is based on the distance from the agent to the goal at the final time step. This environment is inspired by the Dark Room environment from [62], but with a continuous reward function, reducing the exploration challenge. The agent may take actions (1-5) corresponding to moving right, up, left, down, and no-op. We initialize the context buffer with 20 trajectories of agent grid positions generated by a random policy, sorted by total cumulative rewards. These trajectories exhibit a more complicated meta-pattern than in the Marker in Cup task; we do not find that LLMs can generate trajectories of higher reward immediately. With that said, we can consider an iterative, online setting, in which the LLM acts as an agent that interacts with the environment in a closed loop. The context consists of the highest-reward trajectories in sorted order, appended with a higher reward than was seen in the context, plus states and actions from the current partial trajectory (see Appendix). Once an episode terminates, its trajectory is relabeled with the reward achieved, and inserted into the context at the appropriate position. In Fig. 5, we plot the maximum return attained by a1-d3 over 50 episodes, compared to random exploration, averaged over 5 trials. We find that a1-d1 tend to sometimes "exploit" the suboptimal behaviors represented in the context (which initially contains trajectories with rewards ranging from 6-78), whereas d3 can consistently find a solution to Grid within 50 episodes.

Fig. 6: Different LLM agents (d3-c1) on average can improve trajectories (total rewards) with more CartPole episodes (left), and discover "oscillatory behaviors" (right) to keep the CartPole upright (later episodes are brighter).

Discovering a simple CartPole controller. We show that using LLMs as agents in an online, closed-loop setting can discover a simple controller for CartPole (where observations consist of pole angle and velocity, normalized to 0–100, actions are 1 (left) and 2 (right), and the maximum horizon is 200). Fig. 6 (left) shows that return (number of steps the CartPole is kept upright) improves on average across various LLMs over 100 episodes (where the first 100 are generated by random exploration). Fig. 6 (right) shows the evolution of trajectories over episodes of d3, demonstrating that it discovers oscillatory behaviors to keep the CartPole upright.

Fig. 7: LLMs can in-context react to sparse reward signals online to encourage an end effector to reach a desired goal. (The figure shows online in-context pushing over steps: x/y/z state traces toward the goal, with online reward signals and episode resets marked.)

Online human-guided trajectory optimization. LLMs can also react to sparse binary reward signals (e.g., subjectively provided by a human) to adjust trajectories online. This is analogous to an implementation of "clicker training" [81,82] used for training dogs, but instead applied to robots. In this setup, at every time step (2 s), the robot executes an action corresponding to a movement of its end-effector in a particular direction. The human observes the action and chooses whether to give a reward (i.e., by using the clicker) to encourage or discourage similar behaviors. Episodes reset after 30 seconds, and the first two episodes are generated by random exploration. The (reward, state, action) tuples are added as in-context examples (with negative examples followed by positives, and an equal number of each) to generate the next action based on the current state. An example context format is given in the Appendix. As shown in Fig. 7, applying LLMs' sequence improvement capabilities in this way enables a human to guide the robot to push an object.
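The online variants above share one loop, sketched below (ours; env stands in for any environment exposing reset() -> state and step(action) -> (state, reward, done), and llm_act for a call to the model; both are placeholders, not code from the paper):

def in_context_rl(env, llm_act, context, episodes=50, keep=20):
    # context: list of (return, trajectory) pairs, e.g. seeded with random
    # rollouts. llm_act prompts the LLM with the top-`keep` trajectories
    # sorted by return, a target return above the best seen, and the current
    # partial trajectory, then parses the next action token.
    for _ in range(episodes):
        state, done, traj, ret = env.reset(), False, [], 0.0
        while not done:
            context.sort(key=lambda pair: pair[0])
            action = llm_act(context[-keep:], traj, state)
            state, reward, done = env.step(action)
            traj.append((state, action))
            ret += reward
        context.append((ret, traj))  # relabel with the achieved return
    return max(r for r, _ in context)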
7 Discussion

We are excited about the opportunities of LLMs as pattern machines for robotics—from reasoning and extrapolating complex patterns as a prior for control, to online optimization of closed-loop policies via sequence improvement. These capabilities present several implications, including (i) perspectives on the role of language pre-training for end-to-end robot learning models [31,32], and (ii) in-context learning of arbitrary patterns as a driving mechanism for policy improvement. LLMs also show promise for mixed autonomy settings—e.g., real-time pattern extrapolation for assistive teleoperation. We expect many of these abilities to continue improving as large models expand from learning patterns within language-only datasets to multimodal domains (e.g., images, videos). While this work investigates in-context generalization on fairly simple settings without additional data collection or model training, these capabilities presumably may be significantly improved via domain-specific objectives and finetuning [83,84,64,65,42].

Limitations & Future Work. Today, the inference costs (and monetary costs) of using LLMs in the control loop are quite high. Predicting the next token for every sequence, e.g., every dimension of every time step in a trajectory, involves querying an LLM. State-action spaces which are higher dimensional and/or of greater precision also result in longer representations, and thereby the extent to which they can be extrapolated or sequence optimized is bounded by the context length of models. These limitations may prevent deploying these models on more complex tasks in practice; however, they may be partially mitigated by incorporating mechanisms like external memory, and by current efforts to drive improvements in LLM quantization [85] and inference efficiency [86]. An additional limitation lies in the fact that, for best performance, some care must be taken to represent patterns with consistent tokenization (which requires knowledge of the model's tokenization scheme). Finally, as with any other language-only model, LLM-based control may (i) be unpredictable, and (ii) lack visual/physical grounding; thus, it is not currently suitable for application outside of constrained lab settings. We leave the exploration of these important topics for future work.

Acknowledgments

The authors would like to acknowledge Jie Tan, Peng Xu, Carolina Parada, Alexander Herzog, Jensen Gao, Joey Hejna, and Megha Srivastava for valuable feedback and discussions.

References

[1] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[2] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[3] M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi, D. Zhou, et al. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv:2210.09261, 2022.
[4] A. Creswell, M. Shanahan, and I. Higgins. Selection-Inference: Exploiting large language models for interpretable logical reasoning. arXiv:2205.09712, 2022.
[5] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language models. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[6] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. In Conference on Robot Learning (CoRL), 2022.
[7] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. arXiv:2201.07207, 2022.
[8] Y. Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh. Translating natural language to planning goals with large-language models. arXiv:2302.05128, 2023.
[9] Y. Ding, X. Zhang, C. Paxton, and S. Zhang. Task and motion planning with large language models for object rearrangement. arXiv:2303.06247, 2023.
[10] B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone. LLM+P: Empowering large language models with optimal planning proficiency. arXiv:2304.11477, 2023.
[11] E. Zelikman, Q. Huang, G. Poesia, N. D. Goodman, and N. Haber. Parsel: A (de-)compositional framework for algorithmic reasoning with language models. arXiv:2212.10561, 2023.
[12] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2Motion: From natural language instructions to feasible plans. arXiv:2303.12153, 2023.
[13] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as Policies: Language model programs for embodied control. In International Conference on Robotics and Automation (ICRA), 2023.
[14] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating Situated Robot Task Plans using Large Language Models. In International Conference on Robotics and Automation (ICRA), 2023.
[15] M. Kwon, S. M. Xie, K. Bullard, and D. Sadigh. Reward Design with Language Models. In International Conference on Learning Representations (ICLR), 2023.
[16] H. Hu and D. Sadigh. Language Instructed Reinforcement Learning for Human-AI Coordination. In International Conference on Machine Learning (ICML), 2023.
[17] J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser. TidyBot: Personalized Robot Assistance with Large Language Models. In International Conference on Intelligent Robots and Systems (IROS), 2023.
[18] J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen. What Makes Good In-Context Examples for GPT-3. In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, 2021.
[19] Z. Zhao, E. Wallace, S. Feng, D. Klein, and S. Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning (ICML), 2021.
[20] F. Chollet. On the measure of intelligence. arXiv:1911.01547, 2019.
[21] S. Ferré. First Steps of an Approach to the ARC Challenge based on Descriptive Grid Models and the Minimum Description Length Principle. arXiv:2112.00848, 2021.
[22] Y. Xu, E. B. Khalil, and S. Sanner. Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus. In AAAI Conference on Artificial Intelligence, 2022.
[23] J. Ainooson, D. Sanyal, J. P. Michelson, Y. Yang, and M. Kunda. An approach for solving tasks on the Abstract Reasoning Corpus. arXiv:2302.09425, 2023.
[24] S. Alford. A Neurosymbolic Approach to Abstraction and Reasoning. PhD thesis, Massachusetts Institute of Technology, 2021.
[25] R. Assouel, P. Rodriguez, P. Taslakian, D. Vazquez, and Y. Bengio. Object-centric Compositional Imagination for Visual Abstract Reasoning. In ICLR Workshop on the Elements of Reasoning: Objects, Structure and Causality, 2022.
[26] A. Moskvichev, V. V. Odouard, and M. Mitchell. The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain. arXiv:2305.07141, 2023.
[27] V. Kolev, B. Georgiev, and S. Penkov. Neural abstract reasoner. In 4th Knowledge Representation and Reasoning Meets Machine Learning Workshop (KR2ML) at NeurIPS, 2020.
[28] T. Paparaju. ARC Competition: EDA + PyTorch CNN. https://www.kaggle.com/code/tarunpaparaju/arc-competition-eda-pytorch-cnn, 2022. Accessed: 2023-05-30.
[29] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? In Conference on Empirical Methods in Natural Language Processing, 2022.
[30] J. Pan, T. Gao, H. Chen, and D. Chen. What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning. In Findings of the Association for Computational Linguistics, 2023.
[31] K. Lu, A. Grover, P. Abbeel, and I. Mordatch. Pretrained transformers as universal computation engines. In AAAI Conference on Artificial Intelligence, 2022.
[32] M. Reid, Y. Yamada, and S. S. Gu. Can wikipedia help offline reinforcement learning? In International Conference on Learning Representations (ICLR), 2023.
[33] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. In Proceedings of Robotics: Science and Systems (RSS), 2022.
[34] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill, et al. On the opportunities and risks of foundation models. arXiv:2108.07258, 2021.
[35] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3M: A universal visual representation for robot manipulation. In Conference on Robot Learning (CoRL), 2022.
[36] I. Radosavovic, T. Xiao, S. James, P. Abbeel, J. Malik, and T. Darrell. Real-world robot learning with masked visual pre-training. In Conference on Robot Learning (CoRL), 2023.
[37] S. Karamcheti, S. Nair, A. S. Chen, T. Kollar, C. Finn, D. Sadigh, and P. Liang. Language-Driven Representation Learning for Robotics. In Proceedings of Robotics: Science and Systems (RSS), 2023.
[38] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[39] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
[40] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research (JMLR), 21(1):5485–5551, 2020.
[41] D. Hendrycks, X. Liu, E. Wallace, A. Dziedzic, R. Krishnan, and D. Song. Pretrained transformers improve out-of-distribution robustness. In Annual Meeting of the Association for Computational Linguistics, 2020.
[42] T. Dinh, Y. Zeng, R. Zhang, Z. Lin, M. Gira, S. Rajput, J.-y. Sohn, D. Papailiopoulos, and K. Lee. LIFT: Language-interfaced fine-tuning for non-language machine learning tasks. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[43] S. Chan, A. Santoro, A. Lampinen, J. Wang, A. Singh, P. Richemond, J. McClelland, and F. Hill. Data distributional properties drive emergent in-context learning in transformers. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[44] S. Garg, D. Tsipras, P. S. Liang, and G. Valiant. What can transformers learn in-context? A case study of simple function classes. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[45] E. Akyürek, D. Schuurmans, J. Andreas, T. Ma, and D. Zhou. What learning algorithm is in-context learning? Investigations with linear models. In International Conference on Learning Representations (ICLR), 2022.
[46] J. Von Oswald, E. Niklasson, E. Randazzo, J. Sacramento, A. Mordvintsev, A. Zhmoginov, and M. Vladymyrov. Transformers learn in-context by gradient descent. In International Conference on Machine Learning (ICML), 2023.
[47] L. Kirsch, J. Harrison, J. Sohl-Dickstein, and L. Metz. General-purpose in-context learning by meta-learning transformers. In Workshop on Meta-Learning at NeurIPS, 2022.
[48] S. M. Xie, A. Raghunathan, P. Liang, and T. Ma. An explanation of in-context learning as implicit bayesian inference. In International Conference on Learning Representations (ICLR), 2022.
[49] X. Wang, W. Zhu, and W. Y. Wang. Large language models are implicitly topic models: Explaining and finding good demonstrations for in-context learning. arXiv:2301.11916, 2023.
[50] S. C. Chan, I. Dasgupta, J. Kim, D. Kumaran, A. K. Lampinen, and F. Hill. Transformers generalize differently from information stored in context vs in weights. arXiv:2210.05675, 2022.
[51] R. N. Shepard and J.-J. Chang. Stimulus generalization in the learning of classifications. Journal of Experimental Psychology, 65(1):94, 1963.
[52] F. G. Ashby and J. T. Townsend. Varieties of perceptual independence. Psychological Review, 93(2):154, 1986.
[53] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[54] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
[55] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv:2204.02311, 2022.
[56] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al. PaLM 2 Technical Report. arXiv:2305.10403, 2023.
[57] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, P. Sermanet, N. Brown, T. Jackson, L. Luu, S. Levine, K. Hausman, and B. Ichter. Inner Monologue: Embodied Reasoning through Planning with Language Models. In Conference on Robot Learning (CoRL), 2022.
[58] A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, et al. Socratic Models: Composing zero-shot multimodal reasoning with language. In International Conference on Learning Representations (ICLR), 2023.
[59] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su. LLM-Planner: Few-shot grounded planning for embodied agents with large language models. arXiv:2212.04088, 2022.
[60] W. Huang, F. Xia, D. Shah, D. Driess, A. Zeng, Y. Lu, P. Florence, I. Mordatch, S. Levine, K. Hausman, et al. Grounded Decoding: Guiding text generation with grounded models for robot control. arXiv:2303.00855, 2023.
[61] G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291, 2023.
[62] M. Laskin, L. Wang, J. Oh, E. Parisotto, S. Spencer, R. Steigerwald, D. Strouse, S. Hansen, A. Filos, E. Brooks, et al. In-context reinforcement learning with algorithm distillation. In International Conference on Learning Representations (ICLR), 2023.
[63] M. Xu, Y. Shen, S. Zhang, Y. Lu, D. Zhao, J. Tenenbaum, and C. Gan. Prompting decision transformer for few-shot policy generalization. In International Conference on Machine Learning (ICML), 2022.
[64] Y. Zhang, D. Huang, B. Liu, S. Tang, Y. Lu, L. Chen, L. Bai, Q. Chu, N. Yu, and W. Ouyang. MotionGPT: Finetuned LLMs are General-Purpose Motion Generators. arXiv:2306.10900, 2023.
[65] J. N. Lee, A. Xie, A. Pacchiano, Y. Chandak, C. Finn, O. Nachum, and E. Brunskill. Supervised Pretraining Can Learn In-Context Reinforcement Learning. arXiv:2306.14892, 2023.
[66] T. Webb, K. J. Holyoak, and H. Lu. Emergent analogical reasoning in large language models. Nature Human Behaviour, pages 1–16, 2023.
[67] E. Brooks, L. Walls, R. L. Lewis, and S. Singh. In-Context Policy Iteration. arXiv:2210.03821, 2022.
[68] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
[69] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le, and E. Chi. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. In International Conference on Learning Representations (ICLR), 2023.
[70] Abstraction and Reasoning Challenge 1st place solution. https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge/discussion/154597, 2020.
[71] J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv:2104.09864, 2021.
[72] Y. Xu, W. Li, P. Vaezipoor, S. Sanner, and E. B. Khalil. LLMs and the abstraction and reasoning corpus: Successes, failures, and the importance of object-based representations. arXiv:2305.18354, 2023.
[73] R. Wang, E. Zelikman, G. Poesia, Y. Pu, N. Haber, and N. D. Goodman. Hypothesis search: Inductive reasoning with language models. arXiv:2309.05660, 2023.
[74] A. Srivastava, A. Rastogi, A. Rao, A. A. M. Shoeb, A. Abid, A. Fisch, A. R. Brown, A. Santoro, A. Gupta, A. Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research (TMLR), 2022.
[75] R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Annual Meeting of the Association for Computational Linguistics, 2015.
[76] T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Conference on Empirical Methods in Natural Language Processing, 2018.
[77] D. Hupkes, V. Dankers, M. Mul, and E. Bruni. Compositionality decomposed: How do neural networks generalise? Journal of Artificial Intelligence Research (JAIR), 2020.
[78] Z. Allen-Zhu and Y. Li. Physics of Language Models: Part 1, Context-Free Grammar. arXiv:2305.13673, 2023.
[79] E. Jang. Just Ask for Generalization. In https://evjang.com/2021/10/23/generalization.html, 2022.
[80] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[81] F. Kaplan, P.-Y. Oudeyer, E. Kubinyi, and A. Miklósi. Robotic clicker training. Robotics and Autonomous Systems, 38(3-4):197–206, 2002.
[82] C. Chiandetti, S. Avella, E. Fongaro, and F. Cerri. Can clicker training facilitate conditioning in dogs? Applied Animal Behaviour Science, 184:109–116, 2016.
[83] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyürek, A. Anandkumar, et al. Pre-trained language models for interactive decision-making. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[84] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PaLM-E: An embodied multimodal language model. arXiv:2303.03378, 2023.
[85] O. Zafrir, G. Boudoukh, P. Izsak, and M. Wasserblat. Q8BERT: Quantized 8bit BERT. In Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE, 2019.
[86] T. Dao, D. Fu, S. Ermon, A. Rudra, and C. Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[87] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, and S. Hochreiter. History compression via language models in reinforcement learning. In International Conference on Machine Learning (ICML), 2022.
[88] K. Ellis, C. Wong, M. Nye, M. Sablé-Meyer, L. Morales, L. Hewitt, L. Cary, A. Solar-Lezama, and J. B. Tenenbaum. DreamCoder: Bootstrapping Inductive Program Synthesis with Wake-Sleep Library Learning. In ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), 2021.
[89] K. Ellis, L. Morales, M. Sablé-Meyer, A. Solar-Lezama, and J. Tenenbaum. Learning libraries of subroutines for neurally-guided bayesian program induction. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[90] M. F. Cusumano-Towner, F. A. Saad, A. K. Lew, and V. K. Mansinghka. Gen: A General-Purpose Probabilistic Programming System with Programmable Inference. In ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), 2019.

A Sequence Transformation

A.1 Abstraction and Reasoning Corpus: Additional Details and Examples

In Section 4 of the main paper, we describe how ARC problems require reasoning about a range of different types of pattern operations—infilling, counting, translating and rotating shapes, and more. In Fig. 8, we show sample problems among the 800 ARC problems for which text-davinci-003 correctly generalizes the pattern shown in a few train examples to a test example. In Fig. 9, we show sample problems that are not correctly solved by text-davinci-003. In Listing 1, we show an example context for an ARC problem encoded as integers.

Fig. 8: Sample ARC problems that are correctly solved by text-davinci-003. (The figure shows train examples and a test example as input/output grid pairs.)
Fig. 9: Sample ARC problems that are not correctly solved by text-davinci-003. (The figure shows train examples and a test example as input/output grid pairs.)

input:
0, 0, 0, 0
0, 3, 4, 0
0, 7, 6, 0
0, 0, 0, 0
output:
3, 0, 0, 4
0, 0, 0, 0
0, 0, 0, 0
7, 0, 0, 6
---
input:
0, 0, 0, 0
0, 5, 6, 0
0, 8, 3, 0
0, 0, 0, 0
output:

Listing 1: Example context format for an ARC problem (only one input-output example is shown, along with a query input).

A.2 Patterns over Low-Resolution Images

In Fig. 10, we show an example in-context grasp detector which outputs target coordinates in a downsampled image, given 6 in-context examples, as well as an example of a simple forward dynamics model predicting spatial rearrangement of a red bowl into a green plate, given 9 in-context examples. As LLMs progress on the benchmarks discussed in Section 4, they may become more robust and precise at performing such tasks.

Fig. 10: Example LLM prediction as an in-context grasp detector (top) and a simple forward dynamics model (bottom). (Columns show the input image, its low-resolution version, the input and output token grids, and the rendered output.)

A.3 Token Invariance for New Token Embeddings

In Section 4, we have argued that LLMs are, to a certain extent, invariant to the choice of alphabet a pattern is encoded with, in line with prior work on mappings from semantically meaningful tokens to random tokens in a pre-trained language model [29,30,87]. Here, we present an experiment that investigates token invariance even further by introducing new token embedding vectors the model has not seen during training.

We sample K many new embedding vectors as Gaussian samples using the mean and 1 or 2 standard deviations of the original LLM embedding matrix statistics in each embedding vector dimension. This way, we create a new token embedding matrix that mainly consists of the newly sampled token embeddings the model has not seen during training. Additionally, we add embeddings from the original matrix that correspond to separating tokens (comma, period) to build the input prompts. Although the model has never seen the new embedding vectors during training, we can feed them into the transformer as input and compute cosine similarities at the output analogously to how the original embedding matrix is treated.

Fig. 11 shows the success rate of correctly choosing the target token in a single-token prediction task when using the newly sampled embeddings in comparison with the native embedding matrix. The tasks we are considering are of the form (1, 1, 2) ↦ 2 or (1, 2, 2) ↦ 1. We provide in-context examples to build a prompt of the form "1, 2, 2, 1 \n 3, 4, 4, 3 \n 5, 6, 6, 5 \n 7, 8, 8, " where the correct prediction should be "7" in this example. Note that the numbers "1", "2", etc. are randomly mapped to the newly sampled token embeddings for indexing purposes and in particular do not enter the LLM.
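A sketch of the embedding-sampling step (ours, in NumPy; embedding_matrix stands in for the LLM's native (V × D) token embedding table, and the 1σ/2σ scaling matches the description above):

import numpy as np

def sample_new_embeddings(embedding_matrix, k, sigma_scale=1.0, rng=None):
    # Gaussian samples using the per-dimension mean and (scaled) standard
    # deviation of the native embedding matrix; these vectors correspond to
    # no token the model saw during training.
    rng = rng or np.random.default_rng()
    mu = embedding_matrix.mean(axis=0)
    sd = embedding_matrix.std(axis=0) * sigma_scale
    return rng.normal(mu, sd, size=(k, embedding_matrix.shape[1]))

The sampled rows are fed to the transformer as input embeddings, and output logits are read out via cosine similarity against the same rows, as described above.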
As one can see in Fig. 11, for 1σ noise sampling, the model is able to solve the task with the new embeddings with similar performance as with the native embeddings. In the case of 2σ, the performance degrades. Although these are relatively simple single-token prediction tasks, this experiment shows that LLMs exhibit pattern recognition abilities even when prompted with out-of-distribution continuous input embeddings. The results are obtained with K = 100, averaged over 3 random seeds when sampling the token embeddings, 30 instances each, and a context length of 5, 10, or 20 examples. The LLM is the 8B-parameter variant of [55].
Fig. 11: Token-invariance experiment with newly sampled token embeddings the model has not seen during training. Shown are success rates when using randomly sampled token embeddings from the native embedding matrix, or newly sampled embeddings.

A.4 PCFG Benchmark: Additional Details and Ablations
Our PCFG benchmark is a procedurally generated, adjustable-difficulty benchmark for measuring abstract sequence transformation capabilities in LLMs, based on the PCFG from [77]. In Table 3, we show illustrations of the primitive operations in the PCFG that can be applied on one or two sequences of tokens. In Table 4, we show examples of two transformations (of different complexities) from our benchmark, which are composed of the primitive operations. In Table 5, we show independent ablations of sequence length (number of tokens) k and complexity (number of rules) w in the sequence transformations, illustrating the way in which the solve rate decreases as either factor increases. In Listing 2, we show an example context for a PCFG problem on integer sequences.
Tab. 3: Illustrations depicting the unary operators (copy, reverse, shift, repeat, echo, swap) and binary operators (append, prepend, remove_first, remove_second) from Hupkes et al. 2020, which we use for our PCFG benchmark.

6 7 7 8 1 5 9 8 9, 1 5 9 8 9 7 7 6 6; 4 3 0 3 5 0 2 3 8, 5 0 2 3 8 3 3 4 4; 1 3 3 3 7 0 1 9 9,
Listing 2: Example context format for a PCFG problem (two input-output examples are shown, along with a query input).

Tab. 4: Illustrations of transformations in our PCFG benchmark. Row 1 shows a transformation (e.g., remove_second(swap(s1, s2), s3)) composed of w = 2 operations over k = 3 tokens, and row 2 shows a transformation composed of w = 8 operations over k = 10 tokens, respectively. For each transformation function, we show two example inputs and the corresponding outputs.

text-davinci-003
k \ w    0     1     3     7    15    31
1      100     -     -     -     -     -
2      100   100     -     -     -     -
4      100   100   100     -     -     -
8      100    99    95    92     -     -
16     100    86    59     4    47     -
32     100    74    32    14    12    22

text-davinci-003 w/ random A
k \ w    0     1     3     7    15    31
1       92     -     -     -     -     -
2       91    92     -     -     -     -
4       93    92    93     -     -     -
8       88    82    62    49     -     -
16      84    64    32    17    22     -
32      83    40    13     8     9    12

PaLM
k \ w    0     1     3     7    15    31
1      100     -     -     -     -     -
2      100   100     -     -     -     -
4      100   100   100     -     -     -
8      100    89    74    82     -     -
16     100    78    57    51    58     -
32     100    68    23    18    22    34

Tab. 5: Solve rate (%) for PCFG across number of tokens k and number of rules w for different models.

A.5 PCFG Benchmark: Program Synthesis
We have run DreamCoder [88] on our PCFG benchmark to contextualize the hardness of the task, and present the results in Table 6. We ran DreamCoder with two different sets of initial primitives:
• PCFG Ops. In this version, we provide DreamCoder with an initial set of primitives that corresponds to the exact set of unary and binary functions (from [77]) that the PCFG benchmark is based on: copy, reverse, shift, swap, repeat, echo, append, prepend, remove_first, remove_second. We also include a slicing operator slice, length, and integers 1–10.
• List Ops. In this version, we provide DreamCoder with a set of list primitives: length, empty, singleton, range, append, map, reduce, true, not, and, or, sort, add, negate, equal, reverse, index, filter, slice. These primitives are not specially designed for PCFG and are based on those used in [88, 89].
In both cases, the provided primitives are sufficient to define the transformations in the PCFG benchmark; a code sketch of these operations follows Table 6. For each (k, w) pair in Table 6, we train on 100 task instances and report the number of tasks which get solved (i.e., a correct program that satisfies the training examples is discovered). We ran DreamCoder for 4 iterations and use the default hyperparameters and timeout. As we would expect, the version of DreamCoder with "oracle" access to the PCFG operations performs well, in several cases matching or exceeding the performance of LLMs. This is especially true when the search problem is easier (i.e., when the number of functions w is smaller). The version of DreamCoder with access to list primitives is also able to solve many of the tasks with small values of w, but there is a sizeable dropoff as the complexity of the tasks increases. These results help to contextualize the difficulty of the PCFG benchmark when given access to different amounts of domain-specific information. We also note that we would expect brute-force search over the PCFG operators to eventually solve these tasks. Doubling the computation time budget for the version with oracle access to the PCFG operators leads to increased success rates: (k=8, w=3) increases from 80 → 84; (k=16, w=7) increases from 33 → 41.

DreamCoder w/ PCFG Ops.
k \ w    0     1     3     7    15    31
1      100     -     -     -     -     -
2      100   100     -     -     -     -
4      100   100    98     -     -     -
8      100   100    80    58     -     -
16     100    98    64    38    38     -
32     100    96    41    24    26    37

DreamCoder w/ List Ops.
k \ w    0     1     3     7    15    31
1      100     -     -     -     -     -
2      100   100     -     -     -     -
4      100    85    51     -     -     -
8      100    81    23    25     -     -
16     100    63    26     4    17     -
32     100    66     6     3     5    11

Tab. 6: Solve rate (%) for PCFG across number of tokens k and number of rules w for DreamCoder initialized with two different sets of primitives.
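The sketch below gives one plausible Python rendering of the primitive set. The exact semantics of shift, swap, and echo in the benchmark generator may differ in details (e.g., shift direction), so treat these definitions as illustrative assumptions rather than the generator's code.

# Sketch of the PCFG primitives (semantics assumed from Hupkes et al. 2020).
from typing import List
Seq = List[int]

def copy(x: Seq) -> Seq: return list(x)
def reverse(x: Seq) -> Seq: return x[::-1]
def shift(x: Seq) -> Seq: return x[1:] + x[:1]         # rotate left by one (assumed)
def swap(x: Seq) -> Seq:                               # swap first and last tokens
    return list(x) if len(x) < 2 else [x[-1]] + x[1:-1] + [x[0]]
def repeat(x: Seq) -> Seq: return x + x
def echo(x: Seq) -> Seq: return x + x[-1:]             # repeat the final token
def append(x: Seq, y: Seq) -> Seq: return x + y
def prepend(x: Seq, y: Seq) -> Seq: return y + x
def remove_first(x: Seq, y: Seq) -> Seq: return y      # drop the first argument
def remove_second(x: Seq, y: Seq) -> Seq: return x     # drop the second argument

# A small w=2-style composition over sub-sequences, in the spirit of Table 4:
s1, s2, s3 = [5], [3], [0]
print(remove_second(swap(append(s1, s2)), s3))         # -> [3, 5]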
B Sequence Completion
B.1 Sinusoids: Structure Learning Comparison
While the sinusoid extrapolation could be easily performed with standard regression techniques, we contextualize the task with another method that has no specific prior knowledge of the function being extrapolated. We include the structure learning baseline from [90], implemented with Gen [90]. This method uses a Gaussian Process with an inferred covariance kernel to model time series data. The covariance kernel is inferred using MCMC sampling over a PCFG of covariance functions (e.g., squared exponential, periodic). We run the algorithm for 100 iterations and sample from the resulting GP, as shown below in Fig. 12. The training data is generally fit with low error. However, the quality of the completion differs for the various functions; the sine wave is generally extrapolated well, whereas the sinusoids yield high-variance samples. Similar to the LLMs, greater context generally yields lower-error completions. Note, however, that the outputs of the structure learning algorithm are high-variance by design, and there are multiple ways to utilize the outputs of the algorithm. We also note that [90] was tested on a larger set of functions than those we look at here. Though not the goal of our work, it would be interesting future work to evaluate how LLMs extrapolate patterns generated by a wider array of function classes. We also refer to [42] for an extensive comparison of language models to baselines on regression tasks when formulated with a natural language interface, as well as a study on the effects of fine-tuning.
Fig. 12: The structure learning approach extrapolates various functions y = a·sin(bx) (top row), y = ax·sin(bx) (middle row), and y = a·2^x·sin(bx) (bottom row) with different degrees of error. More context also generally helps prediction accuracy (light vs. dark).

B.2 Table Sweeping: Additional Details
In Section 5 of the main paper, we demonstrate how sequence completion capabilities can be applied to continuation of partial motions, such as sweeping a table. In Fig. 13, we show the average DTW distance between predicted and ground truth trajectory completions in the Table Sweeping task, given 66% of the trajectory as context, over 30 trials. Each full trajectory consists of 9 sweeping motions across a table. We compare completions made by various language models. We find that larger models generally perform better; text-davinci-003 performs the best, and also has the lowest variance. On our website, we show qualitative examples of text-davinci-003 completing a table sweeping motion given by a human demonstration.
Fig. 13: LLM trajectory predictions on Table Sweeping improve with larger models.
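For reference, the DTW metric reported in Fig. 13 can be computed with the standard dynamic program below. This is our own minimal sketch, not the paper's evaluation code.

# Minimal dynamic-time-warping distance between two trajectories,
# each an array of (x, y) waypoints. Illustrative implementation only.
import numpy as np

def dtw_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    n, m = len(pred), len(gt)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(pred[i - 1] - gt[j - 1])  # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],         # insertion
                                 cost[i, j - 1],         # deletion
                                 cost[i - 1, j - 1])     # match
    return float(cost[n, m])

pred = np.array([[0, 0], [1, 1], [2, 0]], dtype=float)
gt = np.array([[0, 0], [1, 1], [1.5, 0.5], [2, 0]], dtype=float)
print(dtw_distance(pred, gt))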
B.3 Whiteboard Drawing: Qualitative Results
In Fig. 14, we show example completions for three different loop styles by text-davinci-003 over three trials. The completions generally match the overall shape shown in the two loops given as context. However, the results also qualitatively illustrate that fine motion patterns can be challenging to predict precisely.
Fig. 14: Sampled drawings produced by performing an in-context completion (of one loop, highlighted in green) given a scripted demonstration of two loops. Each row is a different loop style (narrow, medium, wide), and each column is a different trial. Results are shown for text-davinci-003.

C Sequence Improvement
C.1 Marker in Cup: Additional Details
In this task, we use LLMs to generate improved trajectories (according to a reward metric) given a context of trajectories that have increasing returns. For this task, states are Cartesian (x, y, z) positions, with each dimension normalized between 0 and 200, trajectories are series of states that can be executed via position control, and the return of a trajectory is proportional to the negative distance to the goal (cup) plus an offset. We form the trajectories in the context as follows: we take a full trajectory which attains a reward of 100 and construct trajectories that stop moving 20%, 40%, 60%, and 80% of the way to the goal (such that all trajectories are 50 timesteps). We condition the LLM to generate a 100-reward trajectory by prompting it with "100: start state". An excerpt of an example context is shown in Listing 3. The results in Figure 5 from the main paper are over 11 trials, each with a different goal position.

71: 104 83 123, 104 83 123, ...
72: 104 83 123, 104 83 123, ...
80: 104 83 123, 104 83 123, ...
90: 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 104 83 123, 105 83 123, 105 83 123, 106 83 123, 106 83 123, 107 83 123, 108 83 122, 109 83 122, 110 83 122, 111 83 121, 112 82 120, 113 82 119, 113 82 118, 114 81 118, 115 81 117, 115 81 116, 115 80 115, 116 80 114, 116 80 113, 117 79 112, 117 79 111, 118 79 110, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109, 118 78 109
100: 104 83 123
Listing 3: Example context (excerpt) for Marker in Cup, illustrating the (reward: state, state, state, ...) format.

C.2 Grid: Additional Details
In the Grid environment, observations are x, y positions represented by integers 0–8 for each coordinate. There are five possible actions (1, 2, 3, 4, 5) corresponding to (right, up, left, down) movement by one space, plus a no-op. A goal is randomly placed in the grid. The agent (which is initialized at the center position) receives a reward of 100 - 10 * distance from the goal to the agent's final position. Episodes terminate after 20 time steps. For our experiments, we limit the context length to 1024 tokens. At each iteration, the LLM is prompted to generate a trajectory with the maximum seen return from the buffer plus a randomly selected offset of up to 20.

C.3 CartPole: Additional Details
We use a simplified version of the CartPole environment in OpenAI Gym. Observations are two-dimensional (corresponding to pole angle and velocity, normalized to 0-100) and the maximum time horizon is 200. There are two possible actions (1, 2) corresponding to (left, right), and the agent gets +1 reward for every time step that the CartPole is kept upright. In Listing 4, we show an example context excerpt for CartPole, where a trajectory history is appended with an encoding of the current trajectory.

52: 40 50, 1, 40 54, 2, 41 49, 1, 41 54, 1, ...
60: 45 50, 2, 45 45, 1, 44 50, 2, 44 45, 1, ...
75: 52 50, 1, 52 55, 2, 53 50, 2, 53 46, 2, ...
98: 44 50, 1, 44 55, 2, 45 50,
Listing 4: Example context format for a CartPole run. A trajectory history (with each trajectory in the format reward: observation, action, observation, action, ...) is followed by an encoding of the current trajectory, up to the current observation.

Below, we discuss some additional considerations for forming the context from the trajectory history.
Context Length. When the context length is longer, more trajectories can fit in the context (which yields more in-context "training data" that could potentially be used to generalize to higher rewards, but also requires the LLM to attend over more tokens). Context length is a limiting factor of using current LLMs in our trajectory improvement setting: the number of tokens required to represent a trajectory history scales with the observation dimensionality, action dimensionality, time horizon, and number of trajectories. For our CartPole experiments, we limit the context to 1024 tokens (which is the maximum context length for the text-ada-001, text-babbage-001, and text-curie-001 models). A rough token-budget estimate is sketched below.
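As a back-of-the-envelope illustration of this scaling, the snippet below estimates how many trajectories fit in a fixed token budget. The assumption that each number and separator costs roughly one token is ours and will vary by tokenizer.

# Rough context-budget estimate for the (reward: obs, action, obs, ...) format.
# Assumes ~1 token per number and ~2 tokens of separators per step; real
# tokenizers differ.

def tokens_per_trajectory(obs_dim: int, horizon: int) -> int:
    per_step = obs_dim + 1 + 2          # observation numbers + action + separators
    return 2 + horizon * per_step       # "reward:" prefix + all timesteps

def trajectories_that_fit(budget: int, obs_dim: int, horizon: int) -> int:
    return budget // tokens_per_trajectory(obs_dim, horizon)

# CartPole-like setting: 2-D observations, horizon 200, 1024-token context.
print(tokens_per_trajectory(obs_dim=2, horizon=200))        # ~1002 tokens
print(trajectories_that_fit(1024, obs_dim=2, horizon=200))  # ~1 trajectory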
Action Representation. In initial experiments, we found that the tokens used to represent the action space (e.g., "0" for left, "1" for right) can seemingly affect the ability of an LLM to improve trajectories in the online setting. For example, we observed that if "0" is included in the action space, LLMs may "default" to sampling "0" (likely due to token-specific priors). Therefore, for our experiments, we use 1-indexed integer action representations, which appears to alleviate the bias towards choosing a particular action. The fact that action representation can sometimes affect performance complements our observations in the Sequence Transformation section, in which we find that token mapping invariance holds to some extent, but not entirely.

C.4 Clicker Training: Additional Details
In our clicker training example, the observation consists of the end-effector position and the approximate object position as determined by visual input, with the (x, y, z) values normalized between 0 and 300. Actions correspond to movements of the end-effector (normalized between 0 and 100, such that 50, 50, 50 represents no movement). A sample context is given in Listing 5; a sketch of assembling such a context follows the listing.

0: 80, 49, 138, 109, 54, 133; 45, 44, 55
0: 82, 32, 155, 109, 54, 133; 48, 59, 48
0: 82, 32, 155, 109, 54, 133; 48, 59, 48
1: 88, 31, 154, 109, 54, 133; 45, 54, 43
1: 85, 36, 146, 109, 54, 133; 57, 54, 46
1: 93, 40, 142, 109, 54, 133; 44, 52, 43
1: ...
Listing 5: Example context format for clicker training. (Reward, observation, action) tuples are ordered by reward (with a click corresponding to a reward of 1), with an equal number of reward-0 and reward-1 transitions represented in the context.
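A minimal sketch of assembling this clicker-training context appears below. The balancing of reward-0 and reward-1 tuples follows the description above, while the function and variable names are illustrative assumptions.

# Sketch: build a clicker-training context of (reward: observation; action)
# lines, ordered by reward with equal counts of 0- and 1-reward transitions.
import random
from typing import List, Tuple

Transition = Tuple[int, List[int], List[int]]  # (reward, observation, action)

def build_context(transitions: List[Transition], n_per_class: int) -> str:
    zeros = [t for t in transitions if t[0] == 0]
    ones = [t for t in transitions if t[0] == 1]
    chosen = random.sample(zeros, n_per_class) + random.sample(ones, n_per_class)
    chosen.sort(key=lambda t: t[0])  # reward-0 tuples first, then reward-1
    lines = [f"{r}: {','.join(map(str, obs))}; {','.join(map(str, act))}"
             for r, obs, act in chosen]
    return "\n".join(lines) + "\n1: "  # prompt the LLM for a reward-1 action

data = [(0, [80, 49, 138, 109, 54, 133], [45, 44, 55]),
        (1, [88, 31, 154, 109, 54, 133], [45, 54, 43])]
print(build_context(data, n_per_class=1))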
WuBv9-IGDUA
Multi-Resolution Sensing for Real-Time Control with Vision-Language Models
Saumya Saxena∗1, Mohit Sharma∗1, Oliver Kroemer1
1Robotics Institute, Carnegie Mellon University
{saumyas, mohits1, okroemer}@cmu.edu
∗Equal contribution.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.
Abstract: Leveraging sensing modalities across diverse spatial and temporal resolutions can improve the performance of robotic manipulation tasks. Multi-spatial-resolution sensing provides hierarchical information captured at different spatial scales and enables both coarse and precise motions. Simultaneously, multi-temporal-resolution sensing enables the agent to exhibit high reactivity and real-time control. In this work, we propose a framework for learning generalizable language-conditioned multi-task policies that utilize sensing at different spatial and temporal resolutions using networks of varying capacities to effectively perform real-time control of precise and reactive tasks. We leverage off-the-shelf pretrained vision-language models to operate on low-frequency global features along with small non-pretrained models to adapt to high-frequency local feedback. Through extensive experiments in 3 domains (coarse, precise and dynamic manipulation tasks), we show that our approach significantly improves (2× on average) over recent multi-task baselines. Further, our approach generalizes well to visual and geometric variations in target objects and to varying interaction forces.

1 Introduction
Performing robotic manipulation tasks in the real world often requires using sensing modalities at different spatial resolutions. For instance, for peg-insertion, the robot can use a statically-mounted third-person camera (low spatial resolution or global information) to reach close to the hole, use a wrist-mounted first-person camera for finer alignment, and finally use proprioception and force-feedback for insertion (high spatial resolution or local information). Additionally, each sensing modality can be utilized at a different temporal resolution. For example, for coarse quasi-static subtasks ("reach hole"), using third-person camera images at a low frequency can be sufficient. However, finer reactive subtasks ("insert peg") might require high-frequency force-torque feedback. Based on this insight, we propose a multi-resolution (spatial and temporal) sensor fusion approach for coarse quasi-static as well as precise reactive manipulation tasks.
Multi-resolution sensor fusion can enable generalization to novel visual-semantic targets. For instance, by utilizing global information from third-person camera images only for coarse localization and relying on local information from in-hand cameras and force-torque feedback for finer motions, the policy can learn to generalize to novel objects. Previous approaches to learning generalizable policies either require extensive data collection [1, 2, 3] or rely on pretrained models [4, 5, 6, 7] for policy adaptation [8]. However, such approaches typically utilize a single sensory modality, while others that incorporate multiple sensors do not prioritize generalization [9]. In our work, we avoid extensive data collection and instead leverage pretrained vision-language models in our multi-resolution approach to learning generalizable language-conditioned multi-task policies.
Although pretrained vision or vision-language models (VLMs) provide impressive generalization capabilities and enable learning language-conditioned multi-task policies, using large VLMs has certain disadvantages.
First, given their large size (e.g., Flamingo has 80B parameters [6]), they have slow inference, which makes them unusable for the real-time closed-loop control that is necessary for reactive tasks. Second, since pre-trained models are often trained on out-of-domain data, using them to solve in-domain manipulation tasks (especially precise tasks) may require finetuning [10]. However, task-specific finetuning can make models less robust, with reduced generalization [11].
Figure 1: Our proposed approach uses sensing at different spatial and temporal resolutions for real-time control of coarse, precise and dynamic tasks while enabling generalization to novel visual features and interactions.
To overcome the above challenges of utilizing large pretrained VLMs for real-time control of reactive tasks, we propose a framework that incorporates networks of different capacities (operating on different sensing modalities) at different frequencies. Specifically, we use large pretrained VLMs with slow inference at a lower frequency, and small networks with fast inference at a higher frequency. Our low-frequency pretrained VLMs operate on statically mounted third-person views and can provide global coarse feedback (such as approximate object locations) that is usually only needed at a low rate. On the other hand, we propose using small trained-from-scratch models with first-person camera views and force-torque data to obtain the high-frequency fine-grained feedback necessary to perform precise and reactive tasks. Further, to overcome the challenge of loss in generalization when finetuning pre-trained VLMs, we freeze the pretrained VLMs to avoid losing their robustness and maintain their generalization abilities. Overall, our main contributions include:
• a framework for learning generalizable multi-task policies that incorporates multiple sensory modalities to capture global-to-local spatial information,
• combining sensor modalities at different frequencies to avoid bottlenecks and enable reactive control, which we show empirically is essential for dynamic tasks,
• comprehensive experiments across 3 domains (and 2 real-world tasks) that include coarse, precise and dynamic manipulation tasks, and
• effective generalization across semantic task variations in both simulation and the real world.

2 Related Work
Vision-Language Pretrained Models for Robot Manipulation: Many prior works combine vision and language for robotic tasks. While early works focus on tabula-rasa learning [12, 13, 14], more recent works use pretrained large language models (LLMs) and show efficient learning and improved generalization for robotics tasks [15, 16, 17, 18, 19]. Many recent works also combine large general-purpose pretrained vision or vision-language models (VLMs) [4, 6, 20] for manipulation [21, 22, 8, 10, 23, 24, 25, 26, 27]. Our work is more closely related to these latter works in that we also use pretrained VLMs for robot manipulation. Among these works, many use language only for task specification and do not focus on the generalization provided by pretrained models [26, 27].
Additionally, other works adapt the pretrained representation for the downstream task [24, 10, 28]. However, as we show empirically, such updates lead to representation drift and a loss of robustness for the pretrained general-purpose VLM. Hence, we propose not updating the pretrained representations. While [25, 8] use frozen VLMs, [25] only uses the pretrained VLM as an open-world object detector to get pixel targets for the task at the first episode step. On the other hand, [8] uses the pretrained VLM with templated pick-and-place actions for manipulation. By contrast, we use VLMs in our multi-resolution framework with continuous feedback for reactive manipulation tasks.
Figure 2: Overall architecture: Global low-frequency information is extracted from third-person camera images using slow-inference networks; local high-frequency information is extracted from first-person camera images and proprioceptive, force-torque feedback using fast-inference networks. These sensing modalities are then fused at different frequencies to enable real-time high-frequency control.
Multi-Spatial Resolution for Robot Manipulation: Many prior works use multiple sensor modalities for robot manipulation, wherein each modality operates at a different spatial resolution. For instance, prior works often combine visual (low spatial resolution) and proprioceptive (high spatial resolution) feedback [29, 30, 31], use wrist-mounted cameras for visual servoing [32, 33, 34] or for contact-rich manipulation tasks [35, 36, 37, 38], while other works focus on combining vision and haptic sensing [39, 40, 41, 42]. Our work is similar to the first set of works, i.e., we use both third-person and first-person cameras for precise manipulation. However, unlike most prior works [35, 38] which focus on single-task settings, we focus on multi-task settings and fuse multiple sensing modalities at different resolutions.
Multi-Temporal Resolution for Robot Manipulation: Learning reactive policies requires the robot to operate at high frequencies. Some recent works in robot manipulation focus on learning policies at different temporal resolutions. For instance, [43] decompose a manipulation task into different phases (e.g., a visual reaching phase and a tactile interaction phase) and learn separate policies for each phase as well as a blending policy. Meanwhile, [44] avoid the discrete formulation of an MDP and instead learn a continuous differential equation [45, 46] to model the low-resolution features. By contrast, we use the discrete formulation and, instead of decomposing policies into different phases, we reuse features from low-resolution signals while operating at a high temporal resolution.
Dynamic Reactive Manipulation: Many prior works in robot manipulation focus on quasi-static tasks [17, 1]. However, there has been increased interest in solving tasks that are reactive and dynamic in nature [47, 48, 49]. Previous works focus on explicitly learning the dynamics [49] or using analytical models [47, 50] of such systems for achieving reactivity. These works often assume access to the ground-truth object pose and are limited to a single-task setting.
In our work, we learn how to perform such dynamic and reactive tasks using visual inputs in a multi-task setting.

3 Proposed Approach
In this section, we discuss our approach for learning a generalizable language-conditioned multi-resolution multi-task policy for precise and reactive manipulation tasks. Below, we provide details on how we utilize different sensing modalities, then delineate our training and inference procedures, and discuss how our approach enables real-time control for reactive tasks while generalizing to novel tasks.

3.1 Multi-Resolution Architecture
Figure 2 shows the architecture of our multi-resolution approach. Our model takes as input multiple sensing modalities with different spatial resolutions, i.e., a statically-mounted third-person camera view, a first-person camera view and high-frequency force-torque feedback. Each input is first processed separately before being fused together at different temporal resolutions to output high-frequency robot actions. Below we expand on each component of our architecture.
Figure 3: Task settings for evaluating our proposed approach (Pickup Blocks, Insert Blocks, Square Insert, Pick & Lift Small, Shape Sort, Take USB Out). Left: Precision tasks. Middle-left: Dynamic tasks. Middle-right: Coarse tasks. Right: Real-world pick and insertion tasks.
Low-Spatial-Resolution Model: We use a low-spatial-resolution sensor (third-person camera) to provide global task information to our agent. We use pretrained vision-language models to extract this global information from third-person views as well as to enable language conditioning in a multi-task setting. Such pretrained models enable generalization to novel semantic features such as new objects or novel language commands. To ensure the pretrained model maintains its robustness, we keep it frozen. However, using large VLMs to extract this generalizable global information comes with the drawback that inference is very slow (≈5 Hz). We experiment with two models, CLIP [4] and MDETR [51] (language-conditioned DETR [52]), which use image-level and object-level information respectively.
High-Spatial-Resolution Model: To ensure reactivity in the face of the slow inference of pretrained VLMs, we use a smaller non-pretrained vision model (ResNet-18) [53] to process the first-person camera view at a higher frequency (≈20 Hz). This view provides us with high-resolution local spatial information. To provide appropriate task context to the first-person view, we use small FiLM layers [54] for language conditioning. We train this model from scratch with augmentations (explained below) to extract local spatial features that are useful for precise tasks. While using a small vision model enables faster processing, it can still be insufficient for some highly dynamic tasks. Hence, we process the force-torque feedback and proprioceptive information at a much higher frequency (≈75 Hz) using a small linear layer.
Multi-Resolution Sensor Fusion: We combine the local and global sensing information (spatial resolutions) mentioned above at different temporal resolutions based on the capacities of the respective networks. Specifically, we reuse features (network activations) from the lower-frequency (third-person and first-person view) networks to match the frequency of the highest-frequency (force-torque feedback) network. Doing this ensures that the policy network outputs actions at a high frequency (equal to the frequency of the force-torque feedback network), thus enabling real-time control; a sketch of this feature-reuse loop is given below.
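The following sketch illustrates the feature-reuse idea: slow networks refresh cached features at their own rates while the policy head runs at every high-frequency tick. All names and the rate bookkeeping are illustrative assumptions rather than the paper's implementation.

# Sketch of multi-temporal-resolution feature reuse (assumed structure):
# slow encoders refresh cached activations at 5 Hz / 20 Hz, while the
# policy head consumes the latest caches at every 75 Hz control tick.
import time

RATES = {"vlm": 5.0, "hand_cam": 20.0, "policy": 75.0}  # Hz, as in Section 3.1

def control_loop(vlm, hand_encoder, ft_encoder, policy, read_sensors, send_action):
    cache, last = {}, {"vlm": -1e9, "hand_cam": -1e9}
    while True:
        t = time.monotonic()
        obs = read_sensors()  # dict with camera images, wrench, proprioception
        if t - last["vlm"] >= 1.0 / RATES["vlm"]:
            cache["global"] = vlm(obs["third_person_rgb"])          # frozen VLM
            last["vlm"] = t
        if t - last["hand_cam"] >= 1.0 / RATES["hand_cam"]:
            cache["local"] = hand_encoder(obs["first_person_rgb"])  # ResNet-18
            last["hand_cam"] = t
        fast = ft_encoder(obs["wrench"], obs["proprio"])            # every tick
        send_action(policy(cache["global"], cache["local"], fast))
        time.sleep(max(0.0, 1.0 / RATES["policy"] - (time.monotonic() - t)))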
In addition to temporal sensor fusion, we also spatially fuse local and global sensing information, i.e., we fuse information extracted from third-person views with first-person view information and vice versa. We achieve this using two small camera-specific transformers together with cross-attention. Each transformer uses self-attention within each modality (for its associated camera view) and cross-attention with the other modality (the other camera view). As shown in Figure 2, we read out the CLS token from each transformer and concatenate the two with the force-torque and proprioception embedding. This concatenated embedding is then processed using a 2-layer MLP policy head to output the robot actions. Please refer to Appendix B for further details on the architecture.
Data Augmentations: Data augmentations have been shown to be helpful for single-task learning of manipulation tasks [55, 38]. However, naively using image augmentations can be detrimental for learning generalizable multi-task policies. This is because pixel-level augmentations, such as color jitter or grayscale, can result in semantic changes in the overall scene. Such semantic changes can lead to a mismatch between the input image and the language instruction provided for the given task. For instance, a demonstration shows "move to red block" but pixel augmentations can change the red block's color. To avoid this while still utilizing the benefits of augmentations, we use two different sets of augmentations. First, for third-person cameras we only use image-level augmentations (e.g., random crops, shifts). This avoids mismatch between image and text instructions and allows the visual-language grounding from the pretrained VLM to be utilized. Second, for the first-person camera we use both image-level and pixel-level augmentations (color jitter, grayscale). Since these augmentations lead to image-text mismatch, they further force our agent to use the third-person camera view for coarse localization, while relying on the in-hand view only for finer, precise motions. Strong pixel-level augmentations on the first-person view further make the in-hand model invariant to texture, relying more on edges and corners [56]. This, as we show empirically, improves the generalization performance of our model on held-out object variations.
Training and Inference: We use behavior cloning from expert demonstrations to train our model. We record data from each sensor at its respective frequency. Specifically, camera images are recorded at 30 Hz and force-torque feedback at 250 Hz. To match the slower processing times of larger models during inference, we sub-sample the third-person camera images to 5 Hz and the first-person camera images to 20 Hz. We use the AdamW [57] optimizer with learning rate 1 × 10⁻⁴ and weight decay 0.01. We train our model for 60 epochs, using a linear warmup starting with learning rate 0 for 5 epochs, and then decay the learning rate using a cosine scheduler. We use a GTX 1080 Ti for inference. Overall, our architecture has ≈250M parameters. The pretrained vision-language model has ≈150M parameters (for MDETR) with an inference time of ≈0.1 seconds. The first-person camera model has ≈25M parameters with an inference time of 0.04 seconds. Finally, the force-torque and proprioception model along with the policy head have a total of ≈250K parameters with an inference time of ≈0.005 seconds. This allows the actions to be inferred at a max frequency of ≈200 Hz, although we use a reduced frequency of ≈75 Hz, which was sufficient for our tasks. A minimal sketch of the two-stream cross-attention fusion described above follows.
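Below is a minimal PyTorch sketch of camera-specific streams with self-attention within a view and cross-attention to the other view, followed by CLS readout. Layer sizes, layer counts, and the absence of residual connections and normalization are simplifying assumptions; Appendix B of the paper has the actual details.

# Sketch of the two-stream spatial fusion (illustrative dimensions).
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_third = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_first = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_third = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_first = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cls_third = nn.Parameter(torch.zeros(1, 1, dim))
        self.cls_first = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, third_tokens, first_tokens):
        b = third_tokens.shape[0]
        t3 = torch.cat([self.cls_third.expand(b, -1, -1), third_tokens], dim=1)
        t1 = torch.cat([self.cls_first.expand(b, -1, -1), first_tokens], dim=1)
        t3, _ = self.self_third(t3, t3, t3)   # self-attention within third-person view
        t1, _ = self.self_first(t1, t1, t1)   # self-attention within first-person view
        c3, _ = self.cross_third(t3, t1, t1)  # third-person queries attend to hand view
        c1, _ = self.cross_first(t1, t3, t3)  # and vice versa
        return c3[:, 0], c1[:, 0]             # CLS readout from each stream

tokens3 = torch.randn(2, 49, 256)   # e.g. 7x7 feature map from the frozen VLM
tokens1 = torch.randn(2, 49, 256)   # feature map from the ResNet-18 hand view
g, l = TwoStreamFusion()(tokens3, tokens1)
print(g.shape, l.shape)             # torch.Size([2, 256]) twice

The CLS outputs would then be concatenated with the force-torque and proprioception embedding and fed to the 2-layer MLP policy head, as described in Section 3.1.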
4 Experimental Setup
We first identify the key research questions that we aim to evaluate:
Q1: How does multi-spatial-resolution sensing benefit learning language-conditioned multi-task (MT) manipulation policies for precise tasks? Specifically, we aim to evaluate the utility of multi-spatial-resolution sensing for tasks that involve visual occlusions, partial observability, and precision.
Q2: How does multi-temporal-resolution sensor fusion benefit learning reactive manipulation tasks? Specifically, we evaluate how our architecture enables closed-loop control for reactive tasks.
Q3: How well does our approach generalize to tasks with novel visual-semantic targets? Specifically, we evaluate our approach's robustness to distribution shifts, e.g., object colors and geometries.

4.1 Environments
To evaluate the above questions we use three task settings: 1) MT-Precise: precise manipulation tasks, 2) MT-Dynamic: dynamic manipulation tasks, and 3) MT-Coarse: coarse table-top manipulation tasks. Below we detail each environment and discuss its usage in answering the above questions.
MT-Precise: For precise manipulation we use 4 spatial-precision tasks from RLBench [58] (see Figure 3 (Left)): square block insertion, pick up small objects, shape sorting, and unplug USB. We use this task domain to answer Q1. Specifically, we evaluate the need for multi-spatial-resolution sensing in manipulation tasks that require precise feedback and have partial observability, i.e., objects can go out of view of the first-person camera.
MT-Dynamic: We use the CMU ballbot [59] platform to perform dynamic pickup tasks in simulation (Figure 3 (Middle-Right)). We choose the ballbot since it is a highly dynamic robot with an omnidirectional base (ball) capable of performing fast, reactive and interactive tasks. We consider the task of dynamically picking up an object, which requires quick reaction to contact with the object and grasping it to prevent toppling the object over. We use this setting to answer Q2.
MT-Coarse: We consider a canonical table-top manipulation setting ([60, 61]) involving coarse pick-and-place manipulation tasks with diverse objects: blocks, shoes, mugs, cylinders. We use this environment to answer Q1 and Q3. Specifically, for Q1 we contrast these coarse manipulation tasks with high-precision tasks to evaluate the utility of multi-spatial-resolution sensing.
Figure 4: Temporal resolution and robustness baselines used to compare against our multi-resolution approach.
Real-World Setup: We evaluate our approach on two real-world tasks. For precise manipulation (Q1) we use an insertion task to insert different blocks into cylindrical pegs (Figure 3 (Right, top)). We also evaluate generalization abilities (Q3) using a pickup task, wherein we use 2 train objects and evaluate the learned policy on 8 objects with different geometric (shape, size) and visual (color, texture) features.
Additional details on each environment are provided in Appendix A.

4.2 Baselines
We compare our approach against recent methods which focus on learning generalizable policies in multi-task settings. We compare against RT-1 [1], which proposes a transformer-based policy, and against BC-Zero [2], which uses language conditioning via FiLM [54]. However, both [1, 2] focus on coarse manipulation tasks and operate at a single resolution (both temporal and spatial). To the best of our knowledge, no prior work focuses on a multi-resolution approach for multi-task learning. Hence, to highlight the benefit of each component of our approach and answer the questions posed in Section 4, we modify our approach along different axes and propose additional baselines below.
Spatial Resolution baselines: To verify the utility of multiple spatial resolutions (Q1), we modify our approach and remove one sensory modality at a time. We use π−Ih, π−I3, and π−FT to refer to policies which remove the first-person (hand) view, the third-person view, and force-torque feedback, respectively.
Temporal Resolution baselines: To answer Q2 we compare against single-temporal-resolution approaches (Figure 4 (Left)), i.e., where all modalities (including force-torque) operate at the same frequency. We introduce two baselines: 1) πhigh-res: small models with fast inference for both cameras (20 Hz), and 2) πlow-res: larger models with slow inference for both cameras (5 Hz).
Robustness baselines: We compare the visual-semantic generalization ability of our approach (Q3) against two baselines (Figure 4 (Right)): 1) πmulti-res-FT: finetune the pretrained VLM model; 2a) πI3-Frozen: uses only the third-person camera (and force-torque) and keeps the pretrained model frozen; 2b) πI3-FT: uses only the third-person camera (and force-torque) but finetunes the pretrained model.
Metrics: We use task success as the evaluation metric and report mean success over all tasks. During training, we evaluate the policy every 4 epochs and report the average over the top-5 mean success rates across all evaluation epochs. For task generalization (Q3) we evaluate the trained policy on novel visual-semantic tasks not seen during training. For all evaluations we use 20 rollouts per task. Further training details are provided in Appendix B.1.

5 Experimental Results
First, we evaluate the effectiveness of our multi-resolution approach against common multi-task baselines, RT-1 [1] and BC-Zero [2]. We then present results for each research question. For qualitative results see: https://sites.google.com/view/multi-res-real-time-control.

5.1 Comparison to Multi-Task Baselines
Table 1 shows the results for the multi-task baselines RT-1 [1] and BC-Zero [2] across all task domains. We note that for coarse manipulation tasks (MT-Coarse) these baselines, which use single camera views, can perform quite well. This is because these tasks only require coarse localization of the target object for task completion. However, for precise manipulation tasks (MT-Precise), such baselines perform quite poorly, since these tasks require fine-grained grasping (as many objects are ≈1 cm in size) and insertion for successful task completion.
Table 2: Results for multi-spatial-resolution experiments (Section 5.2). Here, − implies that we remove this input from the policy; thus, π−Ih implies that the policy only operates on third-person camera views and force-torque feedback.

             π−Ih   π−I3   π−FT   Ours
MT-Coarse    74.5   41.0   81.8   82.0
MT-Precise    7.7   29.6   56.1   55.0
MT-Dynamic   65.8   27.5   33.2   73.6

Table 3: Results for multi-temporal-resolution experiments (Section 5.2). Here, both πlow-res and πhigh-res are single-resolution approaches which run at 5 Hz and 20 Hz respectively, while ours is a multi-resolution approach.

             πlow-res  πhigh-res  Ours
MT-Coarse      82.0      81.0     82.0
MT-Precise     53.4      56.2     55.0
MT-Dynamic      4.2      12.2     73.6

Table 4: Robustness experiment results; each cell shows train / heldout success rate (Section 5.2).

                       πI3-Frozen    πI3-FT       πmulti-res-FT   Ours
MT-Coarse (Visual)     74.5 / 7.1    81.8 / 25.8  82.4 / 45.6     82.0 / 72.3
MT-Coarse (Geometry)   44.2 / 16.8   56.4 / 18.4  60.7 / 31.9     58.9 / 44.6
MT-Precise (Visual)     7.7 / 4.5    15.6 / 9.2   56.4 / 31.9     55.0 / 48.1

Table 1: Task success comparison for multi-task baselines across all task domains.

        MT-Coarse  MT-Precise  MT-Dynamic
RT-1      81.0       12.5        4.5
BC-Z      74.1        7.8        4.8
Ours      82.0       55.0       73.6

On the other hand, our multi-resolution approach performs much better, as it uses the first-person camera view and force feedback for finer grasping and insertion. For dynamic tasks (MT-Dynamic), our method considerably outperforms the baselines (73.6 vs. 4.8 for the best baseline in Table 1). This is because dynamic tasks require a reactive response to contact events. Only our multi-temporal-resolution approach utilizes high spatial and temporal resolution sensing, enabling fast response to contact events.

5.2 Additional Baseline Comparisons
Q1 – Spatial Resolution Experiments: We now compare against the spatial resolution baselines discussed in Section 4.2. For this set of baselines, all methods use multi-temporal-resolution sensing with high-frequency force-torque feedback. Table 2 shows results across all task settings. For MT-Coarse we see that only using a first-person camera (π−I3) performs poorly. This is because of partial observability in this view, i.e., the target object can be out of view and lead to task failure. On the other hand, for MT-Precise (Row 2), only using the first-person camera (π−I3) performs better (≈2×) than using only the third-person camera (π−Ih). This is because MT-Precise tasks require finer motions which are hard to perform from the low-spatial-resolution (third-person) view only. Further, for dynamic tasks (Row 3), using first-person views alone again suffers because of partial observability.
Q2 – Temporal Resolution Experiments: Table 3 compares against the single-temporal-resolution baselines (πlow-res and πhigh-res). It shows that for the coarse and precise domains, single-resolution baselines perform as well as our multi-resolution approach. This is because tasks in both domains are quasi-static, and hence fast reaction to contact events is not critical for task success. On the other hand, for dynamic tasks (Table 3, bottom row), since fast response to contact events is necessary (to avoid failures such as object toppling, see Figure 7 in the Appendix), our multi-resolution approach performs better than both πlow-res (5 Hz) and πhigh-res (20 Hz), since it incorporates force feedback at 75 Hz.
Q3 – Robustness Experiments: Table 4 compares results (train / heldout) for visual-semantic generalization against the robustness baselines in Section 4.2. As noted previously, for these experiments we evaluate the trained policies on heldout environments (see Appendix B.1 for details). We note that our approach, with the frozen pretrained model, generalizes better than the finetuned model πmulti-res-FT.
This shows the ability of our approach to maintain the generalization capabilities of the pretrained VLM, compared to the finetuned model, which suffers from 'forgetting' and representation drift towards the training tasks. Additionally, from column 1 and column 2, we again note that the finetuned πI3-FT model suffers a larger decrease in performance as compared to πI3-Frozen. Finally, comparing πI3-FT against πmulti-res-FT, we see that even with finetuning, our multi-spatial-resolution approach generalizes better because it can utilize first-person views for improved task success.
Figure 5: Left: Ablation results (see Section 5.3). Right: Robustness result for real-world pickup. (Panels: (a) ablation results; (b) ablation results using the pre-trained CLIP model; (c) robustness results (Q3) for real-world pickup.)
Real-World Experiments: We evaluate our approach in the real world on two tasks, pickup and peg-insertion [62]. Table 5 shows a comparison against the spatial resolution baselines. We note that our approach, with multi-spatial-resolution sensing, performs ≈3× better than the baselines on both tasks.

Table 5: Mean (stdev) results (using 2 seeds) for multi-spatial resolution on real-world tasks.

             π−Ih        π−I3         π−FT        Ours
Pickup       7.5 (3.5)   20.0 (14.1)  67.5 (3.5)  75.0 (7.0)
Peg-Insert   10.0 (0.0)  12.5 (4.6)   42.5 (3.5)  67.5 (3.5)

We see that given limited demonstrations, both π−I3 and π−Ih fail to perform well (across both tasks). On the other hand, removing force-torque feedback (π−FT) only affects performance on the insertion task (≈25 points lower), since this task relies more on contact feedback. Additionally, Figure 5 (c) plots the robustness result for the pickup task. As before, we see that our approach with the frozen model performs better. See the website for qualitative results.

5.3 Ablations
We further ablate the different components of our proposed approach. Due to space limitations we only summarize key findings and provide details in Appendix C.2.
Pixel-Level Augmentations: For pixel-level augmentations (Figure 5 (a), blue bar) we see little difference in training performance but a larger increase (≈15%) in generalization performance.
Spatial Sensor Fusion using Cross-Attention: Figure 5 (a) (green bar) shows that using concatenation instead of cross-attention reduces performance (≈10%) on both train and heldout tasks.
Effect of Pretraining: We also evaluate the effect of using pretrained VLMs. Figure 5 (a) (yellow bar) shows that not using a pretrained model (no vision-language grounding) incurs little drop in train performance but a significant drop (3× worse) in generalization, i.e., heldout performance.

6 Conclusion and Limitations
Our work proposes using sensing modalities at multiple spatial and temporal resolutions for learning multi-task manipulation policies. Our multi-resolution approach captures information at multiple hierarchies and allows the robot to perform both coarse and fine motions with high reactivity and real-time control. To learn generalizable multi-task policies, we further leverage off-the-shelf pretrained vision-language models and freeze them to maintain their robustness. Our work has several limitations. While our proposed framework is general for multi-spatial sensing, we only rely on a global third-person camera and a local first-person camera view. Further local sensing using vibro-tactile sensors [63, 64, 65] was not explored.
Further, it is unclear if our approach of using cross-attention for sensor fusion will be optimal for more than 2 sensors. Additionally, while our multi-resolution policy allows us to learn robust policies, not all sensing modalities will be available for all tasks. Thus, future work should explore adapting to scenarios with missing sensing modalities.

Acknowledgements
This project was supported by NSF Grants No. CMMI-1925130 and IIS-1956163, ONR Grant No. N00014-18-1-2775, and ARL grant W911NF-18-2-0218 as part of the A2I2 program.

References
[1] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[2] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991–1002. PMLR, 2022.
[3] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
[4] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[5] J. Li, R. Selvaraju, A. Gotmare, S. Joty, C. Xiong, and S. C. H. Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694–9705, 2021.
[6] J.-B. Alayrac, J. Donahue, P. Luc, A. Miech, I. Barr, Y. Hasson, K. Lenc, A. Mensch, K. Millican, M. Reynolds, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.
[7] C. Jia, Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. Le, Y.-H. Sung, Z. Li, and T. Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.
[8] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR, 2022.
[9] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. arXiv preprint arXiv:2209.05451, 2022.
[10] M. Sharma, C. Fantacci, Y. Zhou, S. Koppula, N. Heess, J. Scholz, and Y. Aytar. Lossless adaptation of pretrained vision models for robotic manipulation. In The Eleventh International Conference on Learning Representations.
[11] M. Wortsman, G. Ilharco, J. W. Kim, M. Li, S. Kornblith, R. Roelofs, R. G. Lopes, H. Hajishirzi, A. Farhadi, H. Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959–7971, 2022.
[12] S. Tellex, T. Kollar, S. Dickerson, M. Walter, A. Banerjee, S. Teller, and N. Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 25, pages 1507–1514, 2011.
[13] M. R. Walter, S. M. Hemachandra, B. S. Homberg, S. Tellex, and S. Teller. Learning semantic maps from natural language descriptions. Robotics: Science and Systems, 2013.
[14] C. Matuszek, L. Bo, L. Zettlemoyer, and D. Fox.
Learning from unscripted deictic gesture and language for human-robot interactions. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 28, 2014.
[15] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302, 2022.
[16] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, K.-H. Lee, S. Levine, Y. Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Sermanet, N. Sievers, C. Tan, A. Toshev, V. Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, M. Yan, and A. Zeng. Do as I can and not as I say: Grounding language in robotic affordances. In arXiv preprint arXiv:2204.01691, 2022.
[17] S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. CLIP on wheels: Zero-shot object navigation as object localization and exploration. arXiv preprint arXiv:2203.10421, 2022.
[18] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.
[19] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2Motion: From natural language instructions to feasible plans. arXiv preprint arXiv:2303.12153, 2023.
[20] A. Singh, R. Hu, V. Goswami, G. Couairon, W. Galuba, M. Rohrbach, and D. Kiela. FLAVA: A foundational language and vision alignment model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15638–15650, 2022.
[21] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3M: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022.
[22] S. Parisi, A. Rajeswaran, S. Purushwalkam, and A. Gupta. The unsurprising effectiveness of pre-trained vision models for control. arXiv preprint arXiv:2203.03580, 2022.
[23] K. Zheng, X. Chen, O. C. Jenkins, and X. E. Wang. VLMbench: A compositional benchmark for vision-and-language manipulation. arXiv preprint arXiv:2206.08522, 2022.
[24] T. Xiao, H. Chan, P. Sermanet, A. Wahid, A. Brohan, K. Hausman, S. Levine, and J. Tompson. Robotic skill acquisition via instruction augmentation with vision-language models. arXiv preprint arXiv:2211.11736, 2022.
[25] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn, and K. Hausman. Open-world object manipulation using pre-trained vision-language model. In arXiv preprint, 2023.
[26] O. Mees, L. Hermann, E. Rosete-Beas, and W. Burgard. CALVIN: A benchmark for language-conditioned policy learning for long-horizon robot manipulation tasks. IEEE Robotics and Automation Letters, 7(3):7327–7334, 2022.
[27] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022.
[28] M. Sharma, C. Fantacci, Y. Zhou, S. Koppula, N. Heess, J. Scholz, and Y. Aytar. Lossless adaptation of pretrained vision models for robotic manipulation. arXiv preprint arXiv:2304.06600, 2023.
[29] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
[30] D. Kalashnikov, A. Irpan, P.
Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, et al. QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293, 2018.
[31] A. X. Lee, C. Devin, Y. Zhou, T. Lampe, K. Bousmalis, J. T. Springenberg, A. Byravan, A. Abdolmaleki, N. Gileadi, D. Khosid, C. Fantacci, J. E. Chen, A. Raju, R. Jeong, M. Neunert, A. Laurens, S. Saliceti, F. Casarini, M. Riedmiller, R. Hadsell, and F. Nori. Beyond pick-and-place: Tackling robotic stacking of diverse shapes. In Conference on Robot Learning (CoRL), 2021. URL https://openreview.net/forum?id=U0Q8CrtBJxJ.
[32] B. H. Yoshimi and P. K. Allen. Active, uncalibrated visual servoing. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, pages 156–161. IEEE, 1994.
[33] D. Kragic, H. I. Christensen, et al. Survey on visual servoing for manipulation. Computational Vision and Active Perception Laboratory, Fiskartorpsv, 15:2002, 2002.
[34] B. J. Nelson, J. D. Morrow, and P. K. Khosla. Improved force control through visual servoing. In Proceedings of 1995 American Control Conference - ACC'95, volume 1, pages 380–386. IEEE, 1995.
[35] M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. Riedmiller. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817, 2017.
[36] A. S. Morgan, B. Wen, J. Liang, A. Boularias, A. M. Dollar, and K. Bekris. Vision-driven compliant manipulation for reliable, high-precision assembly tasks. arXiv preprint arXiv:2106.14070, 2021.
[37] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4613–4619. IEEE, 2021.
[38] O. Spector and D. Di Castro. InsertionNet - a scalable solution for insertion. IEEE Robotics and Automation Letters, 6(3):5509–5516, 2021.
[39] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943–8950. IEEE, 2019.
[40] N. Fazeli, M. Oller, J. Wu, Z. Wu, J. B. Tenenbaum, and A. Rodriguez. See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion. Science Robotics, 4(26):eaav3123, 2019.
[41] R. Calandra, A. Owens, D. Jayaraman, J. Lin, W. Yuan, J. Malik, E. H. Adelson, and S. Levine. More than a feeling: Learning to grasp and regrasp using vision and touch. IEEE Robotics and Automation Letters, 3(4):3300–3307, 2018.
[42] Q. Li, O. Kroemer, Z. Su, F. F. Veiga, M. Kaboli, and H. J. Ritter. A review of tactile information: Perception and action through touch. IEEE Transactions on Robotics, 36(6):1619–1634, 2020.
[43] T. Narita and O. Kroemer. Policy blending and recombination for multimodal contact-rich tasks. IEEE Robotics and Automation Letters, 6(2):2721–2728, 2021.
[44] S. Singh, F. M. Ramirez, J. Varley, A. Zeng, and V. Sindhwani. Multiscale sensor fusion and continuous control with neural CDEs. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 10897–10904. IEEE, 2022.
[45] K.-i. Funahashi and Y. Nakamura. Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 6(6):801–806, 1993.
[46] P. Kidger, J. Morrill, J. Foster, and T. Lyons.
Neural controlled differential equations for irregular time series. Advances in Neural Information Processing Systems, 33:6696–6707, 2020.
[47] J. Shi, J. Z. Woodruff, P. B. Umbanhowar, and K. M. Lynch. Dynamic in-hand sliding manipulation. IEEE Transactions on Robotics, 33(4):778–795, 2017.
[48] C. Mucchiani and M. Yim. Dynamic grasping for object picking using passive zero-DOF end-effectors. IEEE Robotics and Automation Letters, 6(2):3089–3096, 2021.
[49] S. Saxena, A. LaGrassa, and O. Kroemer. Learning reactive and predictive differentiable controllers for switching linear dynamical models. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7563–7569. IEEE, 2021.
[50] R. Shu and R. Hollis. Momentum based whole-body optimal planning for a single-spherical-wheeled balancing mobile manipulator. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3221–3226. IEEE, 2021.
[51] A. Kamath, M. Singh, Y. LeCun, G. Synnaeve, I. Misra, and N. Carion. MDETR - modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780–1790, 2021.
[52] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko. End-to-end object detection with transformers. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16, pages 213–229. Springer, 2020.
[53] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[54] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. FiLM: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[55] M. Laskin, K. Lee, A. Stooke, L. Pinto, P. Abbeel, and A. Srinivas. Reinforcement learning with augmented data. Advances in Neural Information Processing Systems, 33:19884–19895, 2020.
[56] N. Somavarapu, C.-Y. Ma, and Z. Kira. Frustratingly simple domain generalization via image stylization. arXiv preprint arXiv:2006.11207, 2020.
[57] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[58] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020.
[59] U. Nagarajan, G. Kantor, and R. Hollis. The ballbot: An omnidirectional balancing mobile robot. The International Journal of Robotics Research, 33(6):917–930, 2014.
[60] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020.
[61] Y. Zhou, S. Sonawani, M. Phielipp, S. Stepputtis, and H. B. Amor. Modularity through attention: Efficient training and transfer of language-conditioned policies for robot manipulation. arXiv preprint arXiv:2212.04573, 2022.
[62] K. Zhang, M. Sharma, J. Liang, and O. Kroemer. A modular robotic arm control stack for research: Franka-Interface and FrankaPy. arXiv preprint arXiv:2011.02398, 2020.
[63] W. Yuan, S. Dong, and E. H. Adelson. GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors, 17(12):2762, 2017.
[64] A. Yamaguchi and C. G. Atkeson.
Combining finger vision and optical tactile sensing: Reducing and handling errors while cutting vegetables. In 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), pages 1045–1051. IEEE, 2016.
[65] K. Zhang, M. Sharma, M. Veloso, and O. Kroemer. Leveraging multimodal haptic sensory data for robust cutting. In 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), pages 409–416. IEEE, 2019.
[66] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh, and V. Vanhoucke. Google Scanned Objects: A high-quality dataset of 3D scanned household items. In 2022 International Conference on Robotics and Automation (ICRA), pages 2553–2560. IEEE, 2022.
[67] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012. doi:10.1109/IROS.2012.6386109.
[68] E. Coumans and Y. Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2022.
[69] T. Lüddecke and A. Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086–7096, 2022.

A Environment Details
In this section we provide further details on the different environments used in our experiments.

A.1 MT-Coarse Manipulation
For coarse manipulation tasks we focus on a variety of objects including blocks, mugs, cups, and shoes (both men's and women's shoes). As noted in the main paper, for this set of objects we focus on pick-and-place skills. We did experiment with more complex contact-rich skills (e.g., pushing, stacking), but we found the physics to be unstable with more complex objects (e.g., cups). For instance, pushing cups would almost always topple them and make them roll over. For future work, we hope to make our skills more robust.
Specifically, we use fixed-size blocks with different semantic colors, 4 mugs, 4 cups, and 4 shoes. We use Google Scanned Objects [66] to collect the non-block objects and use MuJoCo [67] to simulate our environment. We use the latest MuJoCo releases to import meshes into the simulator. Each environment in this set of tasks is created by first selecting a target object type and then selecting a target object from the set of objects. We then select 3-5 distractor objects to fill the scene. These objects are uniformly selected from the remaining objects.

A.2 MT-Precise Manipulation
As noted in the main paper, for precise manipulation tasks we use the spatial precision set of tasks from RLBench [58]. Overall, we use 4 tasks (see Figure 3 (Left)): square block insertion, pick up small objects, shape sorting, and unplug USB from computer. We avoid using the motion-planner-augmented approach for solving these tasks and instead opt for learning reactive closed-loop control policies. We use delta end-effector actions for our tasks. Additionally, we use the standard front and wrist-mounted cameras, along with proprioceptive and force-torque feedback, as policy input.
However, directly using end-effector actions increases the policy horizon significantly. Moreover, naively using the original input distribution for each task also requires learning full 6-DOF policies. Both of these can significantly increase the data requirements to learn the manipulation policy.
To avoid this, we restrict the starting distributions for each task such that the objects are spawned in a slightly narrower region in front of the robot. We further make other task-specific changes, detailed below, such that the robot can perform each task without changing hand orientations.
Insert Onto Square Peg: For this task we restrict the orientations of the square ring (blue object) and the peg on which to insert it. This allows the robot to perform the task without changing gripper orientation. Further, we use a region of 40cm×30cm in front of the robot to spawn both the base and the ring. Finally, the default task configuration provides 20 different peg colors, of which we use the first 10 colors for training and the remaining 10 colors for robustness experiments.
Pick and Lift Small: For this task, we again use a region of 40cm×30cm in front of the robot to spawn all objects. We also restrict the orientation of each object such that it can be grasped directly without requiring gripper orientation changes.
Shape-Sorting: The default configuration for the shape-sorting task considers 5 differently shaped objects (see Figure 3 Bottom-Left): square, cylinder, triangle, star, and moon. In the default RLBench configuration most objects directly stick to the robot finger and are simply dropped into the hole for task completion. However, with closed-loop control we find that the non-symmetric objects (star, triangle, and moon) can have significant post-grasp displacement, such that it is impossible to insert them without changing gripper orientation. Hence, we exclude these objects from evaluation and only use the symmetric square and cylinder objects.
Take USB Out: This task requires the robot to unplug a USB stick inserted into a computer. However, the default configuration for this task requires 6-DOF control. To avoid this, we create smaller computer and USB assets and mount them vertically on the table such that the USB stick can be unplugged without changing hand orientation. See Figure 3 (Bottom-Right) for visualization.

Figure 6: Left: Real-world environment setup for the pickup and insertion tasks, with third-person (red) and first-person (blue) camera views. Middle: Example object set (train and test) used for the real-world pickup task. Right: Example objects used for MT-Coarse.

A.3 MT-Dynamic Manipulation
This task involves using the CMU Ballbot in simulation (PyBullet [68]) to perform a dynamic pickup task. The task involves picking up a block that is placed on a table in front of the ballbot. We use two blocks (red and blue) in this task and use language instructions to specify which object to pick up. The initial conditions are set such that the table and objects are always out of the reach of the ballbot arms and the ballbot has to roll forward to pick up the objects. We use a statically mounted camera looking at the table and the ballbot as the third-person camera, and the camera located on the turret of the ballbot as the first-person camera. The turret tilt is adjusted such that the objects on the table are initially out of the view of the turret camera; only when the ballbot starts moving towards the table do the objects come into view. The third-person camera is always able to view both the objects and the ballbot.
We use task-space control to control the ballbot end-effector, while a center-of-mass balancing controller is always running in a high-frequency feedback loop to balance the ballbot.

B Architecture Details
Section 3 discusses the overall architecture used in our work. To recall, our proposed architecture uses a multi-resolution approach with multiple sensors, each with a different fidelity. We process each sensor with a separate network which is conditionally initialized using a pre-trained vision-language model. The output of each vision model is flattened to create a set of patches. For the DETR-based model [51, 52] we use a ResNet-101 backbone, flatten the output layer into 49 patches, and add positional embeddings to it. For CLIP [4] we use a ViT-B model and use hierarchical features from the 5th, 8th, and 11th layers. Since MDETR already does vision-language fusion using a transformer, we directly use its output. However, since CLIP only weakly associates vision and language at the last layer, we additionally use FiLM layers to condition the output. Our use of FiLM is similar to previous models [69]. For each camera modality we use a small transformer with multi-head attention. Each transformer uses an embedding size of 256 and 8 heads. We use post-layer-norm in each transformer layer. Further, each transformer layer uses cross-attention with the other camera. Overall, we use 3 transformer layers for each camera modality. Our force-torque and proprioceptive input is concatenated and mapped into 256 dimensions using a linear layer. We concatenate the readout tokens from each camera transformer with the force-torque embedding. This 3×256-dimensional embedding is then processed by 2 linear layers of size 512, which output the robot action.
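To make the fusion step above concrete, the following is a minimal sketch of how the per-camera readout tokens and the robot-state embedding could be combined into an action. It is illustrative only: the module names, the state and action dimensionalities, and the exact layer arrangement are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class FusionActionHead(nn.Module):
    # Hypothetical sketch of the head described above: two per-camera
    # transformers each yield a 256-d readout token; proprioceptive +
    # force-torque state is mapped to 256-d by a linear layer; the
    # concatenated 3x256 vector is decoded into the robot action.
    def __init__(self, state_dim=9, action_dim=7, embed_dim=256):
        super().__init__()
        self.state_proj = nn.Linear(state_dim, embed_dim)
        self.decoder = nn.Sequential(
            nn.Linear(3 * embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, action_dim),  # one reading of "2 linear layers of size 512"
        )

    def forward(self, third_person_token, hand_token, proprio_ft):
        # third_person_token, hand_token: (B, 256) readout tokens from the
        # per-camera transformers; proprio_ft: (B, state_dim) robot state.
        state = self.state_proj(proprio_ft)
        fused = torch.cat([third_person_token, hand_token, state], dim=-1)
        return self.decoder(fused)
```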
Input: For each of our camera sensors we use an image of size 224×224. For proprioceptive input we use the end-effector position of the arm, while for force-torque input we use the 6-dimensional force-torque data. We use cropping augmentation for both camera sensors. Specifically, we first resize the image to 226 and then do a random crop with shift = 8. For more aggressive pixel-level augmentations we stochastically apply grayscale and use color jitter with brightness ∈ (0.4, 0.8), contrast ∈ (0.4, 0.8), saturation ∈ (0.4, 0.6), and hue ∈ (0.0, 0.5). These augmentations significantly change the underlying visual semantics of the task.

Table 6: Hyperparameters used for our architecture and model training.
Key | Value
batch size | 16
proprio and force-torque embedding dim. | 256
camera-transformer embedding dim. | 256
camera-transformer feedforward dim. | 768
number of transformer layers | 3
learning rate | 0.0001
warmup epochs | 5
total epochs | 60
optimizer | AdamW
weight decay | 0.01
scheduler | cosine
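A minimal sketch of the pixel-level augmentations described above, written with torchvision, is given below. The text does not specify application probabilities or the exact crop-shift mechanics, so those values are assumptions.

```python
from torchvision import transforms

# Hypothetical reconstruction of the augmentations described above; the
# RandomApply / RandomGrayscale probabilities are assumptions.
pixel_level_augment = transforms.Compose([
    transforms.Resize(226),             # resize, then crop back down for a small shift
    transforms.RandomCrop(224),
    transforms.RandomApply([
        transforms.ColorJitter(brightness=(0.4, 0.8),
                               contrast=(0.4, 0.8),
                               saturation=(0.4, 0.6),
                               hue=(0.0, 0.5)),
    ], p=0.8),
    transforms.RandomGrayscale(p=0.2),  # stochastic grayscale
])
```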
B.1 Training Details
In this section we provide details on the demonstrations (for each environment type) used to train our approach. Further, we also provide details on the train and heldout configurations used for robustness evaluation.
MT-Coarse: As noted above in Appendix A.1, we use multiple different objects to train and evaluate our policy. Each environment is created by first sampling a target object and then a set of distractor objects. For each environment and skill combination we collect 20 demonstrations. Overall, this gives us ≈1000 demonstrations across all tasks. We then learn one policy across all tasks.
MT-Precise: For the spatial precision tasks from RLBench [58] we use 4 different tasks. As discussed in Section A.2, each task has its own set of variations. For training our multi-task policy we try to balance the number of demonstrations from each task. For the square peg insertion (insert onto square peg) task we use the first 10 variations for training and gather 25 trajectories per variation. Each of the other tasks has fewer than 4 variations, hence for each of these tasks we use 100 demonstrations for training. To test visual-semantic robustness for these tasks (Section 5.2) we use the insert-onto-square-peg task, since only this task has any semantic variations. We use the remaining 10 peg colors (i.e., 10 heldout variations) to test each approach.
MT-Dynamic: To collect expert demonstrations, we sample the locations of the objects on the table in a 70cm×20cm region and sample the initial ballbot location in a 50cm×50cm region. We collect 50 demonstrations for each task setting (each block). As noted earlier, the third-person camera is used at a frequency of 5Hz, the turret camera is used at 20Hz, and proprioception and force-torque feedback are used at 75Hz.
Real-World: For real-world tasks we collect data using teleoperation with a Leap Motion device which can track hand movements at up to 100Hz. We map these movements to robot movements and collect proprioceptive and force-torque data at 75Hz, while both cameras are recorded at 30Hz. To collect data for the pickup tasks we use two blocks with different shapes and different colors. The green and pink blocks in Figure 6 (Right) were used to collect all training data, while evaluation happened on 8 other blocks, each with a different shape and color. For training our policies we collect 60 demonstrations for each pickup variation and 50 demonstrations for the insertion task. We note that the initial state distribution for insertion was narrower than for pickup and hence it required fewer demonstrations.
Metrics: We use task success as the evaluation metric. Since we use a multi-task setting, we report mean success over all tasks. During training, we evaluate the policy every 4 epochs on all train tasks. We report the average over the top-5 mean success rates across all evaluation epochs. For task generalization results (Q3) we use the trained policy and evaluate it on novel visual-semantic tasks which were never seen during training. Hence, for Q3 we report task success on novel unseen tasks. For all evaluations we use 20 rollouts per task.

Figure 7: Example failure case for the MT-Dynamic (Ballbot) task. If the robot approaches the object but does not react fast enough to the object contact, the block can topple, resulting in task failure.

B.2 Implementation Details
In this section, we discuss our real-robot implementation details. In our implementation, the real-time control loop is composed of a low-level task-space impedance controller and a high-level neural network controller. The low-level controller operates at 1kHz using a real-time kernel and sends control commands to the Franka Panda's control interface (FCI) [62]. Our neural-network controller implementation can operate at up to a maximum of 100Hz given communication latency. Specifically, for our experiments we run the neural network controller at 75Hz. We use fixed low impedance values (Kp: 350) to avoid damaging the robot during fast execution of contact-rich tasks.
Neural network controller implementation: For our real-robot neural-network controller implementation we follow a multi-threaded architecture. Robot state information such as proprioceptive data and force-torque data is published at 100Hz, while camera images are published at 30Hz. Each sensor modality is appended to a separate fixed-size time-stamped buffer. We process each modality independently in a multi-threaded manner by extracting the latest time-stamped data from the respective buffer.
Camera images are processed on separate threads using their respective neural networks, and we save the network outputs for future processing. More specifically, we process images from the third-person camera using a large VLM and save a set of visual-language representations from its output in a buffer. This thread is limited by the inference speed of the large VLM and operates at 5Hz. We process the image from the in-hand camera in a separate thread using a small ResNet-based model to get hand-camera image representations. On the same thread, we further process these hand-camera representations with the cached vision-language representations using cross-attention layers to get a multi-modal fused visual-language output, which is added to a fixed-size buffer. This thread operates at 20Hz.
Finally, the high-level neural network controller (which runs on the main thread at 75Hz) concatenates the cached robot state information (force-torque, proprioceptive) with the latest fused multi-modal features. The concatenated features are processed through a small multi-layer perceptron to get the final action output, which is sent to the low-level impedance controller.
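As a sketch of this multi-threaded, multi-rate design (the buffer structure and rates follow the description above; all names are illustrative, not the authors' implementation):

```python
import threading
import time
from collections import deque

class TimestampedBuffer:
    """Fixed-size, thread-safe buffer of (timestamp, data) pairs, one per
    sensor modality, as described above."""
    def __init__(self, maxlen=64):
        self._buf = deque(maxlen=maxlen)
        self._lock = threading.Lock()

    def push(self, data):
        with self._lock:
            self._buf.append((time.monotonic(), data))

    def latest(self):
        with self._lock:
            return self._buf[-1] if self._buf else None

def run_at(hz, step):
    # Run `step` in a background loop at roughly `hz` Hz (e.g., a 5 Hz VLM
    # thread, a 20 Hz hand-camera/fusion thread, a 75 Hz control thread).
    def loop():
        while True:
            step()
            time.sleep(1.0 / hz)
    threading.Thread(target=loop, daemon=True).start()
```

Each perception thread would read the freshest entry from its input buffer, run its network, and push features into a downstream buffer; the 75 Hz control thread then concatenates the freshest cached features with the freshest robot state.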
C Additional Results
C.1 Additional Real-World Comparisons
In addition to the real-world results in Table 5, we also tried out BC-Z and RT-1 on the pickup task in the real world. Table 7 reports the average success rate and compares them to our method. We find that BC-Z's performance is much worse than our proposed approach. This is because BC-Z operates at a single resolution (both spatial and temporal), as it uses only a third-person camera. In the absence of a first-person camera view, it is often unable to accurately localize the target object and fails to perform the final fine-grained motion to grasp the object and lift it up. Further, for RT-1 we find the performance to be very poor. We believe this is because RT-1 uses tokenized actions, which requires us to discretize our continuous robot actions. Since we operate in the low-data regime (120 trajectories), such discretization leads to token imbalances during training and deteriorates the model's performance. Additionally, since RT-1, similar to BC-Z, uses a single resolution (i.e., third-person camera only), we believe its performance suffers from similar challenges of inaccurate localization.

Table 7: Real-world results for commonly used imitation learning methods (single-spatial-resolution baselines) on the Pickup task.
Setup | BC-Z [2] | RT-1 [1] | Ours
Train | 12.5 | 0.0 | 75.0
Eval | 5.0 | 0.0 | 71.1

Furthermore, we evaluate the visual generalization of the BC-Z and RT-1 policies on novel unseen objects (and instructions). Since both BC-Z and RT-1 do not use a pre-trained vision-language model, and thus have no visual grounding for the text instructions, they fail to perform well on unseen novel objects. By contrast, our approach, which utilizes a pretrained VLM, generalizes well.

C.2 Additional Ablations
We further ablate the different components of our proposed approach. For this set of results, instead of using all 3 environment suites for evaluation, we choose the most appropriate environment suite for each component of our approach and evaluate on it.
Pixel-Level Augmentations: We evaluate the effect of pixel-level augmentations (color jitter, grayscale) on the training and generalization of our MT-policies on MT-Coarse. Figure 5 reports results on both training and heldout (novel) evaluation configurations. We see that while there is very little difference in training performance, extensive pixel-level augmentations help generalization by close to ≈15%. While pixel-level augmentations change the semantics of the task, our multi-modal approach is still able to complete the task because of the visual-language grounding provided by pretraining.
Multi-Modal Fusion using Cross-Attention: We compare early fusion using cross-attention with late fusion using concatenation. Figure 5 shows that using cross-attention improves performance by around ≈8% on both train and heldout configurations. Thus, cross-attention is more effective for multi-modal fusion than concatenation. However, we note that cross-attention requires more parameters and has slower inference.
Effect of Pretrained VLMs: We also evaluate the effects of using pretrained VLMs. Figure 5 shows the training and heldout performance using ImageNet initialization, which only has visual pretraining and no vision-language pretraining. We see that while the training performance matches our approach, the heldout performance decreases tremendously. This large decrease is due to missing visual-language grounding, since we use separately trained visual and language models.
Real-World Temporal-Resolution Comparison: We also ablate the effect of temporal resolutions on real-world robot performance. Specifically, we evaluate the single-temporal-resolution approaches π_low-res and π_high-res on the peg-insertion task in the real world. As before, to evaluate the learned policy we run each episode for a fixed duration of 60 seconds. However, we use early termination if the episode is solved successfully or the robot violates the desired workspace. Table 8 shows our results. Given that the insertion task is not dynamic, π_high-res performs similarly to our approach. By comparison, π_low-res performs much more poorly (45% only). This is because a low-temporal-resolution policy is not very reactive and hence doesn't respond fast to contacts made with the wooden peg. Thus, it is often unable to find the appropriate location to insert the block into the wooden peg. This can also be seen in the qualitative videos (see success and failure videos), where both success and failure scenarios are much less reactive.

Table 8: Additional results for the multi-temporal-resolution experiments. As before, both π_low-res and π_high-res are single-resolution approaches which run at 5 Hz and 20 Hz respectively, while ours is a multi-resolution approach.
Setup | π_low-res | π_high-res | Ours
RealWorld - PegInsert | 45.0 | 62.5 | 67.5

Temporal Resolutions: Finally, we also ablate the temporal frequencies for the MT-Dynamic tasks. We ablate the effect of using camera inputs at low temporal resolution (third-person and in-hand camera inputs at 5Hz) while only force-torque feedback is used at high temporal resolution (75Hz). Table 9 shows our results. We observe that the performance on MT-Dynamic tasks drops significantly when using the camera views at a very low temporal resolution.

Table 9: Results for using low temporal resolutions for camera inputs (5Hz) and a high temporal resolution for force-torque only (75Hz).
π_low-res-high-FT | Ours
33.4 | 73.6
From our qualitative observations we note two common failure cases. First, the ballbot is sometimes unable to reach the block to pick it up. This is because, due to latency in the camera inputs (5 Hz), the policy outputs sub-optimal actions; upon receiving updated camera inputs, the policy tries to correct the trajectory. The overall resulting trajectory is noisy and fails to reach the target object. Second, again due to camera latency, the end-effector does not align well with the target object and ends up toppling the object while trying to grasp it.
8asqEWO479I | Push Past Green: Learning to Look Behind Plant Foliage by Moving It
Xiaoyu Zhang, University of Illinois at Urbana-Champaign, zhang401@illinois.edu
Saurabh Gupta, University of Illinois at Urbana-Champaign, saurabhg@illinois.edu
Abstract: Autonomous agriculture applications (e.g., inspection, phenotyping, plucking fruits) require manipulating the plant foliage to look behind the leaves and the branches. Partial visibility, extreme clutter, thin structures, and unknown geometry and dynamics for plants make such manipulation challenging. We tackle these challenges through data-driven methods. We use self-supervision to train SRPNet, a neural network that predicts what space is revealed on execution of a candidate action on a given plant. We use SRPNet with the cross-entropy method to predict actions that are effective at revealing space beneath plant foliage. Furthermore, as SRPNet does not just predict how much space is revealed but also where it is revealed, we can execute a sequence of actions that incrementally reveal more and more space beneath the plant foliage. We experiment with a synthetic plant (vines) and a real plant (Dracaena) on a physical test-bed across 5 settings, including 2 settings that test generalization to novel plant configurations. Our experiments reveal the effectiveness of our overall method, PPG, over a competitive hand-crafted exploration method, and the effectiveness of SRPNet over a hand-crafted dynamics model and relevant ablations. Project website with execution videos, code, data, and models: https://sites.google.com/view/pushpastgreen/.
Keywords: Deformable Object Manipulation, Model-building, Self-supervision
1 Introduction
The ability to autonomously manipulate plants is crucial in the pursuit of sustainable agricultural practices [1, 2, 3, 4]. Central to autonomous plant manipulation is the plant self-occlusion problem. Plants self-occlude themselves (Figure 1 (left)). Plant leaves and branches have to be carefully moved aside for the simplest of agriculture problems: plant inspection, phenotyping, precision herbicide application, or finding and plucking fruits. This paper tackles this plant self-occlusion problem. We develop methods that learn to manipulate plants so as to look beneath their external foliage. Figure 1 (middle and right) shows steps from a sample execution of our method. We believe our work will serve as a building block that enables many different applications that require manipulation of plants in unstructured settings.
Manipulating external plant foliage to reveal occluded space is hard. Sensing is difficult because of dense foliage, thin structures, and partial observability. Control and planning are challenging because of the unknown dynamics of the plant leaves and branches, and the difficulty of building a full articulable plant model. These sensing and control challenges motivate the need for learning. However, use of typical learning paradigms is also not straightforward. Model-free RL (e.g., PPO [5]) requires interaction data at a scale that is difficult to collect in the real world. Model-based RL is more sample-efficient, but is quite challenging here as precisely predicting the next observation (or state) is hard. Imitation learning is more promising; but for the exploration task we tackle, the next best action depends on what has already been explored. This increases the amount of demonstration data required to train models.
Lack of high-fidelity plant simulators precludes simulated training.
Our proposal is to tackle this problem through self-supervision [6, 7]. We collect a dataset of action outcomes (amount of space revealed) by letting the robot randomly interact with plants. We use this data to train a model to predict the space revealed by an input action. However, in order to derive a long-term strategy for exploring all of the space beneath the plant, the model has to predict not only how much space would get revealed, but also where (Figure 2 (b), Section 4.1). In this way, the model output lets us reason about what additional space each action would reveal. This allows us to execute multi-step action sequences that explore all of the area behind the plant using a simple greedy control loop implemented via the cross-entropy method (CEM) (Figure 2 (a), Section 4.3).
Figure 1: (left) Plants self-occlude themselves. Two examples of leaves and branches being pushed aside for inspection and picking fruits. This paper develops learning algorithms that enable robots to tackle this plant self-occlusion problem. We show actions executed by the robot to expose the space behind vines (middle) and a Dracaena plant (right).
This paper implements and tests these ideas on a physical platform that is tasked with revealing space behind decorative vines and a real Dracaena plant. We collect 48 hours of plant interaction data and use it to train a neural network that we call the Space-Revealed Prediction Network (SRPNet). SRPNet, when used with CEM, leads to effective control strategies to reveal all (or user-specified) space beneath the plant foliage. We call our overall framework PushPastGreen (PPG).
Experiments show that SRPNet outperforms a hand-crafted dynamics model and ablated versions of SRPNet. In physical experiments, PPG outperforms a hand-crafted exploration strategy and versions of PPG that replace SRPNet with alternative choices for modeling space revealed. In all 5 settings across vines and Dracaena, including 2 that explicitly test for generalization, we observe relative improvements ranging from 4% to 34% over the next best method. This establishes the benefits of PPG and the use of learning to manipulate plants.
2 Related Work
Autonomous Agriculture. Motivated by the need for adopting sustainable agricultural practices [2, 3, 1], researchers have sought to introduce and expand the use of autonomy for agricultural tasks [8, 9]. While a full review is beyond our scope, major trends include a) development of specialized robotic hardware [10, 11, 12], b) development of algorithms for perception in cluttered agricultural settings [13, 14, 15], c) design of control algorithms for navigation [16, 17] and manipulation [18], and d) fully autonomous farming systems [19, 18, 20, 21].
Plant Manipulation. For manipulation-oriented tasks (e.g., fruit picking): [22] compute 3D grasp poses for largely unoccluded fruits, [23] design a visual servoing approach to get partially occluded fruits into full view, [24, 25, 26] output trajectories for reaching fruits while avoiding collisions with plant leaves and branches, and [10, 27] develop soft arms / end-effectors that can maneuver around plant structures. Much less research actually interacts with the plant structure to accomplish tasks. [18] hand-design strategies for pushing fruits out of the way. [28] show simulated results using probabilistic motion primitives for pushing fruits out of the way.
We instead study the task of looking behind plant foliage, and the hand-crafted strategies proposed in [28, 18] are not directly applicable to our setting. [29, 30] tackle reaching in plants while treating leaves as permeable obstacles, while [31] develops efficient MPC to minimize contact forces when interacting with plants. [32] learn to model an object's resistance to movement by estimating its stiffness distribution from torque measurements. We instead directly model the effect of actions executed on the plant.
Manipulation of Deformable Objects. Past works have considered manipulation of other deformable objects such as cloth [33, 34, 35, 36, 37], ropes [38, 39], elasto-plastics [40, 41], fluids [37, 42, 43], and granular media [44]. [33, 34, 38] design dynamic primitive actions to tackle cloth and rope manipulation. [45, 42, 40, 35] learn particle-based forward models for deformable material and use model-based RL for control. [46, 41] compose skills for deformable object manipulation to solve long-horizon tasks. Our study explores plant manipulation. The lack of high-fidelity plant simulators limits the applicability of past methods that rely on large amounts of data in simulation [42, 36, 38]. At the same time, building dynamics models [40, 35] for plants is hard due to dense foliage, thin branch structure, and unknown heterogeneous dynamics.
Figure 2: Overview of PushPastGreen. PushPastGreen learns to manipulate plants to reveal the space behind them, thus tackling the plant self-occlusion problem. PushPastGreen includes the Space-Revealed Prediction Network (SRPNet), which predicts where space is revealed upon execution of a pushing action, as shown in (b) and described in Sec. 4.1. SRPNet can not only rank actions based on how much space they will reveal, but, because it can also predict where space gets revealed, it can also be used for executing multi-step trajectories that explore all the space behind the vines, as shown in (a) and described in Sec. 4.3. SRPNet is trained using self-supervision as described in Sec. 4.2.
Self-supervised Learning in Robotics. We adopt a self-supervised approach for training our models. Self-supervision techniques typically predict scalar quantities (e.g., grasp outcomes [6, 7], delta cloth coverage on the workspace [33], pushing+grasping success [47], etc.). Past work has also used self-supervision to build forward models for model-predictive control [48, 49, 50, 51] in pixel or feature spaces. Our work finds a middle ground. We predict not just how much space is revealed (insufficient for executing a sequence of actions), but also where it is revealed. This lets us execute sequences of actions that incrementally expose more and more space.
3 Problem Setup
Figure 3 shows the 2 different plants that we tackle: a) decorative vines vertically hanging across a board, and b) a real Dracaena plant. The vines involve a 2D exploration problem and present challenges due to entanglement, thin structures, and extensive clutter. The real Dracaena plant exhibits a large variation in scene depth, leading to a 3D problem. The Dracaena plant has big leaves that bend only in specific ways.
Thus, it requires careful action selection. Both test cases exhibit unknown and heterogeneous dynamics, which makes them hard to manipulate.
As one can notice in Figures 1 and 3, the vines occlude the surface behind them. Similarly, the Dracaena leaves occlude the plant. We refer to this occlusion as the plant self-occlusion problem. The task is to have manipulation policies that can use non-prehensile pushing actions (as described below) to reveal the space beneath the plant surface.
We use the Franka Emika robot and change the end-effector to a grabber (as also done in past work [52, 53]). We use RGB-D cameras pointed at the plant for sensing. Our action space consists of non-prehensile planar pushing actions (also used in past work, e.g., [51]). We sample a 3D location and push in a plane parallel to the board for the vines and to the ground for the Dracaena plant. As vines have limited depth variation, we use a fixed z for the vines, but actions are sampled at varying z for the Dracaena plant. Sections A.1 and B.1 provide more experimental details.
Figure 3: Hardware setup for vines (left) and the real Dracaena plant (right). We use a grabber as the end-effector [52, 53]. The view from the RGB-D camera (Azure Kinect for the vines, Kinect V2 for the Dracaena) is shown in the inset; vines and leaves occlude space. The task is to move the vines and the Dracaena leaves aside to reveal the space occluded by them.
4 Proposed Approach: Push Past Green
PPG adopts a greedy approach. We keep track of space that has not yet been revealed, and execute actions that would reveal the most new space. Doing this requires a model that predicts what space a candidate action would reveal. As plants are complex to model, such a model is hard to hand-craft. Furthermore, it is difficult to estimate the precise state and physical parameters for plants from a single RGB-D image (e.g., placement of leaves and branches with respect to one another, location and connectivity of all the leaves with the stems, stiffness parameters). This precludes the use of physical simulation for such prediction. Thus, we design the Space-Revealed Prediction Network (SRPNet), which uses learning to directly predict space revealed on execution of a given action on a given plant configuration (Section 4.1). Learning to directly make this prediction sidesteps the complexity of precise state estimation and physical simulation necessary to build a full dynamics model for the plant. To obtain the data to train SRPNet, we adopt a self-supervised approach and execute random actions from the robot action space. We automatically compute the space revealed after an action using the RGB-D image (Section 4.2). Together with SRPNet, we design PPG, a greedy algorithm that uses the cross-entropy method (CEM) [54] to sample the action that promises to reveal the most new space on top of space already revealed (Section 4.3).
4.1 Space-Revealed Prediction Network
Input Representation. As shown in Figure 2 (b), the input to our model is a 200×200 patch cropped out at the action start location. We crop both the RGB image and an image denoting the height relative to the surface beneath the vines (or relative to the ground beneath the Dracaena). The height image is computed using the point cloud from the RGB-D cameras. As the model sees crops around the site of interaction, each action starts at the center of the image. We only need to represent the z coordinate of the push start location, the push direction (θ), and the push distance (d). We represent these using a) a one-hot vector depicting the push direction, b) a one-hot z height, and c) the push distance via the location of the action end-point, i.e., [d cos(θ), d sin(θ)].
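To make this action encoding concrete, a sketch is given below. The number of direction and height bins is not specified in the text, so they appear as parameters; all names are illustrative.

```python
import numpy as np

def encode_action(dir_idx, n_dirs, z_idx, n_z, d, theta):
    """Encode a push action as fed to SRPNet's action encoder: a one-hot push
    direction, a one-hot z height, and the push end-point [d cos(theta),
    d sin(theta)] relative to the (centered) action start pixel.
    Illustrative sketch, not the authors' code."""
    dir_onehot = np.eye(n_dirs)[dir_idx]
    z_onehot = np.eye(n_z)[z_idx]
    end_point = np.array([d * np.cos(theta), d * np.sin(theta)])
    return np.concatenate([dir_onehot, z_onehot, end_point])
```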
Output. The model produces an output that is the same size as the input. Each value in this spatial map represents the probability that space will get revealed at that location in the image upon execution of the input action on the input plant configuration.
Model Architecture and Loss. We adopt the UNet structure [55] used in image segmentation. The encoder has 5 convolution layers. The action features are processed through 2 transposed convolution layers before being concatenated with the visual features and passed to the decoder with 5 transposed convolution layers. We add a skip connection between each corresponding convolution and transposed convolution layer. SRPNet is trained using cross-entropy loss.
4.2 Data Collection and Preparation
Our self-supervised data collection procedure executes random actions from the robot action space. We divide the robot's reachable space into a grid of 2cm×2cm cells. Action starting locations (x, y, z) are sampled at the centers of these cells. We sample push directions and push by 15cm, clipping to the feasible space as necessary. Each interaction executes in about 30s. We record RGB-D videos and the robot end-effector pose over the entire duration of the interaction. We collected 3529 interactions for vines over 30 hours, and 2175 interactions for Dracaena over 18 hours. We split the dataset into train, val, and test splits in an 8:1:1 ratio and train one model for each plant.
We automatically compute ground truth for training the model on the collected data. This involves processing the RGB and depth images before and after the interaction. For vines, we found a simple decision rule using the color value and change in depth to work well. For Dracaena, very often the entire plant wobbles upon interaction, which leads to erroneous estimates. Thus, we first align the point clouds before and after the interaction and then look for a depth increase to obtain ground truth. More details are provided in Supplementary Sections A.3 and B.3.
4.3 Looking Behind Leaves Using SRPNet
Algorithm 1: PPG: Revealing space beneath plants.
Require: Model f that predicts space revealed after an action
1: Current revealed space, C_0 ← space visible at start
2: for t ← 0 to T−1 do
3:   Receive images I_t
4:   a_t ← CEM(C_t, f, I_t)
5:   Execute action a_t
6:   Calculate additional space revealed c_t
7:   Update current revealed space: C_{t+1} ← C_t ∪ c_t
8: end for
Algorithm 1 describes our control algorithm that uses the trained SRPNet to pick actions that reveal space behind vines. At each timestep t of the trajectory, we use the cross-entropy method (CEM) [54] to pick out the best action to execute (line 4). We maintain the space revealed so far (C_t). C_0 is initialized to be the space visible before any actions (line 1). Action parameters are sampled from Gaussian distributions. For each candidate action, SRPNet predicts where space would be revealed. We determine the new space revealed by subtracting the area that has already been revealed (C_t) from SRPNet's output. Samples that are predicted to reveal the most new space are selected as elites, which are used to fit a Gaussian distribution to sample actions for the next CEM iteration. After all iterations, CEM outputs a_t, the action found to reveal the most new space (line 4). Upon executing a_t, we observe the space that is actually revealed and update C_t (lines 6 and 7). The process is repeated for the length of the trial.
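A compact sketch of Algorithm 1 with its CEM inner loop follows. The robot-interface callables, the 5-D action parameterization, and the CEM hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def ppg_explore(srpnet, get_images, execute, observe_revealed, C0,
                T=10, cem_iters=5, n_samples=64, n_elite=8):
    """Greedy exploration loop of Algorithm 1. `srpnet(images, a)` returns a
    per-pixel reveal-probability map; `C0` is a boolean map of space visible
    at the start. Actions are (x, y, z, theta, d) vectors (assumed)."""
    C = C0.copy()
    for t in range(T):
        images = get_images()
        mu, sigma = np.zeros(5), np.ones(5)
        for _ in range(cem_iters):
            acts = np.random.normal(mu, sigma, size=(n_samples, 5))
            # Score by predicted *new* space: mask out already-revealed C.
            scores = np.array([(srpnet(images, a) * ~C).sum() for a in acts])
            elite = acts[np.argsort(scores)[-n_elite:]]
            # Refit the sampling distribution to the elite set.
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3
        a_t = elite[-1]              # highest-scoring sample of the last iteration
        execute(a_t)
        C |= observe_revealed()      # update with the space actually revealed
    return C
```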
5 Experiments and Results
We test our proposed framework through a combination of offline evaluations of SRPNet on our collected dataset (Section 5.1), and online execution on our physical platform for the task of revealing space behind plants (Section 5.2). Our experiments evaluate a) the benefit of learning to predict space revealed by actions, b) the effectiveness of SRPNet's input representation, and c) the quality of SRPNet's spatial predictions and selected actions for long-horizon and targeted exploration.
5.1 Offline Evaluation of SRPNet
We train and evaluate SRPNet on data gathered on our physical setup as described in Section 4.2. We measure the average precision (AP) for the pixels labeled as revealed-space. We train on the train split, select model checkpoints on the validation set, and report performance on the test set in Table 1. For vines, we report performance in two settings: Vines [All], i.e., seeing the board (behind the vines) counts as revealed space, and Vines [5cm], i.e., a height decrease of 5cm counts as revealed space. For Dracaena, only seeing past the leaf (as determined by our automated processing from Section 4.2) counts as revealed space.

Table 1: Average precision for different models at predicting space revealed. Higher is better. Our proposed input representation outperforms simpler alternatives, and data augmentation boosts performance.
Methods | Vines [All] | Vines [5cm] | Dracaena
Full SRPNet (Our) | 46.3 | 54.4 | 44.2
Input Representation Ablations
No action | 30.2 | 43.5 | 28.4
No height map | 46.9 | 49.1 | 40.6
No RGB | 33.4 | 46.4 | 28.7
No RGB and no height map | 28.4 | 35.2 | 10.5
Data Augmentation Ablations
No left/right flips | 44.1 | 52.7 | 34.6
No color jitter | 41.0 | 51.4 | 30.7

Results. Experiments presented in Table 1 reveal the effectiveness of our method and provide insights about the underlying data. First, across all three settings, the full model is able to extract information from visual observations to produce higher-quality output than basing the predictions on the action information alone (Full model vs. No RGB and no height map). This suggests that plant configuration (as depicted in the visual observations) is important in predicting the action outcome. Second, use of action information leads to better predictions (Full model vs. No action). This suggests that different actions at the same site produce different outcomes, and SRPNet is able to make use of the action information to model these differences. Third, looking at the performance across the three settings, both height map and RGB information are useful for accurate predictions. There are no trivial solutions of the form 'space gets revealed where height is high'. Fourth, as we only have a limited number of training samples, data augmentation strategies are effective. Supplementary Figure S3 visualizes the predictions from different versions of our model on samples from the test set. Note the nuances that our model is able to capture in contrast to the ablated versions.
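For reference, one way to compute the per-pixel AP used here (our reading of the metric, not the authors' evaluation script) is:

```python
import numpy as np
from sklearn.metrics import average_precision_score

def revealed_space_ap(pred_prob_maps, gt_masks):
    # Treat every pixel as a binary example (revealed vs. not revealed) and
    # score the predicted probabilities with average precision.
    y_true = np.concatenate([m.ravel() for m in gt_masks])
    y_score = np.concatenate([p.ravel() for p in pred_prob_maps])
    return average_precision_score(y_true, y_score)
```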
5.2 Online Evaluation for the Looking Behind Plants Task
We next measure the effectiveness of our proposed framework (PPG w/ SRPNet) for the task of looking behind the plant surface (as introduced in Section 3). We measure the space revealed in units of cm2 for vines and number of pixels for Dracaena.¹
We start out by demonstrating that PPG w/ SRPNet is able to differentiate between good and bad actions. We then demonstrate our method on the task of looking behind plants over a 10 time-step horizon. Finally, we tackle the task of revealing space behind a user-specified spatial target. We conduct experiments on both the vines and the Dracaena plant. As plants can't be exactly reset, all experiments test generalization to some extent. To further test generalization, we explicitly evaluate performance on plant configurations that differ from those encountered during training.
Inexact resets also pose a challenge when comparing methods. We randomly reset the plant (such as rotating the Dracaena) between trials and expect the variance due to inexact resets to average out over multiple trials. To prevent experimenter or other unknown environmental bias, we a) randomly interleave trials for different methods, and b) reset before revealing which method runs next.
5.2.1 Baselines
Tiling Baseline. For the long-horizon exploration tasks, this hand-crafted baseline randomly samples from action candidates that are spread out across the workspace, as shown in Figure 4. We aid this baseline by limiting the action candidates to horizontal pushes for the vines and tangential actions for the Dracaena. We found these actions to be more effective than actions in other orientations; see Table S1 for vines and Table S3 for Dracaena.
PPG w/ Other Dynamics Models. To disentangle whether the improvement is coming from our learned SRPNet or simply from keeping track of space that has already been revealed (C_t) in PPG, we swap SRPNet for other models in PPG. Specifically, we compare to a hand-crafted dynamics model (described below) and the SRPNet No Image (i.e., no RGB and no height map) model from Table 1.
We construct hand-crafted forward models for the vines and the Dracaena plant. These baseline models represent the vines as vertically hanging spaghetti, and the Dracaena plant as 2D radially emanating spaghetti. Figure 5 shows the induced free space upon action execution for this baseline model.
5.2.2 Results
Single Action Selection Performance. Table 2 compares the effectiveness of PPG w/ SRPNet at picking an action that reveals the most occluded space, against random actions from the robot's action space. For the strongest comparison, we limit the random sampling to the most effective actions, horizontal pushes for the vines and tangential pushes for the Dracaena, as shown in Figure 4. Table 2 reports the average space revealed (along with the 95% confidence interval) over at least 20 trials for each method. Our approach leads to a relative improvement of 62% for vines and 49% for Dracaena over this strong baseline. This suggests that our model is able to interpret visual observations to identify good interaction sites.

Table 2: PPG-selected actions are more effective at revealing space.
Method | Vines (cm2) | Dracaena (pixels)
Random Horizontal / Tangential Action | 211.7 [184.2, 255.8] | 5125.6 [3423.8, 7147.0]
PPG w/ SRPNet (Our) | 344.1 [320.4, 372.3] | 7644.4 [6335.6, 9057.5]

¹The criterion to automatically determine revealed space occasionally fails. We manually inspected test runs to confirm and fix the output of the automated method. Note that this manual inspection is done during evaluation only. No method has access to such manual inspection, neither during training nor during execution.
Figure 4: Tiling baseline. Cyan arrows show all action candidates considered, and orange arrows show 10 actions selected during an execution. We aid the baseline by limiting the candidates to the most effective actions (horizontal pushes for vines, tangential pushes for Dracaena).
Figure 5: Hand-crafted dynamics model that represents the vines as vertically hanging spaghetti (left), and the Dracaena plant as 2D radially emanating spaghetti (right). The cyan area represents the space revealed by the action (red arrow) under this hand-crafted model.
Generalization across Plant Growth Performance. To test generalization, we run the single action selection experiment (Table 2) on the Dracaena plant after two months of growth (in August) but with a model trained on data from June. Note the difference between the plants in Figure 6 (top). Despite changes in appearance and leaf length, PPG generalizes to the grown Dracaena and outperforms the random tangential action baseline, as shown in Figure 6 (bottom).

Figure 6: We evaluate a model trained with data collected from the June Dracaena on the August Dracaena.
Method | Area Revealed (pixels): June | August
Random Tangential Action | 5125.6 [3423.8, 7147.0] | 6780.7 [4399.3, 8618.1]
PPG w/ SRPNet (Our) | 7644.4 [6335.6, 9057.5] | 11551.4 [9464.7, 14644.9]

Long-horizon Exploration Performance. Next, we study if SRPNet can be used in situations that require multiple sequential interactions to reveal space behind plants. The task is to maximize the cumulative space revealed over a 10 time-step episode. This further tests the quality of SRPNet, which now also needs to accurately predict where it thinks space will be revealed.
We conduct 4 experiments, one on Dracaena and 3 on vines. For the vines, we considered 3 settings: a) Base Setting: the vine setting as used for collecting training data, and 2 novel settings that test generalization: b) Sparse Vines, and c) Separated Vines. While the last two settings explicitly test generalization, we note that the first setting also tests models on novel vine configurations not exactly seen in training. For Dracaena, we only conducted experiments in the Base Setting.
Figure 7: Comparison of different methods for multi-step exploration of space behind plant foliage. We show results in four settings across vines and Dracaena: (a) Base Setting (revealable space: 2735 cm2), (b) Sparse Vines (revealable space: 2235 cm2), (c) Separated Vines (revealable space: 2130 cm2), and (d) Dracaena (revealable space: 174,380 pixels). The line plots show the average cumulative space revealed by actions up to time step t across 10 trials (along with 95% confidence intervals). SRPNet training data was collected in the base setting shown in (a) for vines and (d) for Dracaena. (b) and (c) are novel settings that test the generalization capabilities of our model. Our method (PPG w/ SRPNet) outperforms all baselines (a strong hand-crafted policy, and PPG with other dynamics models) across all settings.
Figure 7 plots the average space revealed (in cm2 for vines and in pixels for Dracaena) as a function of the number of time-steps. We report the mean over 10 trials and also show the 95% confidence interval. Across all four experiments our proposed method achieves the strongest performance. Supplementary Figure S4 and videos on the website show some sample executions.
Results suggest that SRPNet is quite effective at predicting where space will get revealed (PPG w/ SRPNet vs. Tiling). Learning and planning via CEM lets us model complex behavior which is hard to hand-craft.
Improvements over the tiling baseline increase as the action space becomes larger (Dracaena vs. vines). Moreover, the benefits don't just come from keeping track of revealed space (C_t), but also from the use of SRPNet (PPG w/ SRPNet vs. PPG w/ Handcrafted Dynamics). Furthermore, our model is able to interpret the nuances depicted in the visual information to predict good actions (PPG w/ SRPNet vs. PPG w/ SRPNet No Image). SRPNet also leads to benefits in novel vine configurations. Benefits are larger in the separated vines case than for the sparse vines. This may be because the separated vines are still locally dense and SRPNet processes local patches.
Targeted Revealing Performance. Our final experiment tackles the task of targeted exploration. The task here is to reveal space at a user-defined region, m. Figure 8 (left) shows a sample user-selected region. We tackle this task by setting C_0 to be the complement of the user-defined region m. Figure 8 (right) presents the results (same legend as for Figure 7, but without the Tiling Baseline). Again, as SRPNet reliably models the effect of actions, PPG with SRPNet outperforms PPG with other dynamics models.
Figure 8: PPG is also effective at revealing space behind a specific spatial target.
6 Discussion
In this paper, we introduced PPG and SRPNet to tackle the problem of manipulating external plant foliage to look within the plant (the plant self-occlusion problem). SRPNet uses self-supervised learning to model what space is revealed upon execution of different actions on plants. This sidesteps the difficulty in perception arising from dense foliage, thin structures, and partial information. PPG derives control strategies from SRPNet via CEM to output sequences of actions that can incrementally explore space occluded by plants. Experiments on a physical platform demonstrate the benefits of our proposed learning framework for tackling the plant self-occlusion problem.
7 Limitations
We believe ours is a unique and first-of-its-kind study, but it has its limitations. We note two failure modes. First, PPG sometimes resamples an overly optimistic action (that doesn't actually reveal much space, so nothing changes and CEM returns a very similar next action) many times over without making progress. Second, as each individual push action doesn't use visual feedback, it can't recover from, say, a leaf slipping from below the gripper. These may be mitigated by incorporating spatial diversity while selecting actions and by learning closed-loop leaf manipulation policies through imitation. More generally, our overall approach relies on input from RGB-D cameras, which are known to perform poorly in the wild. This may be mitigated through the use of specialized stereo cameras built for farm settings [56]. Our techniques for automatic estimation of revealed space can be improved further using recent point tracking models [57], and it may be useful to build models that can predict and keep track of full 3D space. Experiments should be conducted with more diverse real plants. Future work should also rank actions from the perspective of the damage they cause to the plant, perhaps via some tactile sensing [30]. Lastly, while autonomous agriculture provides a path towards sustainable agricultural practices, the societal impact of such automation should be studied before deployment.
Acknowledgments
Images in Figure 1 (top) have been taken from YouTube video 1 and YouTube video 2.
This material is based upon work supported by the USDA/NSF AIFARMS National AI Institute (USDA #2020-867021-32799), an NSF CAREER Award (IIS-2143873), an Amazon Research Award, and an NVidia Academic Hardware Grant. We thank Matthew Chang and Aditya Prakash for helpful feedback. We thank Kevin Zhang for help setting up robot experiments.
References
[1] Adinor Jose Capellesso, Ademir Antonio Cazella, Abdon Luiz Schmitt Filho, Joshua Farley, and Diego Albino Martins. Economic and environmental impacts of production intensification in agriculture: comparing transgenic, conventional, and agroecological maize crops. Agroecology and Sustainable Food Systems, 40(3):215–236, 2016.
[2] H Charles J Godfray and Tara Garnett. Food security and sustainable intensification. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1639):20120273, 2014.
[3] Jonathan A Foley, Navin Ramankutty, Kate A Brauman, Emily S Cassidy, James S Gerber, Matt Johnston, Nathaniel D Mueller, Christine O'Connell, Deepak K Ray, Paul C West, et al. Solutions for a cultivated planet. Nature, 478(7369):337–342, 2011.
[4] Aaron M Davis and Jordan Pradolin. Precision herbicide application technologies to decrease herbicide losses in furrow irrigation outflows in a northeastern Australian cropping system. Journal of Agricultural and Food Chemistry, 64(20):4021–4028, 2016.
[5] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[6] Lerrel Pinto and Abhinav Gupta. Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 3406–3413. IEEE, 2016.
[7] Sergey Levine, Peter Pastor, Alex Krizhevsky, Julian Ibarz, and Deirdre Quillen. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5):421–436, 2018.
[8] Juan Jesús Roldán, Jaime del Cerro, David Garzón-Ramos, Pablo Garcia-Aunon, Mario Garzón, Jorge De León, and Antonio Barrientos. Robots in agriculture: State of art and practical experiences. Service Robots, pages 67–90, 2018.
[9] C Wouter Bac, Eldert J Van Henten, Jochen Hemming, and Yael Edan. Harvesting robots for high-value crops: State-of-the-art review and challenges ahead. Journal of Field Robotics, 31(6):888–911, 2014.
[10] Naveen Kumar Uppalapati, Benjamin Walt, Aaron J Havens, Armeen Mahdian, Girish Chowdhary, and Girish Krishnan. A berry picking robot with a hybrid soft-rigid arm: Design and task space control. In Robotics: Science and Systems, page 95, 2020.
[11] Wyatt McAllister, Denis Osipychev, Adam Davis, and Girish Chowdhary. Agbots: Weeding a field with a team of autonomous robots. Computers and Electronics in Agriculture, 163:104827, 2019.
[12] Abhisesh Silwal, Francisco Yandun, Anjana Nellithimaru, Terry Bates, and George Kantor. Bumblebee: A path towards fully autonomous robotic vine pruning. arXiv preprint arXiv:2112.00291, 2021.
[13] Harry Freeman, Mohamad Qadri, Abhisesh Silwal, Paul O'Connor, Zachary Rubinstein, Daniel Cooley, and George Kantor. Autonomous apple fruitlet sizing and growth rate tracking using computer vision. arXiv preprint arXiv:2212.01506, 2022.
[14] Abhisesh Silwal, Tanvir Parhar, Francisco Yandun, Harjatin Baweja, and George Kantor. A robust illumination-invariant camera system for agricultural applications.
In International Conference on Intelligent Robots and Systems, pages 3292–3298. IEEE, 2021.
[15] Francisco Yandun, Abhisesh Silwal, and George Kantor. Visual 3D reconstruction and dynamic simulation of fruit trees for robotic manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 54–55, 2020.
[16] Arun Narenthiran Sivakumar, Sahil Modi, Mateus Valverde Gasparino, Che Ellis, Andres Baquero Velasquez, Girish Chowdhary, and Saurabh Gupta. Learned visual navigation for under-canopy agricultural robots. In Robotics: Science and Systems, 2021.
[17] Andres Eduardo Baquero Velasquez, Vitor Akihiro Hisano Higuti, Mateus Valverde Gasparino, Arun Narenthiran Sivakumar, Marcelo Becker, and Girish Chowdhary. Multi-sensor fusion based robust row following for compact agricultural robots. arXiv preprint arXiv:2106.15029, 2021.
[18] Ya Xiong, Yuanyue Ge, Lars Grimstad, and Pål J From. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. Journal of Field Robotics, 37(2):202–224, 2020.
[19] Nicola Strisciuglio, Radim Tylecek, Michael Blaich, Nicolai Petkov, Peter Biber, Jochen Hemming, Eldert van Henten, Torsten Sattler, Marc Pollefeys, Theo Gevers, et al. Trimbot2020: an outdoor robot for automatic gardening. In ISR 2018; 50th International Symposium on Robotics, pages 1–6. VDE, 2018.
[20] Mark Presten, Yahav Avigal, Mark Theis, Satvik Sharma, Rishi Parikh, Shrey Aeron, Sandeep Mukherjee, Sebastian Oehme, Simeon Adebola, Walter Teitelbaum, et al. AlphaGarden: Learning to autonomously tend a polyculture garden. arXiv preprint arXiv:2111.06014, 2021.
[21] Soran Parsa, Bappaditya Debnath, Muhammad Arshad Khan, et al. Autonomous strawberry picking robotic system (Robofruit). arXiv preprint arXiv:2301.03947, 2023.
[22] Hanwen Kang, Hongyu Zhou, and Chao Chen. Visual perception and modeling for autonomous apple harvesting. IEEE Access, 8:62151–62163, 2020.
[23] Chris Lehnert, Dorian Tsai, Anders Eriksson, and Chris McCool. 3D move to see: Multi-perspective visual servoing for improving object views with semantic segmentation. arXiv preprint arXiv:1809.07896, 2018.
[24] Lufeng Luo, Hanjin Wen, Qinghua Lu, Haojie Huang, Weilin Chen, Xiangjun Zou, Chenglin Wang, et al. Collision-free path-planning for six-DOF serial harvesting robot based on energy optimal and artificial potential field. Complexity, 2018, 2018.
[25] Christoph Schuetz, Joerg Baur, Julian Pfaff, Thomas Buschmann, and Heinz Ulbrich. Evaluation of a direct optimization method for trajectory planning of a 9-DOF redundant fruit-picking manipulator. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 2660–2666. IEEE, 2015.
[26] Alessandra Tafuro, Bappaditya Debnath, Andrea M Zanchettin, and E Amir Ghalamzan. dPMP: Deep probabilistic motion planning: A use case in strawberry picking robot. In International Conference on Intelligent Robots and Systems, pages 8675–8681. IEEE, 2022.
[27] Johannes F. Elfferich, Dimitra Dodou, and Cosimo Della Santina. Soft robotic grippers for crop handling or harvesting: A review. IEEE Access, 10:75428–75443, 2022.
[28] Sariah Mghames, Marc Hanheide, and Amir Ghalamzan. Interactive movement primitives: Planning to push occluding pieces for fruit picking. In International Conference on Intelligent Robots and Systems, pages 2616–2623. IEEE, 2020.
[29] Heramb Nemlekar, Ziang Liu, Suraj Kothawade, Sherdil Niyaz, Barath Raghavan, and Stefanos Nikolaidis.
Robotic lime picking by considering leaves as permeable obstacles. In International Conference on Intelligent Robots and Systems, pages 3278–3284. IEEE, 2021.
[30] Tapomayukh Bhattacharjee, Phillip M Grice, Ariel Kapusta, Marc D Killpack, Daehyung Park, and Charles C Kemp. A robotic system for reaching in dense clutter that integrates model predictive control, learning, haptic mapping, and planning. Georgia Institute of Technology, 2014.
[31] Marc D Killpack, Ariel Kapusta, and Charles C Kemp. Model predictive control for fast reaching in clutter. Autonomous Robots, 40:537–560, 2016.
[32] Shaoxiong Yao and Kris Hauser. Estimating tactile models of heterogeneous deformable objects in real time. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2023.
[33] Huy Ha and Shuran Song. Flingbot: The unreasonable effectiveness of dynamic manipulation for cloth unfolding. In Proceedings of the Conference on Robot Learning (CoRL), pages 24–33. PMLR, 2022.
[34] Zhenjia Xu, Cheng Chi, Benjamin Burchfiel, Eric Cousineau, Siyuan Feng, and Shuran Song. Dextairity: Deformable manipulation can be a breeze. In Robotics: Science and Systems, 2022.
[35] Zixuan Huang, Xingyu Lin, and David Held. Mesh-based dynamics with occlusion reasoning for cloth manipulation. In Robotics: Science and Systems, 2022.
[36] Thomas Weng, Sujay Man Bajracharya, Yufei Wang, Khush Agrawal, and David Held. Fabricflownet: Bimanual cloth manipulation with a flow-based policy. In Proceedings of the Conference on Robot Learning (CoRL), pages 192–202. PMLR, 2022.
[37] Xingyu Lin, Yufei Wang, Jake Olkin, and David Held. Softgym: Benchmarking deep reinforcement learning for deformable object manipulation. In Proceedings of the Conference on Robot Learning (CoRL), pages 432–448. PMLR, 2021.
[38] Cheng Chi, Benjamin Burchfiel, Eric Cousineau, Siyuan Feng, and Shuran Song. Iterative residual policy: for goal-conditioned dynamic manipulation of deformable objects. In Robotics: Science and Systems, 2022.
[39] Ashvin Nair, Dian Chen, Pulkit Agrawal, Phillip Isola, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Combining self-supervised learning and imitation for vision-based rope manipulation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 2146–2153. IEEE, 2017.
[40] Haochen Shi, Huazhe Xu, Zhiao Huang, Yunzhu Li, and Jiajun Wu. Robocraft: Learning to see, simulate, and shape elasto-plastic objects with graph networks. In Robotics: Science and Systems, 2022.
[41] Xingyu Lin, Zhiao Huang, Yunzhu Li, Joshua B Tenenbaum, David Held, and Chuang Gan. Diffskill: Skill abstraction from differentiable physics for deformable object manipulations with tools. In Proceedings of the International Conference on Learning Representations (ICLR), 2022.
[42] Yunzhu Li, Jiajun Wu, Russ Tedrake, Joshua B Tenenbaum, and Antonio Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
[43] Chau Do, Camilo Gordillo, and Wolfram Burgard. Learning to pour using deep deterministic policy gradients. In International Conference on Intelligent Robots and Systems, pages 3074–3079. IEEE, 2018.
[44] Connor Schenck, Jonathan Tompson, Sergey Levine, and Dieter Fox. Learning robotic manipulation of granular media. In Proceedings of the Conference on Robot Learning (CoRL), pages 239–248.
PMLR, 2017.
[45] Yunzhu Li, Jiajun Wu, Jun-Yan Zhu, Joshua B Tenenbaum, Antonio Torralba, and Russ Tedrake. Propagation networks for model-based control under partial observation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages 1205–1211. IEEE, 2019.
[46] Xingyu Lin, Carl Qi, Yunchu Zhang, Zhiao Huang, Katerina Fragkiadaki, Yunzhu Li, Chuang Gan, and David Held. Planning with spatial-temporal abstraction from point clouds for deformable object manipulation. In Proceedings of the Conference on Robot Learning (CoRL), 2022.
[47] Andy Zeng, Shuran Song, Stefan Welker, Johnny Lee, Alberto Rodriguez, and Thomas Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In International Conference on Intelligent Robots and Systems, pages 4238–4245. IEEE, 2018.
[48] Michael I Jordan and David E Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16(3):307–354, 1992.
[49] Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
[50] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. Advances in Neural Information Processing Systems (NeurIPS), 29, 2016.
[51] Pulkit Agrawal, Ashvin V Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. Learning to poke by poking: Experiential learning of intuitive physics. Advances in Neural Information Processing Systems (NeurIPS), 29, 2016.
[52] Shuran Song, Andy Zeng, Johnny Lee, and Thomas Funkhouser. Grasping in the wild: Learning 6DOF closed-loop grasping from low-cost demonstrations. IEEE Robotics and Automation Letters (RA-L), 5(3):4978–4985, 2020.
[53] Sarah Young, Dhiraj Gandhi, Shubham Tulsiani, Abhinav Gupta, Pieter Abbeel, and Lerrel Pinto. Visual imitation made easy. Proceedings of the Conference on Robot Learning (CoRL), 2020.
[54] Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the cross-entropy method. Annals of Operations Research, 134:19–67, 2005.
[55] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
[56] Abhisesh Silwal, Tanvir Parhar, Francisco Yandun, Harjatin Baweja, and George Kantor. A robust illumination-invariant camera system for agricultural applications. In International Conference on Intelligent Robots and Systems, pages 3292–3298, 2021.
[57] Adam W Harley, Zhaoyuan Fang, and Katerina Fragkiadaki. Particle video revisited: Tracking through occlusions using point trajectories. In Proceedings of the European Conference on Computer Vision (ECCV), pages 59–75. Springer, 2022.
[58] Kevin Zhang, Mohit Sharma, Jacky Liang, and Oliver Kroemer. A modular robotic arm control stack for research: Franka-interface and frankapy.
arXiv preprint arXiv:2011.02398, 2020.

Push Past Green: Learning to Look Behind Plant Foliage by Moving It
Supplementary Material

A Implementation Details for Vine Experiments

A.1 Robot Action Space

Figure S1: Robot's action space for vine setup. (a) shows the rectified image that we operate in, the region to be revealed (red box), and the region that the robot can reach (black box). The robot can execute push actions that start at a pixel (x, y) in the rectified image and push a distance of d at an angle θ. We use 7 discrete push directions {0, π/6, π/3, π/2, ..., π} as shown in (b). (c.1) through (c.6) show a sample execution of the push action: starting image, approach plant, insert grabber into plant, push, move grabber back, and retract grabber.

The robot's action space consists of non-prehensile pushing actions. As shown in Figure S1 (a), these actions are parameterized by (x, y, θ, d). Such a parameterization for pushing actions has been used in past works, e.g. [51]. Here, (x, y) denotes the start location for the push interaction on the board, θ denotes the push angle, and d denotes the push length. As shown in Figure S1 (b), we sample θ to be one of 7 angles from {0, π/6, π/3, π/2, 2π/3, 5π/6, π}. We do not sample angles greater than π because pushing towards the bottom of the vines only drags down the vines and could pull the board over. We assume that the grabber inserts deep enough into the vines to push the vines but not too far to knock it over; therefore, the pushes are planar actions executed with the same z value. We estimate the location and orientation of the board and establish a coordinate frame that is aligned with the board. Push locations and orientations are expressed in this coordinate frame. We implement these actions by moving the grabber through 4 waypoints, as shown in Figure S1 (c.2) to Figure S1 (c.5). In Figure S1 (c.4), we can see the effect of a randomly sampled action on the state of the vines. We drive the Franka Emika robot between these waypoints using the Franka-interface and frankapy library [58].
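For concreteness, the snippet below shows one way to turn this parameterization into waypoints in the board-aligned frame. It is a minimal sketch under our own assumptions: the helper name `push_waypoints` and the 10 cm approach offset are illustrative, not values from the paper.

```python
import numpy as np

# The 7 discrete push directions used for the vine setup.
PUSH_ANGLES = [i * np.pi / 6 for i in range(7)]  # {0, pi/6, ..., pi}

def push_waypoints(x, y, theta, d, z=0.0, approach_offset=0.10):
    """Turn a push action (x, y, theta, d) into 4 planar waypoints in the
    board-aligned frame: approach, insert, push, retract.

    All pushes share the same z value, matching the planar-action
    assumption in A.1. The approach offset is an assumption made here
    for illustration only.
    """
    start = np.array([x, y, z])
    end = start + d * np.array([np.cos(theta), np.sin(theta), 0.0])
    above = np.array([0.0, 0.0, approach_offset])
    return [
        start + above,  # (c.2) approach the plant
        start,          # (c.3) insert the grabber into the vines
        end,            # (c.4) push a distance d at angle theta
        end + above,    # (c.5)-(c.6) move back and retract
    ]

# Example: a 15 cm push at pi/6 from board-frame location (0.1, 0.2).
for wp in push_waypoints(0.1, 0.2, PUSH_ANGLES[1], 0.15):
    print(wp)
```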
A.2 SRPNet

For the vine setup, we are unable to position the camera such that it is perpendicular to the board. Therefore, we design SRPNet to work on rectified images of the scene, such that the camera is looking straight at the vines. This corresponds to using a homography to transform the image such that the surface underneath the vines becomes fronto-parallel. We build the model to only reason about a 40 cm × 40 cm neighborhood around the action start location. Parts of the board get occluded behind the robot arm as the robot executes the action. These occluded parts and areas with no depth readings are masked out for evaluation and training.

A.3 Data Collection

The robot's actions are in the same fronto-parallel plane used for SRPNet as described earlier. We estimate the space that can be safely reached by the robot ahead of time to make sure it is not close to its joint limits during interactions. The resulting space is roughly 40 cm × 40 cm. We divide the feasible space into a 20 × 20 grid. Action starting locations (x, y) are sampled at the centers of these grid squares (i.e., 400 possible starting locations). We sample push directions from the 7 possible angles, {0, π/6, 2π/6, ..., 6π/6}, and push by 15 cm, clipping to the feasible space as necessary. Therefore, not all interactions have d = 15; for starting locations near the boundary, d < 15.

Table S1: Statistics for the different push directions in the collected vine dataset. The collected dataset reveals many aspects of the problem. For example, for vines, horizontal push actions (0 and π) are the most effective at this task.

Push Angle                 0      π/6    π/3    π/2    2π/3   5π/6   π      Full Dataset
# Interactions             985    460    360    348    359    433    584    3529
Mean area revealed (cm²)   215.7  177.3  93.6   58.8   100.4  180.9  237.1  170.3

Our full dataset contains 3529 interactions (summing to roughly 30 hours) collected over 11 different days (nonconsecutive). This data includes 2571 interactions done specifically for the purpose of data collection. The remaining interactions come from when we were developing control algorithms. These don't follow uniform sampling from the robot's action space and are biased towards horizontal actions, since the most effective actions for the baselines are often horizontal actions.

We automatically compute the ground truth for training the model on the collected data. Specifically, we use color thresholding to determine when the surface beneath the vines has been fully exposed. We found this simple strategy to be reasonably robust. Note that while we train and use SRPNet to predict whether all vines were moved aside to reveal the board, we can process the data in other ways to also train the model for other tasks. For example, we can re-purpose the data for a task that involves only looking beneath the first layer of vines. We can re-compute ground truth to identify locations where the height decreased by (say) more than 5 cm for such a task.

A.4 Cross-entropy Method

Our CEM implementation uses 3 iterations that each evaluate 300 candidate actions. We sample (x, y, θ) from Gaussian distributions. In the first CEM iteration, x, y, θ are sampled from Gaussians with different means and variances, chosen to cover the whole action space. The parameters are then discretized to match the distribution from data collection. When sampling actions, we only retain action samples that are feasible (i.e., within the robot's reachable space as shown in Figure S1 (a)). Elite samples are the top 20% candidates that have the most amount of new space revealed. Running lines 3 to 6 in Algorithm 1 (Section 4.3) for vines takes about 5 seconds.
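As a concrete picture of this loop, here is a minimal CEM sketch in NumPy. The scoring function is a stand-in for SRPNet's predicted revealed area, and names such as `score_fn` and the rejection-based feasibility check are our own illustrative assumptions (the discretization step described above is omitted for brevity).

```python
import numpy as np

def cem_plan(score_fn, mean, std, n_iters=3, n_samples=300, elite_frac=0.2,
             lo=None, hi=None, rng=np.random.default_rng(0)):
    """Cross-entropy method over push parameters (x, y, theta).

    score_fn: maps an (n, 3) array of candidate actions to a predicted
    amount of revealed space (SRPNet's role); any callable works here.
    mean/std: initial Gaussians, chosen to cover the whole action space.
    lo/hi: feasibility bounds; infeasible samples are rejected.
    """
    n_elite = int(elite_frac * n_samples)
    for _ in range(n_iters):
        samples = []
        while len(samples) < n_samples:
            cand = rng.normal(mean, std, size=(n_samples, len(mean)))
            if lo is not None:
                # Keep only candidates inside the robot's reachable space.
                keep = np.all((cand >= lo) & (cand <= hi), axis=1)
                cand = cand[keep]
            samples.extend(cand)
        samples = np.asarray(samples[:n_samples])
        scores = score_fn(samples)
        elites = samples[np.argsort(scores)[-n_elite:]]  # top 20%
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # action estimate after the final refit

# Toy usage: pretend more area is revealed near the board center, angle 0.
score = lambda a: -np.abs(a[:, 0] - 0.2) - np.abs(a[:, 1] - 0.2) - np.abs(a[:, 2])
best = cem_plan(score, mean=np.array([0.2, 0.2, np.pi / 2]),
                std=np.array([0.1, 0.1, 1.0]),
                lo=np.array([0.0, 0.0, 0.0]), hi=np.array([0.4, 0.4, np.pi]))
print(best)
```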
B Implementation Details for Dracaena Experiments

B.1 Robot Action Space

The robot's action space for Dracaena is similar to that of vines. However, since the Dracaena leaves are at different heights, we define three possible z values that the grabber can insert to. The Dracaena plant body is about 45 cm tall, so we defined the z values to be about 22.5, 17.5, and 12.5 cm from the top of the plant. For each z value, planar pushing actions (x, y, θ, d) are defined on a plane parallel to the ground. We sample θ from 8 possible angles: {0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4}. The angles are 45 degrees away from one another instead of 30 degrees as used in vines because we want to keep the total number of possible actions reasonable.

Figure S2: Dracaena robot action space. Similar to Figure S1, (a) shows the image from the camera, (b) shows the pushing directions, and (c) shows the sample execution of a push action.

B.2 SRPNet

Since the Kinect camera is looking down at the Dracaena plant, SRPNet does not work on rectified images as it does for vines and instead takes in images from the camera as they are. We project action start locations into their image coordinates using the camera intrinsics and crop around the locations to obtain local patches to input into the network. When training SRPNet, adding another head to predict height decrease in addition to the binary classification head helps AP performance. We use a Huber loss with δ = 0.1 to provide an auxiliary loss to the network.

B.3 Data Collection

The reachable space of the robot in the Dracaena setup is roughly 57 cm × 53 cm and corresponds to a 29 × 27 grid of 2 cm cells. Similar to the vines' setup, the action starting point (x, y) is sampled from these 783 possible locations. Given that pushing from the center of the plant tends to displace it entirely, we aim to discourage such actions to prevent damage to areas where new leaves may sprout. We manually delineate a rectangular region around the plant center and do not sample or execute actions in this region. We also sample z from 3 possible values (22.5, 17.5, and 12.5 cm from the top of the plant as mentioned before), push directions from 8 possible angles, {0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4}, and push by 15 cm, clipping to the feasible space as necessary. Therefore, not all interactions have d = 15; for starting locations near the boundary, d < 15.

Table S2: Statistics for the different push directions in the collected Dracaena dataset.

Push Angle                    0       π/4     π/2     3π/4    π       5π/4    3π/2    7π/4    Full Dataset
# Interactions                257     262     295     273     249     297     289     253     2175
Mean area revealed (pixels)   1391.4  1138.7  990.4   802.0   1154.0  1110.8  1154.9  1495.2  1147.7

Since the plant wobbles during pushing, we discount the area that is revealed due to whole-plant movement. We construct plant point clouds before and after an action; then, iterative closest point (ICP) is performed to align the two point clouds. During execution, the robot body occludes parts of the plant, so we mount an Intel RealSense camera at the wrist to fill in these occluded regions to aid ICP. Area where the plant height has decreased in the aligned point cloud is considered to be revealed space.
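A minimal sketch of this labeling step is shown below, assuming the post-action point cloud has already been ICP-aligned into the pre-action frame. The grid rasterization, the 5 cm drop threshold (borrowed from the vine discussion in A.3), and helper names like `height_map` are our own assumptions for illustration.

```python
import numpy as np

def height_map(points, x_edges, y_edges):
    """Rasterize a point cloud (n, 3) into a top-down max-height grid."""
    hm = np.full((len(x_edges) - 1, len(y_edges) - 1), -np.inf)
    xi = np.clip(np.digitize(points[:, 0], x_edges) - 1, 0, hm.shape[0] - 1)
    yi = np.clip(np.digitize(points[:, 1], y_edges) - 1, 0, hm.shape[1] - 1)
    np.maximum.at(hm, (xi, yi), points[:, 2])
    return hm

def revealed_mask(cloud_before, cloud_after_aligned, cell=0.02, drop=0.05):
    """Cells whose max height dropped by more than `drop` after the push.

    cloud_after_aligned is assumed to be ICP-aligned to the pre-action
    frame, so whole-plant motion has already been factored out.
    """
    pts = np.vstack([cloud_before, cloud_after_aligned])
    x_edges = np.arange(pts[:, 0].min(), pts[:, 0].max() + cell, cell)
    y_edges = np.arange(pts[:, 1].min(), pts[:, 1].max() + cell, cell)
    before = height_map(cloud_before, x_edges, y_edges)
    after = height_map(cloud_after_aligned, x_edges, y_edges)
    return (before - after) > drop  # boolean grid of revealed cells

# Toy usage with random clouds standing in for real scans.
rng = np.random.default_rng(0)
before = rng.uniform([0, 0, 0.2], [0.5, 0.5, 0.45], size=(5000, 3))
after = before.copy()
after[:1000, 2] -= 0.2  # some leaves pushed down
print(revealed_mask(before, after).sum(), "cells revealed")
```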
B.4 Cross-entropy Method

We follow the same algorithm as the one outlined in Algorithm 1 (Section 4.3). The Dracaena CEM uses 3 iterations that each evaluate 300 candidate actions. We sample (x, y, θ, z) from uniform distributions within the robot's reachable space. The parameters are then discretized to match the data collection's distribution. The top 20% candidates that reveal the most amount of new space are chosen as elite samples, which are fitted with Gaussian distributions for the next iteration. Running one iteration takes about 7 seconds.

B.5 Comparing Tangential to Random Actions

Table S3: Effectiveness of tangential actions. We execute actions tangent to Dracaena leaves in the Tiling baseline because they reveal more space on average compared to random actions.

Method               Area revealed (pixels)
Random Action        3956.1 [2826.1, 5045.9]
Tangential Action    5125.6 [3423.8, 7147.0]

We chose horizontal actions for the Tiling baseline of vines because they on average reveal the most amount of space. In order to come up with a similar Tiling baseline for Dracaena, we observe that leaves are pushed aside more easily when the grabber moves tangent to the leaves. We verify that tangent actions are better than random actions by comparing the average space revealed upon execution of actions from the two methods. As shown in Table S3, tangential actions reveal more space than random actions, so we use them in the Tiling baseline to test the effectiveness of PPG w/ SRPNet against this strong baseline.

C Visualizations

Figure S3: Visualizations of output from our proposed SRPNet, with columns (a) RGB image, (b) height image, (c) ground truth, (d) SRPNet (no image), (e) SRPNet (no action), and (f) SRPNet (full). We show examples from the test set. The white regions in ground truth images represent space revealed by actions drawn as red arrows. Column (d) shows the prediction from SRPNet without image input (i.e., no RGB, no height), column (e) shows the prediction from SRPNet without action input, and column (f) shows predictions from the full SRPNet. The brighter the region, the higher the predicted probability of revealing space. Ground truth revealed space indicates the complexity of the task and suggests why the hand-crafted dynamics model (shown in Figure 5) performs poorly at this task. SRPNet is able to effectively use the visual information to make good predictions.

Figure S4: First five time steps (t = 0 through t = 4) of a sample execution from our method. Top row shows the RGB image before interaction, middle row shows the push action executed, and the bottom row shows the cumulative space revealed so far (C_{t+1}). Our model picks actions that are effective at revealing space. |
flyQ0v8cgC | Continual Vision-based Reinforcement Learning with Group Symmetries

Shiqi Liu1∗, Mengdi Xu1∗, Peide Huang1, Xilun Zhang1, Yongkang Liu2, Kentaro Oguchi2, Ding Zhao1
1Carnegie Mellon University, 2R&D, Toyota Motor North America, ∗equal contribution
{shiqiliu, mengdixu, peideh, xilunz, dingzhao}@andrew.cmu.edu
{yongkang.liu, kentaro.oguchi}@toyota.com

Abstract: Continual reinforcement learning aims to sequentially learn a variety of tasks, retaining the ability to perform previously encountered tasks while simultaneously developing new policies for novel tasks. However, current continual RL approaches overlook the fact that certain tasks are identical under basic group operations like rotations or translations, especially with visual inputs. They may unnecessarily learn and maintain a new policy for each similar task, leading to poor sample efficiency and weak generalization capability. To address this, we introduce a unique Continual Vision-based Reinforcement Learning method that recognizes Group Symmetries, called COVERS, cultivating a policy for each group of equivalent tasks rather than an individual task. COVERS employs a proximal-policy-gradient-based (PPO-based) algorithm to train each policy, which contains an equivariant feature extractor and takes inputs with different modalities, including image observations and robot proprioceptive states. It also utilizes an unsupervised task clustering mechanism that relies on 1-Wasserstein distance on the extracted invariant features. We evaluate COVERS on a sequence of table-top manipulation tasks in simulation and on a real robot platform. Our results show that COVERS accurately assigns tasks to their respective groups and significantly outperforms baselines by generalizing to unseen but equivariant tasks in seen task groups. Demos are available on our project page1.

Keywords: Continual Learning, Symmetry, Manipulation

1 INTRODUCTION

Quick adaptation to unseen tasks has been a key objective in the field of reinforcement learning (RL) [1, 2, 3]. RL algorithms are usually trained in simulated environments and then deployed in the real world. However, pre-trained RL agents are likely to encounter new tasks during their deployment due to the non-stationarity of the environment [4, 5]. Blindly reusing policies obtained during training can result in substantial performance drops and even catastrophic failures [6, 7, 8]. Continual RL (CRL), also referred to as lifelong RL, addresses this issue by sequentially learning a series of tasks. It achieves this by generating task-specific policies for the current task, while simultaneously preserving the ability to solve previously encountered tasks [3, 9, 10, 11, 12]. Existing CRL works that rely on task delineations to handle non-stationary initial states, dynamics, or reward functions can greatly boost task performance, particularly when significant task changes occur [10]. However, in realistic task-agnostic settings, these delineations are unknown and have to be identified by the agents. In this work, we explore how to define and detect task delineations to enhance robots' learning capabilities in task-agnostic CRL.

1Project Page: https://sites.google.com/view/rl-covers/

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 1: This example illustrates how group symmetry enhances adaptability. The robot is instructed to close drawers situated in two distinct locations given top-down images.
The optimal control policies are equivalent but mirrored because of the symmetry of the drawers' locations around the robot's position.

Our key insight is that robotic control tasks typically preserve certain desirable structures, such as group symmetries. Existing CRL approaches typically delineate task boundaries based on statistical measures, such as maximum a posteriori estimates and likelihoods [10, 11] in the original observation space. However, such implementations overlook the geometric information inherent in task representations, which naturally emerges in robotic control tasks, as demonstrated in Figure 1. Consider the drawer-closing example: conventional CRL works using image inputs would treat each mirrored configuration as a new task and learn the task from scratch. Yet, we, as humans, understand that the mirrored task configuration can be easily resolved by correspondingly reflecting the actions. Learning the mirrored task from scratch hampers positive task transfer and limits the agent's adaptivity. To address this issue, our goal is to exploit the geometric similarity among tasks in the task-agnostic CRL setting to facilitate rapid adaptation to unseen but geometrically equivalent tasks.

In this work, we propose COVERS, a task-agnostic vision-based CRL algorithm with strong sample efficiency and generalization capability obtained by encoding group symmetries in the state and action spaces. We define a task group as the set that contains equivalent tasks under the same group operation, such as rotations and reflections. We state our main contributions as follows:

1. COVERS grows a PPO-based [13] policy with an equivariant feature extractor for each task group, instead of a single task, to solve unseen tasks in seen groups in a zero-shot manner.
2. COVERS utilizes a novel unsupervised task grouping mechanism, which automatically detects group boundaries based on 1-Wasserstein distance in the invariant feature space.
3. In non-stationary table-top manipulation environments, COVERS performs better than baselines in terms of average rewards and success rates. Moreover, we show that (a) the group symmetric information from the equivariant feature extractor promotes adaptivity by maximizing the positive interference within each group, and (b) the task grouping mechanism recovers the ground truth group indexes, which helps minimize the negative interference among different groups.

2 Related Work

Task-Agnostic CRL. CRL has been a long-standing problem that aims to train RL agents adaptable to non-stationary environments with evolving world models [14, 15, 16, 17, 18, 7, 19, 20, 21, 22]. In task-agnostic CRL, where task identifications are unrevealed, existing methods have addressed the problem through a range of techniques. These include hierarchical task modeling with stochastic processes [10, 11], meta-learning [3, 23], online system identification [24, 25], learning a representation from experience [12, 26], and experience replay [17, 27]. Considering that in realistic situations the new task may not belong to the same task distribution as past tasks, we develop an ensemble model of policy networks capable of handling diverse unseen tasks, rather than relying on a single network to model dynamics or latent representations. Moreover, prior work often depends on data distribution-wise similarity or distances between latent variables, implicitly modeling task relationships.
In contrast, we aim to introduce beneficial inductive bias explicitly by developing policy networks with equivariant feature extractors to capture the geometric structures of tasks.

Figure 2: The continual learning environment setup involves four task groups, including Plate Slide, Button Press, Drawer Close, and Goal Reach. Groups arrive in a streaming fashion over timesteps.

Symmetries in RL. There has been a surge of interest in modeling symmetries in components of Markov Decision Processes (MDPs) to improve generalization and efficiency [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. The MDP homomorphic network [30] preserves equivariance under symmetries in the state-action spaces of an MDP by imposing an equivariance constraint on the policy and value network. As a result, it reduces the RL agent's solution space and increases sample efficiency. This single-agent MDP homomorphic network is then extended to the multi-agent domain by factorizing global symmetries into local symmetries [31]. SO(2)-Equivariant RL [32] extends the discrete symmetry group to the group of continuous planar rotations, SO(2), to boost the performance in robotic manipulation tasks. In contrast, we seek to exploit the symmetric properties to improve the generalization capability of task-agnostic CRL algorithms and handle inputs with multiple modalities.

3 Preliminary

Markov decision process. We consider a Markov decision process (MDP) as a 5-tuple (S, A, T, R, γ), where S and A are the state and action space, respectively. T: S × A → Δ(S) is the transition function, R: S × A → ℝ is the reward function, and γ is the discount factor. We aim to find an optimal policy π_θ: S → A parameterized by θ that maximizes the expected return E_{τ∼π_θ}[Σ_{t=0}^{H−1} γ^t r(s_t, a_t)], where H is the episode length.

Invariance and equivariance. Let G be a mathematical group and f: X → Y a mapping function. For a transformation L_g: X → X that satisfies f(x) = f(L_g[x]), ∀g ∈ G, x ∈ X, we say f is invariant to L_g. Equivariance is closely related to invariance. If we can find another transformation K_g: Y → Y that fulfills K_g[f(x)] = f(L_g[x]), ∀g ∈ G, x ∈ X, then we say f is equivariant to the transformation L_g. It is worth noting that invariance is a special case of equivariance.

MDP with group symmetries. In MDPs with symmetries [28, 29, 30], we can identify at least one mathematical group G of a transformation L_g: S → S and a state-dependent action transformation K_g^s: A → A, such that R(s, a) = R(L_g[s], K_g^s[a]) and T(s, a, s′) = T(L_g[s], K_g^s[a], L_g[s′]) hold for all g ∈ G, s, s′ ∈ S, a ∈ A.

Equivariant convolutional layer. Let G be a Euclidean group, with the special orthogonal group and reflection group as subgroups. We use the equivariant convolutional layer developed by Weiler and Cesa [40], where each layer consists of G-steerable kernels k: ℝ² → ℝ^{c_out × c_in} that satisfy k(gx) = ρ_out(g) k(x) ρ_in(g)^{−1}, ∀g ∈ G, x ∈ ℝ². ρ_in and ρ_out are the types of the input vector field f_in: ℝ² → ℝ^{c_in} and output vector field f_out: ℝ² → ℝ^{c_out}, respectively.

Equivariant MLP. An equivariant multi-layer perceptron (MLP) consists of both equivariant linear layers and equivariant nonlinearities. An equivariant linear layer is a linear function W that maps from one vector space V_in with type ρ_in to another vector space with type ρ_out for a given group G. Formally, ∀x ∈ V_in, ∀g ∈ G: ρ_out(g) W x = W ρ_in(g) x. Here we use the numerical method proposed by Finzi et al. [41] to parameterize MLPs that are equivariant to arbitrary groups.
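To make these definitions concrete, the short NumPy check below instantiates them for a reflection (an element of the D2 symmetry used later in the paper): it verifies K_g[f(x)] = f(L_g[x]) for a hand-built equivariant map and shows invariance as the special case where K_g is the identity. The specific map f is our own toy example, not a layer from COVERS.

```python
import numpy as np

# Reflection group {e, g} acting on 2-D points: g flips the x-axis.
L = {"e": np.diag([1.0, 1.0]), "g": np.diag([-1.0, 1.0])}
# Action-space transformation K_g: reflect the x-component of an action.
K = {"e": np.diag([1.0, 1.0]), "g": np.diag([-1.0, 1.0])}

def f(x):
    """A toy equivariant map: an odd elementwise function. Odd functions
    commute with sign flips, which is what equivariance under this
    reflection requires."""
    return np.tanh(x) * 3.0

rng = np.random.default_rng(0)
x = rng.normal(size=2)
for g in ("e", "g"):
    lhs = K[g] @ f(x)   # transform the output
    rhs = f(L[g] @ x)   # transform the input first
    assert np.allclose(lhs, rhs), "f is not equivariant"

# Invariance is the special case where K_g is the identity map:
assert np.isclose(np.linalg.norm(f(x)), np.linalg.norm(f(L["g"] @ x)))
print("equivariance and invariance checks passed")
```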
4 Methodology

4.1 Problem Formulation

We focus on continual learning in table-top manipulation environments, where various tasks are sequentially presented. We hypothesize that the streaming tasks can be partitioned into task groups, each containing tasks that share symmetry with one another. We adopt a realistic setting where a new task group may emerge at each episode, the total number of distinct groups remains unknown, and the groups may arrive in random order. The primary objective is to devise an online learning algorithm capable of achieving high performance across all tasks with strong data efficiency. We visualize our CRL setting with table-top manipulation environments in Figure 2.

Algorithm 1 COVERS: Continual Vision-based RL with Group Symmetries
Input: Threshold dε, initial frame number k, update interval Nu, rollout step size Ns
Output: collection of policies Π
Initialization: Current policy πcur initialized as a random policy with a policy data buffer B ← ∅, policy collection Π ← {(πcur, B)}, number of episodes n ← 0, online rollout buffer D ← ∅
1:  while task not finished do
2:    n ← n + 1
3:    if n % Nu = 0 then
4:      Rollout buffer O ← ∅                          ▷ Unsupervised Policy Assignment
5:      Roll out Ns steps with πcur and get trajectories τ = {(s0, a0, . . . , sH−1, aH−1)}
6:      Append the first k frames of each episode to the rollout buffer: O ← {(s0, . . . , sk−1)}
7:      Append the whole episode trajectories τ to the online rollout buffer D
8:      Calculate the 1-Wasserstein distances dWi(O, Bi), ∀{πi, Bi} ∈ Π (Equation 2)
9:      Get the minimum distance dWj, where j = arg min_i dWi(O, Bi)
10:     if dWj > dε then
11:       Initialize a new random policy π as well as its policy data buffer B ← O
12:       πcur ← π, Π ← Π ∪ {{π, B}}
13:     else
14:       Assign the existing policy and buffer with πcur ← πj, Bj ← Bj ∪ O
15:     Update πcur based on the online rollout buffer D (Equation 1)   ▷ Equivariant Policy Update
16:     D ← ∅
17:   else
18:     Sample an episode and append it to the online rollout buffer D

4.2 Algorithm

We present the pseudocode for COVERS, a task-agnostic continual RL method with group symmetries, in Algorithm 1. COVERS maintains a collection Π = {(π, B)}, each element of which comprises a pair of a policy π and its respective data buffer B. Each policy π independently manages one group of tasks, with B storing the initial frames of the group it oversees. At fixed time intervals, COVERS collects Ns steps in parallel under the current policy πcur and stores the first k frames from each episode in the rollout buffer O. Based on O, the algorithm then either (a) creates a new policy for an unseen group and adds it to the collection Π, or (b) recalls an existing policy from the collection Π if the group has been previously encountered. It is worth noting that we assign policies based on the initial frames of each episode rather than the full episode rollout. This is because frames corresponding to later timesteps are heavily influenced by the behavior policy and could easily lead to unstable policy assignments. Only maintaining a subset of the rollout trajectories also helps alleviate memory usage.

After the policy assignment, the selected policy πcur with parameters θ is updated based on an online rollout buffer D and the PPO method [13] with the loss in Equation 1, where Â_t is the estimated advantage, ρ_t = π_θ(a_t|s_t) / π_θold(a_t|s_t) is the importance ratio, and ε is the clip range:

L^CLIP = E_{τ∼D} [ Σ_{t=1}^{H} min( ρ_t(θ) Â_t, clip(ρ_t(θ), 1 − ε, 1 + ε) Â_t ) ].   (1)
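A minimal PyTorch rendering of Equation 1 is given below; it is a generic PPO clipped objective (negated for gradient descent), not COVERS's actual training code, and the tensor names are our own.

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    """Clipped surrogate objective from Equation 1.

    log_probs:     log pi_theta(a_t | s_t) under the current policy
    old_log_probs: log pi_theta_old(a_t | s_t) from the rollout policy
    advantages:    estimated advantages A_hat_t
    Returns the negative of L^CLIP, so minimizing it maximizes Eq. 1.
    """
    ratio = torch.exp(log_probs - old_log_probs)            # rho_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy usage with random tensors standing in for a rollout batch.
lp = torch.randn(64, requires_grad=True)
loss = ppo_clip_loss(lp, lp.detach() + 0.1 * torch.randn(64), torch.randn(64))
loss.backward()
print(float(loss))
```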
4.3 Policy Network Architecture

COVERS utilizes an equivariant policy network that comprises a policy network for predicting actions, a value network approximating values, and an equivariant feature extractor taking multiple modalities. We show the policy architecture in Figure 3 and additional details in Figure 10.

Figure 3: Equivariant policy network architecture. An equivariant feature extractor (equivariant convolutional networks for the initial and current frames, equivariant linear networks for the robot state and auxiliary information) produces equivariant features; group max pooling yields invariant features, which feed the value MLP and the distance metric, while an equivariant MLP outputs actions.

Equivariant feature extractor. In manipulation tasks, the observations typically comprise multiple modalities, such as image observations, robot proprioceptive states, and goal positions represented in vector form. To accommodate these diverse modalities, we design an equivariant feature extractor h_equi that employs an equivariant convolutional network h_eConv [40] for image processing, coupled with an equivariant linear network h_eMLP [42] to handle vector inputs. The resulting equivariant features from these two pathways are concatenated to form the output of the feature extractor. Formally, h_equi(s) = Concat(h_eConv(s), h_eMLP(s)).

Invariant value and equivariant policy. In the context of MDPs involving robotic manipulation tasks with group symmetries, it is known that the optimal value function maintains group invariance, while the optimal policy displays group equivariance [32]. To attain this, both the policy and value networks utilize a shared equivariant feature extractor, designed to distill equivariant features from observations. Subsequently, the value network leverages a group pooling layer to transform these equivariant features into invariant ones, before employing a fully connected layer to generate values. Formally, h_inv(s) = GroupMaxPooling(h_equi(s)). The policy network, on the other hand, processes the equivariant features with an additional equivariant MLP network to output actions.
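The sketch below illustrates the group-pooling step on features organized in a regular representation of a finite group: a group action permutes the |G| feature copies, so a max over that axis is unchanged. This is a self-contained toy in NumPy with our own naming, not the paper's network code.

```python
import numpy as np

# Features in the regular representation of the reflection group G = {e, g}:
# shape (channels, |G|); acting with g permutes the group axis.
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 2))

def act(g, features):
    """Regular-representation action: g = 0 is the identity, g = 1 swaps
    the two group copies (a permutation of the |G| axis)."""
    return features if g == 0 else features[:, ::-1]

def group_max_pool(features):
    """GroupMaxPooling: max over the group axis gives invariant features."""
    return features.max(axis=1)

for g in (0, 1):
    pooled = group_max_pool(act(g, feat))
    assert np.allclose(pooled, group_max_pool(feat))
print("pooled features are invariant:", group_max_pool(feat)[:3])
```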
4.4 Unsupervised Dynamic Policy Assignment

Figure 4: Calculation of the 1-Wasserstein distances between the rollout buffer O and the data buffers of existing policies, and update of the selected policy πj, whose data buffer has minimal distance to O.

In COVERS, we propose to detect different groups of tasks based on distances in the invariant feature space. Such a mechanism facilitates knowledge transfer between tasks in each group. At a fixed episode interval, COVERS selects the policy of the group whose data buffer B has the minimal distance in the invariant feature space to the rollout buffer O collected in the current environment. Note that the invariant features of both O and B are obtained through the feature extractor of π, as shown in Figure 4. Considering that O and B may have a different number of data pairs, we take a probabilistic perspective by treating those data buffers as sample-based representations of two distributions and use the Wasserstein distance to measure the distance between those two feature distributions [43]. The invariant features are obtained from the equivariant feature extractor via a group max-pooling operation, as shown in Figure 3.

Wasserstein distance on invariant feature space. Here we show how to calculate the distance to a group {πi, Bi} ∈ Π. Let X and Y be matrices constructed from invariant features extracted from the state buffer Bi of size n and the buffer O of size m: X = (X1, X2, ..., Xn)^T with Xp = h_inv,i(sp), p ∈ [n], sp ∈ Bi, and Y = (Y1, Y2, ..., Ym)^T with Yl = h_inv,i(sl), l ∈ [m], sl ∈ O. We use the 1-Wasserstein distance [44] to measure the distance between the two empirical distributions X and Y. Hence the distance between O and Bi is

dWi(O, Bi) = W1(X, Y) = min_γ ⟨γ, M⟩_F   s.t. γ1 = a, γ^T 1 = b, γ ≥ 0,   (2)

where M_{p,l} = ∥Xp − Yl∥_2, a = [1/n, . . . , 1/n], b = [1/m, . . . , 1/m], and M is the metric cost matrix.
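A small sketch of this distance and the resulting assignment rule follows, using the POT (Python Optimal Transport) package's exact solver; `ot.dist` and `ot.emd2` are real POT calls, while the feature arrays, the threshold value, and the `assign_policy` helper are our own illustrative assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def wasserstein_1(X, Y):
    """Exact 1-Wasserstein distance between two empirical feature sets
    (Equation 2), with uniform weights a = 1/n, b = 1/m and Euclidean
    ground cost M[p, l] = ||X_p - Y_l||_2."""
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    M = ot.dist(X, Y, metric="euclidean")  # metric cost matrix
    return ot.emd2(a, b, M)                # min_gamma <gamma, M>_F

def assign_policy(rollout_feats, buffer_feats_per_policy, d_eps):
    """Return the index of the closest existing group, or None to signal
    that a new policy should be spawned (lines 9-14 of Algorithm 1)."""
    dists = [wasserstein_1(rollout_feats, B) for B in buffer_feats_per_policy]
    j = int(np.argmin(dists))
    return (j, dists[j]) if dists[j] <= d_eps else (None, dists[j])

# Toy usage: two stored groups, a rollout near the second one.
rng = np.random.default_rng(0)
buffers = [rng.normal(0, 1, (50, 16)), rng.normal(5, 1, (40, 16))]
rollout = rng.normal(5, 1, (20, 16))
print(assign_policy(rollout, buffers, d_eps=3.0))  # -> (1, small distance)
```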
5 Simulation Experiments

We validate COVERS's performance in robot manipulation [45] tasks with nonstationary environments containing different objects or following different reward functions. We aim to investigate whether our method can (1) recall a stored policy when facing a seen group, as well as automatically initialize a new policy when encountering an unseen group, (2) achieve similar or better performance compared to baselines, and (3) understand the significance of key components of COVERS.

5.1 Environment

Figure 5: Image preprocessing to narrow down the sim-to-real gap (environment setup with a top-down camera, and original versus processed top-down images in the real world and in simulation).

Simulation setup. Our manipulation setup is composed of four groups of tasks. Each group contains four tasks, and all tasks within the same group exhibit rotational or reflectional symmetry with respect to each other. We build environments based on the Meta-World benchmark [45]. Meta-World features a variety of table-top manipulation tasks that require interaction with diverse objects using a Sawyer robot. We show the four groups of tasks in Figure 2, including Goal Reach for reaching a goal position, Button Press for pressing the button with the gripper, Drawer Close for closing a drawer with the gripper, and Plate Slide for sliding the plate to a goal position. The goal positions and object locations of tasks in each group are symmetrically arranged around the center of the table. In our experiments, the four task groups arrive cyclically in order, as shown in Figure 2. The task order within each group and the initial configuration of each task are randomized. We provide additional setup details in Appendix B.3.

States and actions. The agent receives four kinds of observations: an RGB image captured by a top-down camera centered over the table at each timestep, an RGB image captured by the same camera at the beginning of the episode, the robot state, including the gripper's 3D coordinates and opening angle, and auxiliary information. The RGB image at the initial step helps alleviate the occlusion problem caused by the movement of the robot. The auxiliary information contains 3D goal positions, which are only revealed to the agent in Goal Reach, since the goal locations are not visualized in the captured image; they are masked out for the other groups. To close the sim-to-real gap, we preprocess the RGB images by inpainting the robot arms, motivated by [46], with details deferred to Section B.1. A comparison of the original and processed images is visualized in Figure 5. The action is a four-dimensional vector containing the gripper's 3D position and its opening angle. Considering that we utilize two distinct robots, a Sawyer in simulation and a Kinova in the real world, such an action space and the image preprocessing mechanism help improve transferability.

5.2 Baselines and Ablations

We compare COVERS with different methods, detailed as follows. 3RL [26], an acronym for Replay-based Recurrent RL, is a state-of-the-art method in CRL with Meta-World tasks that integrates experience replay [17] and recurrent neural networks [47]. Note that we augment 3RL with a convolutional neural network (CNN) to handle image inputs. In contrast, CLEAR [17], a common baseline of CRL, only utilizes experience replay by maintaining a memory buffer to store the experience of past tasks, and oversamples the current tasks to boost performance on the current one. Equi utilizes a single policy with an equivariant feature extractor to solve all tasks. CNN utilizes a single policy with a CNN-based feature extractor as a vanilla baseline. We provide the detailed implementation of baselines and hyperparameters in Section B.

We compare with two ablation methods. COVERS-GT uses ground truth group labels to assign policies to different groups, which helps ablate the performance of our proposed policy assignment mechanism. COVERS-CNN utilizes a vanilla CNN block as the image feature extractor to help ablate the effect of using equivariant feature extractors.

6 Simulation Results and Ablations

6.1 Results

Figure 6: Training curves for COVERS and other methods. Each background color corresponds to one task group. Each curve is averaged over 5 runs, and the shaded area shows the confidence interval of 95%. COVERS shows similar performance to COVERS-GT, which utilizes additional ground truth group indices, and substantially outperforms other baselines.

Figure 7: The selected policies at each episode of COVERS. Each background color corresponds to one task group. The assigned policy indexes remain in alignment with the ground truth ones.

Dynamic policy assignments. Figure 7 shows that when the environment switches to a new group, COVERS quickly detects the change and initializes a new policy for the group. Our method also recalls the corresponding policy from the collection when facing the same group again. Overall, the dynamic policy assignments generated by COVERS align well with the ground truth group labels. However, we observe some instances where the policy assignment does not match the ground truth. This could potentially be attributed to the fact that the feature extractor of each policy may not be able to capture representative features for each group during the early stages of training. Notably, the rate of such misclassifications significantly reduces as the number of training episodes increases.

Training performance. We show the training curves of all methods in Figure 6 and the quantitative performance in Table 1, including the average success rates and mean rewards. COVERS achieves a much higher episode reward and success rate consistently across different groups than the baselines. It is worth noting that although 3RL performs worse than COVERS, it achieves better performance than baselines with implicit task representations, including Equi, CLEAR, and CNN. This indicates that the explicit task representation used by 3RL, which maps transition pairs to latent variables using an RNN, facilitates the revelation of partial task identifications, thereby enhancing performance. It underscores the significance of task-specific representations in CRL. In the early stages of training, there isn't a significant performance difference between COVERS and Equi. However, as training progresses, COVERS begins to outperform Equi. This is because COVERS avoids the problem of forgetting through the retraining of policies for each previously encountered task group. A comparison between CNN and Equi reveals that incorporating group symmetries as inductive bias within the equivariant network significantly enhances sample efficiency.
This is achieved by only optimizing the policy for the abstracted MDP of each task group.

6.2 Ablation Study

The effect of group symmetric information. COVERS-CNN, without the invariant feature extractor, demonstrates lower episodic rewards and success rates when compared with COVERS, as shown in Table 1 and Figure 6. From these results, we conclude that the equivariant feature extractor significantly enhances performance by modeling group symmetry information, introducing beneficial inductive bias through its model architecture.

Table 1: Quantitative results showing performance at convergence for different methods, including the average performance over five runs as well as the confidence interval of 95%.

                             COVERS        3RL           CLEAR         CNN           Equi          COVERS-GT     COVERS-CNN
Plate Slide   Success Rate   0.97±0.02     0.28±0.06     0.06±0.03     0.03±0.02     0.02±0.02     0.91±0.03     0.62±0.05
              Ave. Reward    344.04±12.89  101.20±7.35   65.65±2.23    23.44±1.14    64.02±5.85    337.44±13.87  232.25±14.24
Button Press  Success Rate   0.87±0.04     0.52±0.06     0.31±0.06     0.09±0.03     0.01±0.01     0.87±0.04     0.26±0.05
              Ave. Reward    323.41±3.48   260.80±6.86   138.78±12.23  91.34±9.34    121.13±7.02   330.56±2.63   181.21±10.83
Drawer Close  Success Rate   0.82±0.04     0.40±0.06     0.27±0.05     0.16±0.04     0.40±0.05     0.98±0.02     0.56±0.05
              Ave. Reward    400.09±6.18   280.62±6.39   216.08±7.68   116.33±10.1   273.26±9.67   417.38±5.6    227.3±13.0
Goal Reach    Success Rate   0.98±0.02     0.60±0.06     0.58±0.06     0.14±0.04     0.47±0.05     0.97±0.02     0.97±0.02
              Ave. Reward    483.53±1.35   322.23±17.33  293.5±16.16   151.24±14.31  306.72±20.34  488.02±0.35   480.96±1.05
Average       Success Rate   0.91±0.02     0.44±0.03     0.30±0.03     0.1±0.02      0.22±0.02     0.93±0.01     0.60±0.03
              Ave. Reward    387.77±5.02   241.21±7.39   178.5±7.58    95.59±5.59    191.28±8.23   393.35±5.19   280.43±8.49

The effect of the dynamic policy assignment module. In Figure 6, COVERS's training curve is similar to COVERS-GT's, which uses ground truth group indexes as extra prior knowledge. Table 1 shows that the performance drop due to misclassification is minor considering the small standard deviation, and COVERS's performance is within one or two standard deviations of COVERS-GT.

7 Real-world Validation

Real-world setup. Our real-world experiment setup utilizes a Kinova GEN3 robotic arm with a Robotiq 2F-85 gripper. The top-down RGB image is captured with an Intel RealSense D345f. The gripper's coordinates and opening angle are obtained through the robot's internal sensors. The real robot setups are demonstrated in Figure 8. We directly deploy the policies trained in simulation to the real world. Table 2 shows average success rates across 20 trials and indicates that our trained policies have strong generalization capability to real-world scenarios. The performance drop compared with the simulation experiments may be due to inconsistent visual features and the different scales of the robots' action spaces.

Table 2: Real-world validation results.

Task Groups     Success Rate
Plate Slide     0.45±0.15
Button Press    0.60±0.15
Drawer Close    0.65±0.15
Goal Reach      0.92±0.07

Figure 8: The real Kinova GEN3 setup with four task groups. The goal point marked in the figure is only disclosed to the agent in Goal Reach as auxiliary information.

8 Conclusion

We propose COVERS, a novel vision-based CRL framework that leverages group symmetries to facilitate generalization to unseen but equivalent tasks under the same group operations. COVERS detects group boundaries in an unsupervised manner based on invariant features and grows policies for each group of equivalent tasks instead of a single task.
We show that COVERS assigns tasks to different groups with high accuracy, has a strong generalization capability, and maintains the capability to solve seen groups, outperforming baselines by a large margin.

Limitation: One limitation of COVERS is that the memory it occupies grows linearly with the number of task groups. However, it is worth noting that COVERS still occupies less memory than maintaining a policy buffer for each task, by only storing representative data frames such as the initial frames for each task group. Another limitation is that although assuming a top-down camera with a fixed base is widely adopted in existing works, it is hard to fulfill outside of labs. It would be interesting to incorporate more general group operations, such as affine transformations and domain randomization techniques, to handle deformed images. Moreover, we only experimented with groups with equivariance structures. COVERS's performance is unknown in more complex scenarios with both equivariant and non-equivariant tasks.

ACKNOWLEDGMENT

The authors gratefully acknowledge the support from the National Science Foundation (under grant CNS-2047454) and a research grant from Toyota Motor North America. The ideas, opinions, and conclusions presented in this paper are solely those of the authors.

References

[1] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135. PMLR, 2017.
[2] A. Nagabandi, C. Finn, and S. Levine. Deep online learning via meta-learning: Continual adaptation for model-based rl. arXiv preprint arXiv:1812.07671, 2018.
[3] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347, 2018.
[4] M. Xu, P. Huang, F. Li, J. Zhu, X. Qi, K. Oguchi, Z. Huang, H. Lam, and D. Zhao. Scalable safety-critical policy evaluation with accelerated rare event sampling. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 12919–12926. IEEE, 2022.
[5] P. Huang, M. Xu, F. Fang, and D. Zhao. Robust reinforcement learning as a stackelberg game via adaptively-regularized adversarial training. arXiv preprint arXiv:2202.09514, 2022.
[6] W. Zhao, J. P. Queralta, and T. Westerlund. Sim-to-real transfer in deep reinforcement learning for robotics: a survey. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pages 737–744. IEEE, 2020.
[7] K. Khetarpal, M. Riemer, I. Rish, and D. Precup. Towards continual reinforcement learning: A review and perspectives. Journal of Artificial Intelligence Research, 75:1401–1476, 2022.
[8] P. Huang, X. Zhang, Z. Cao, S. Liu, M. Xu, W. Ding, J. Francis, B. Chen, and D. Zhao. What went wrong? closing the sim-to-real gap via differentiable causal discovery. arXiv preprint arXiv:2306.15864, 2023.
[9] K. Khetarpal, M. Riemer, I. Rish, and D. Precup. Towards continual reinforcement learning: A review and perspectives. arXiv preprint arXiv:2012.13490, 2020.
[10] M. Xu, W. Ding, J. Zhu, Z. Liu, B. Chen, and D. Zhao. Task-agnostic online reinforcement learning with an infinite mixture of gaussian processes. Advances in Neural Information Processing Systems, 33:6429–6440, 2020.
[11] H. Ren, A. Sootla, T. Jafferjee, J. Shen, J. Wang, and H. Bou-Ammar. Reinforcement learning in presence of discrete markovian context evolution. arXiv preprint arXiv:2202.06557, 2022.
[12] A. Xie, J. Harrison, and C. Finn.
Deep reinforcement learning amidst continual structured non-stationarity. In International Conference on Machine Learning, pages 11393–11403. PMLR, 2021.
[13] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[14] S. Thrun and T. M. Mitchell. Lifelong robot learning. Robotics and Autonomous Systems, 15(1-2):25–46, 1995.
[15] F. Tanaka and M. Yamamura. An approach to lifelong reinforcement learning through multiple environments. In 6th European Workshop on Learning Robots, pages 93–99, 1997.
[16] Z. Chen and B. Liu. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3):1–207, 2018.
[17] D. Rolnick, A. Ahuja, J. Schwarz, T. Lillicrap, and G. Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019.
[18] M. Xu, Z. Liu, P. Huang, W. Ding, Z. Cen, B. Li, and D. Zhao. Trustworthy reinforcement learning against intrinsic vulnerabilities: Robustness, safety, and generalizability. arXiv preprint arXiv:2209.08025, 2022.
[19] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
[20] S. Powers, E. Xing, E. Kolve, R. Mottaghi, and A. Gupta. Cora: Benchmarks, baselines, and metrics as a platform for continual reinforcement learning agents. In Conference on Lifelong Learning Agents, pages 705–743. PMLR, 2022.
[21] H. Ahn, S. Cha, D. Lee, and T. Moon. Uncertainty-based continual learning with adaptive regularization. Advances in Neural Information Processing Systems, 32, 2019.
[22] R. Traoré, H. Caselles-Dupré, T. Lesort, T. Sun, G. Cai, N. Díaz-Rodríguez, and D. Filliat. Discorl: Continual reinforcement learning via policy distillation. arXiv preprint arXiv:1907.05855, 2019.
[23] S. Sæmundsson, K. Hofmann, and M. P. Deisenroth. Meta reinforcement learning with latent variable gaussian processes. arXiv preprint arXiv:1803.07551, 2018.
[24] W. Yu, J. Tan, C. K. Liu, and G. Turk. Preparing for the unknown: Learning a universal policy with online system identification. arXiv preprint arXiv:1702.02453, 2017.
[25] M. Xu, P. Huang, Y. Niu, V. Kumar, J. Qiu, C. Fang, K.-H. Lee, X. Qi, H. Lam, B. Li, et al. Group distributionally robust reinforcement learning with hierarchical latent variables. In International Conference on Artificial Intelligence and Statistics, pages 2677–2703. PMLR, 2023.
[26] M. Caccia, J. Mueller, T. Kim, L. Charlin, and R. Fakoor. Task-agnostic continual reinforcement learning: In praise of a simple baseline. arXiv preprint arXiv:2205.14495, 2022.
[27] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato. Continual learning with tiny episodic memories. 2019.
[28] B. Ravindran and A. G. Barto. Symmetries and model minimization in markov decision processes, 2001.
[29] B. Ravindran and A. G. Barto. Approximate homomorphisms: A framework for non-exact minimization in markov decision processes. 2004.
[30] E. van der Pol, D. Worrall, H. van Hoof, F. Oliehoek, and M. Welling. Mdp homomorphic networks: Group symmetries in reinforcement learning. Advances in Neural Information Processing Systems, 33:4199–4210, 2020.
[31] E. van der Pol, H. van Hoof, F. A. Oliehoek, and M. Welling. Multi-agent mdp homomorphic networks.
arXiv preprint arXiv:2110.04495, 2021.
[32] D. Wang, R. Walters, and R. Platt. So(2) equivariant reinforcement learning. In International Conference on Learning Representations (ICLR), 2022.
[33] D. Wang, R. Walters, X. Zhu, and R. Platt. Equivariant q-learning in spatial action spaces. In Conference on Robot Learning, pages 1713–1723. PMLR, 2022.
[34] L. Zhao, X. Zhu, L. Kong, R. Walters, and L. L. Wong. Integrating symmetry into differentiable planning with steerable convolutions. In The Eleventh International Conference on Learning Representations, 2023.
[35] D. Wang, J. Y. Park, N. Sortur, L. L. Wong, R. Walters, and R. Platt. The surprising effectiveness of equivariant models in domains with latent symmetry. arXiv preprint arXiv:2211.09231, 2022.
[36] X. Zhu, D. Wang, O. Biza, G. Su, R. Walters, and R. Platt. Sample efficient grasp learning using equivariant models. arXiv preprint arXiv:2202.09468, 2022.
[37] T. Cohen and M. Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990–2999. PMLR, 2016.
[38] F. Fuchs, D. Worrall, V. Fischer, and M. Welling. Se(3)-transformers: 3d roto-translation equivariant attention networks. Advances in Neural Information Processing Systems, 33:1970–1981, 2020.
[39] M. J. Hutchinson, C. Le Lan, S. Zaidi, E. Dupont, Y. W. Teh, and H. Kim. Lietransformer: Equivariant self-attention for lie groups. In International Conference on Machine Learning, pages 4533–4543. PMLR, 2021.
[40] M. Weiler and G. Cesa. General e(2)-equivariant steerable cnns. Advances in Neural Information Processing Systems, 32, 2019.
[41] M. Finzi, M. Welling, and A. G. Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. In International Conference on Machine Learning, pages 3318–3328. PMLR, 2021.
[42] G. Cesa, L. Lang, and M. Weiler. A program to build e(n)-equivariant steerable CNNs. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=WE4qe9xlnQw.
[43] P. Huang, M. Xu, J. Zhu, L. Shi, F. Fang, and D. Zhao. Curriculum reinforcement learning using optimal transport via gradual domain adaptation. Advances in Neural Information Processing Systems, 35:10656–10670, 2022.
[44] V. I. Bogachev and A. V. Kolesnikov. The monge-kantorovich problem: achievements, connections, and perspectives. Russian Mathematical Surveys, 67(5):785, 2012.
[45] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020.
[46] S. Bahl, A. Gupta, and D. Pathak. Human-to-robot imitation in the wild. arXiv preprint arXiv:2207.09450, 2022.
[47] B. Bakker. Reinforcement learning with long short-term memory. Advances in Neural Information Processing Systems, 14, 2001.
[48] P. I. Etingof, O. Golberg, S. Hensel, T. Liu, A. Schwendner, D. Vaintrob, and E. Yudovina. Introduction to Representation Theory, volume 59. American Mathematical Soc., 2011.
[49] D. Steinley. Properties of the Hubert-Arabie adjusted rand index. Psychological Methods, 9(3):386, 2004.

A Brief Introduction to Group and Representation Theory

In this section, we briefly introduce group and representation theory [48] to help understand the policy structure in Section B.2.

Linear group representations describe abstract groups in terms of linear transformations on some vector spaces.
In particular, they can be used to represent group elements as linear transformations (matrices) on that space. A representation of a group G on a vector space V is a group homomorphism from G to GL(V), the general linear group on V. That is, a representation is a map

ρ: G → GL(V), such that ρ(g1 g2) = ρ(g1) ρ(g2), ∀g1, g2 ∈ G.   (3)

Here V is the representation space, and the dimension of V is the dimension of the representation.

A.1 Trivial Representation

The trivial representation maps any group element to the identity, i.e.,

ρ(g) = 1, ∀g ∈ G.   (4)

A.2 Irreducible Representations

A representation of a group G is said to be irreducible (shorthand: irrep) if it has no non-trivial invariant subspaces. For example, given a group G acting on a vector space V, V is said to be irreducible if the only subspaces of V preserved under the action of every group element are the zero subspace and V itself. The trivial representation is an irreducible representation and is common to all groups.

A.3 Regular Representation

Given a group G, the regular representation is a representation over a vector space V which has a basis indexed by the elements of G. In other words, if G has n elements (if G is finite), then the regular representation is a representation on a vector space of dimension n. An important fact about the regular representation is that it can be decomposed into irreducible representations in a very structured way.

A.4 Dihedral Group

The dihedral group Dn is the group of symmetries of a regular n-sided polygon, including n rotations and n reflections. Thus, Dn has 2n elements. For example, the dihedral group of a square (D4) includes 4 rotations and 4 reflections, giving 8 transformations in total.
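As a concrete companion to these definitions, the NumPy snippet below writes down matrices for the D2 group used in Section B.2 and checks the homomorphism property of Equation 3 for its trivial representation and two of its sign irreps. This is a standard textbook construction written with our own helper names, not code from the paper.

```python
import numpy as np

# The dihedral group D2 = {e, r, s, rs}: rotation r by pi and reflection s
# about the x-axis, realized as 2x2 matrices acting on the plane.
e = np.eye(2)
r = np.diag([-1.0, -1.0])
s = np.diag([1.0, -1.0])
D2 = {"e": e, "r": r, "s": s, "rs": r @ s}

# Three of D2's four 1-D irreps as explicit sign characters; the fourth
# assigns -1 to both r and s and +1 to rs.
irreps = {
    "trivial": {"e": 1, "r": 1, "s": 1, "rs": 1},
    "sign_r":  {"e": 1, "r": -1, "s": 1, "rs": -1},
    "sign_s":  {"e": 1, "r": 1, "s": -1, "rs": -1},
}

def compose(g1, g2):
    """Group multiplication: find the element whose matrix is M_g1 @ M_g2."""
    prod = D2[g1] @ D2[g2]
    return next(name for name, M in D2.items() if np.allclose(M, prod))

# Check the homomorphism property of Equation 3 for every pair and irrep.
for rho in irreps.values():
    for g1 in D2:
        for g2 in D2:
            assert rho[compose(g1, g2)] == rho[g1] * rho[g2]
print("all listed representations satisfy rho(g1 g2) = rho(g1) rho(g2)")
```

The regular representation of D2 would instead use the four 4 × 4 permutation matrices read off from the group's multiplication table; pooling over that 4-dimensional basis is what produces the invariant features in Figure 3.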
B.2 Detailed Policy Architecture
In this section, we present the detailed model architecture, including the model sizes and the types of each layer, in Figure 10.

In order to make our policy network equivariant under transformations from the finite group D2, we need to choose the appropriate representation for both the network input and output, while also ensuring that the network architecture and operations preserve this equivariance.

The image input is encoded using the trivial representation. The robot state, on the other hand, is encoded with a mixture of different representations: the gripper's position on the z-axis and the gripper's open angle are encoded with the trivial representation, since they are invariant to group actions in D2. The gripper's location on the x- and y-axes, however, is encoded with two different non-trivial irreducible representations, because these values are equivariant to group actions in D2. The value output is encoded with the trivial representation, since the optimal value function should be invariant to group actions [32]. Finally, the action output is encoded with a mixture of different representations. For actions, the gripper movement along the z-axis and the gripper's opening angle are encoded with the trivial representation, while the gripper's location on the x- and y-axes is encoded with two different non-trivial irreducible representations, aligning with the input encoding. The distance metric is encoded with the trivial representation through a group pooling operation.

Figure 10: Detailed equivariant policy network architecture. ReLU nonlinearity is omitted in the figure. A layer with suffix R indicates that the layer output is in the regular representation; a layer with suffix T indicates that the layer output is in the trivial representation; a layer with suffix 'mix' means the layer output combines different representations.
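To see what these representation choices enforce, consider a single linear layer acting on a four-dimensional state (x, y, z, gripper angle). The following numpy sketch is our own illustration: group averaging is one generic way to obtain equivariant linear maps, whereas the network in Figure 10 is built from steerable layers with regular and trivial representations; the sketch only demonstrates the constraint, not the paper's construction.

```python
# Illustrative sketch: enforcing D2-equivariance on a linear layer by group
# averaging. The input (x, y, z, angle) and output (dx, dy, dz, d_angle)
# transform by sign flips on their x and y entries, matching the text above:
# trivial representation on z and the gripper angle, non-trivial irreps on x, y.
import numpy as np

# rho(g) for the four elements of D2 acting on (x, y, z, angle).
reps = [np.diag([sx, sy, 1.0, 1.0]) for sx in (1, -1) for sy in (1, -1)]

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

# Project W onto the equivariant subspace:
# W_eq = 1/|G| * sum_g rho(g)^-1 W rho(g); here rho(g)^-1 = rho(g).
W_eq = sum(R @ W @ R for R in reps) / len(reps)

# Equivariance check: transforming the input before the layer equals
# transforming the output after it, i.e. W_eq rho(g) == rho(g) W_eq.
for R in reps:
    assert np.allclose(W_eq @ R, R @ W_eq)
```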
B.3 Randomness of Tasks
In each task across all groups, we introduce randomness into the object's initial position and goal position by adding a perturbation value sampled from a uniform distribution with range (−0.02, 0.02) in meters. We list the perturbed features as follows:
• Goal Reach: the xyz coordinates of the goal position.
• Button Press: the xy coordinates of the button's initial location.
• Drawer Close: the xy coordinates of the drawer's initial location.
• Plate Slide: the xy coordinates of the plate's destination.

B.4 Implementation of CLEAR
The CLEAR algorithm [17] addresses the challenge of continual learning by storing data from preceding tasks in a buffer that is subsequently used for retraining. This method effectively decelerates forgetting by emulating a continual learning setting. The specific network architecture for CLEAR is illustrated in Figure 11. To enable CLEAR to process both images and the robot state as input, we introduce a feature extractor that integrates a CNN and an MLP network. This composite feature extractor is designed to contain a similar number of learnable parameters to our equivariant feature extractor.

Figure 11: Network architecture for CLEAR. In (a) we show the network architecture of the actor network and the critic network. In (b) we show the structure of the feature extractor, which consists of both a CNN network and an MLP network. ReLU nonlinearity is omitted in the figure.

B.5 Implementation of 3RL
The 3RL algorithm [26] can be seen as an improved version of CLEAR, wherein additional historical data is provided to the actor and critic by a dedicated context encoder. The historical data consists of tuples (si, ai, ri), and the context encoder extracts task specificities from the history with an RNN. The specific network architecture for 3RL is illustrated in Figure 12.

Figure 12: Network architecture for 3RL. In (a), we illustrate the structure of both the actor and critic networks, whereas (b) highlights the configuration of the context encoder, comprising a feature extractor and GRUs. Note that the feature extractor has the same architecture as in the CLEAR algorithm, shown in Figure 11.

B.6 Hyperparameters
We show the hyperparameters of our proposed COVERS in Table 3, and the hyperparameters of the baselines in Table 4.

Table 3: COVERS hyperparameters.

Hyperparameter                          Value
Wasserstein distance threshold dε       1.0
Initial frame number k                  4
Update interval Nu                      1000
Rollout buffer size Ns                  1000
Batch size                              64
Number of epochs                        8
Discount factor                         0.99
Optimizer learning rate                 0.0003
Likelihood ratio clip range ε           0.2
Advantage estimation λ                  0.95
Entropy coefficient                     0.001
Max KL divergence                       0.05

Table 4: CLEAR and 3RL hyperparameters.

Hyperparameter                          Value
Common hyperparameters
  Replay buffer size                    200000
  Discount factor                       0.95
  Burn-in period                        20000
  Warm-up period                        1000
  Batch size                            512
  Gradient clipping range               (−1.0, +1.0)
  Learning rate                         0.0003
  Entropy regularization coefficient    0.005
3RL-specific hyperparameters
  RNN number of layers                  1
  RNN context size                      30
  RNN context length                    5

C Additional Ablation Study

C.1 Sensitivity Analysis of Different Metrics
In Section 4.4, we used the 1-Wasserstein distance to measure the distance between the two feature distributions. In this section, we compare the 1-Wasserstein distance with two other metrics: the Euclidean distance and the Mahalanobis distance. We present the qualitative results in Figures 13a, 13b, and 13c.

Adjusted Rand Index (ARI). To evaluate how the different metrics affect the algorithm's performance, besides evaluating the converged performance, we further evaluate the Adjusted Rand Index (ARI) [49] between the policy IDs and the group IDs over the course of training. The ARI value measures the similarity between two clusterings by considering all pairs of samples: it counts pairs assigned to the same or different clusters in both the predicted and true clusterings. Here we use the group index as the ground-truth label for each episode, and the policy index as the predicted label; we then compute the ARI value between the two clusterings over the entire training process. An ARI value closer to 1.0 indicates a more accurate clustering result.
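As a quick illustration (ours, with made-up labels, not data from this work), the ARI between episode-wise group indices and policy indices can be computed directly with scikit-learn:

```python
# Toy example of the ARI computation described above (labels are made up).
from sklearn.metrics import adjusted_rand_score

group_ids  = [0, 0, 0, 1, 1, 1, 2, 2, 2]  # ground-truth task group per episode
policy_ids = [1, 1, 1, 0, 0, 2, 2, 2, 2]  # policy assigned per episode

# ARI is invariant to label permutation; 1.0 means identical clusterings.
print(adjusted_rand_score(group_ids, policy_ids))  # ~0.64 for these labels
```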
Euclidean distance (or L2 norm). The Euclidean distance between points p and q is

$d(p, q) = \sqrt{(p - q)^2}$.  (5)

Let X and Y be matrices constructed from the invariant features extracted from the state buffer Bi of size n and the buffer O of size m, as described in Section 4.4. To compute the Euclidean distance between the features of the state buffer Bi and the buffer O, we compute the mean vectors x of X and y of Y; the distance is then simply $d(x, y) = \sqrt{(x - y)^2}$. We test five different thresholds dε and run three random seeds for each threshold. The quantitative converged performance is shown in Table 5.

Table 5: Quantitative results showing performance at convergence for different Euclidean distance thresholds dε, including the average over three runs and the 95% confidence interval.

Threshold dε                   0.3            0.4            0.5            0.6            0.7
Plate Slide    Success Rate    0.91±0.04      0.92±0.04      0.93±0.04      0.97±0.03      0.96±0.03
               Ave. Reward     332.27±19.4    338.96±19.51   342.46±19.2    346.63±20.03   348.02±20.68
Button Press   Success Rate    0.83±0.06      0.73±0.07      0.77±0.07      0.95±0.03      0.97±0.02
               Ave. Reward     323.49±6.92    304.63±9.71    298.75±10.67   334.18±2.08    332.27±3.0
Drawer Close   Success Rate    0.93±0.04      0.98±0.02      0.73±0.07      0.87±0.05      0.98±0.02
               Ave. Reward     413.04±9.03    432.22±6.14    387.01±11.63   409.37±10.19   454.7±3.25
Goal Reach     Success Rate    0.99±0.01      0.99±0.01      0.99±0.01      0.99±0.01      0.82±0.06
               Ave. Reward     488.43±0.18    486.82±0.59    488.89±0.11    488.82±0.08    480.46±2.35
Average        Success Rate    0.92±0.02      0.91±0.02      0.86±0.03      0.95±0.02      0.94±0.02
               Ave. Reward     389.31±7.77    390.66±8.11    379.28±8.4     394.75±7.48    403.86±7.42

Mahalanobis distance. The Mahalanobis distance measures the distance between a point x and a distribution D. It is defined as

$d_M(x) = \sqrt{(x - \mu)^\top \Sigma^{-1} (x - \mu)}$,  (6)

where x is the point, μ is the mean of the distribution D, and Σ is the covariance matrix of D. We compute the Mahalanobis distance between the mean vector y of Y and the distribution of X. Similarly, we test five different thresholds dε and run three random seeds for each dε. The quantitative converged performance is shown in Table 6.

Table 6: Quantitative results showing performance at convergence for different Mahalanobis distance thresholds dε, including the average over three runs and the 95% confidence interval.

Threshold dε                   3.0            4.0            5.0            6.0            7.0
Plate Slide    Success Rate    0.94±0.04      0.96±0.03      0.95±0.03      0.9±0.05       0.31±0.07
               Ave. Reward     332.75±20.91   330.25±20.56   329.31±22.13   333.61±19.43   138.03±22.72
Button Press   Success Rate    0.81±0.06      0.83±0.06      0.7±0.07       0.94±0.04      0.24±0.07
               Ave. Reward     323.0±3.83     321.57±4.34    306.89±7.62    336.77±0.76    301.48±6.24
Drawer Close   Success Rate    0.82±0.06      0.85±0.06      0.78±0.07      0.72±0.07      0.62±0.08
               Ave. Reward     392.73±9.48    420.41±8.77    399.06±10.31   390.82±10.78   363.15±11.7
Goal Reach     Success Rate    0.99±0.01      0.96±0.03      0.99±0.01      0.99±0.01      0.9±0.05
               Ave. Reward     487.85±0.27    486.54±1.15    488.54±0.16    486.05±1.46    484.0±2.26
Average        Success Rate    0.9±0.02       0.91±0.02      0.86±0.03      0.89±0.02      0.52±0.04
               Ave. Reward     384.09±7.84    389.69±7.88    380.95±8.54    386.81±7.44    321.66±11.96

Wasserstein distance. In Table 1 we analyzed the performance of COVERS under the 1-Wasserstein distance with a threshold of dε = 1.0. Here we test five different thresholds dε and run three random seeds for each dε. The converged performance is shown in Table 7.

Table 7: Quantitative results showing performance at convergence for different 1-Wasserstein distance thresholds dε, including the average over three runs and the 95% confidence interval.

Threshold dε                   0.6            0.8            1.0            1.2            1.4
Plate Slide    Success Rate    0.49±0.1       0.94±0.04      0.95±0.04      0.95±0.04      0.89±0.06
               Ave. Reward     196.34±31.64   339.21±26.44   340.1±24.42    352.65±24.72   336.82±24.02
Button Press   Success Rate    0.48±0.1       0.83±0.07      0.88±0.06      0.94±0.04      0.75±0.08
               Ave. Reward     272.01±13.25   316.08±9.41    324.98±4.54    326.22±5.81    298.83±13.23
Drawer Close   Success Rate    0.97±0.03      0.89±0.06      0.91±0.05      0.88±0.06      0.76±0.08
               Ave. Reward     433.62±7.79    410.59±13.92   411.99±11.26   425.68±8.38    393.62±12.99
Goal Reach     Success Rate    0.98±0.02      0.98±0.02      0.95±0.04      0.98±0.02      0.98±0.02
               Ave. Reward     488.72±0.11    487.93±0.16    486.9±0.96     487.62±0.4     488.05±0.36
Average        Success Rate    0.74±0.04      0.92±0.03      0.94±0.02      0.95±0.02      0.86±0.03
               Ave. Reward     347.67±14.55   388.45±10.23   390.99±9.29    398.05±9.12    379.33±10.31

Analysis. Our results show that the Wasserstein distance is less sensitive to the hyperparameter and performs well across different parameter values. Moreover, the L2 distance can yield satisfactory performance with the optimal hyperparameter selection. These observations suggest that the invariant features matter more for group identification than the particular choice of metric.
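For concreteness, the three metrics can be computed from two feature buffers in a few lines. The following sketch is our own illustration (feature dimensions and buffer contents are placeholders), not the COVERS implementation; for equal-size empirical samples with uniform weights, the 1-Wasserstein distance reduces to an optimal one-to-one matching, solvable exactly with the Hungarian algorithm.

```python
# Illustrative sketch of the three metrics compared in C.1 (placeholder data).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist, mahalanobis

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(128, 8))   # invariant features from buffer B_i
Y = rng.normal(0.5, 1.0, size=(128, 8))   # invariant features from buffer O

# Euclidean distance between the two mean vectors, Eq. (5).
d_l2 = np.linalg.norm(X.mean(0) - Y.mean(0))

# Mahalanobis distance of Y's mean to the distribution of X, Eq. (6).
VI = np.linalg.inv(np.cov(X, rowvar=False))
d_mah = mahalanobis(Y.mean(0), X.mean(0), VI)

# Empirical 1-Wasserstein distance between equal-size samples via an
# exact minimum-cost one-to-one matching of the two point clouds.
cost = cdist(X, Y)                         # pairwise Euclidean costs
rows, cols = linear_sum_assignment(cost)
d_w1 = cost[rows, cols].mean()
```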
C.2 The Effect of Buffer Size
We conduct an ablation study to show the effect of the buffer size. We select five buffer sizes: 32, 64, 128, 256, and 512. We show the results in Table 8 and Figure 13d. Our results show that COVERS achieves the best performance when the buffer size equals 128.

Table 8: Quantitative results showing performance at convergence for different buffer sizes, including the average over three runs and the 95% confidence interval.

Buffer size                    32             64             128            256            512
Plate Slide    Success Rate    0.61±0.08      0.94±0.04      0.95±0.04      0.96±0.03      0.94±0.04
               Ave. Reward     240.42±25.11   310.47±20.19   340.1±24.42    331.52±21.91   347.38±20.26
Button Press   Success Rate    0.55±0.08      0.94±0.04      0.88±0.06      0.49±0.08      0.77±0.07
               Ave. Reward     269.27±14.27   333.79±1.66    324.98±4.54    221.73±23.32   311.55±6.15
Drawer Close   Success Rate    0.9±0.05       0.73±0.07      0.91±0.05      0.86±0.05      0.79±0.06
               Ave. Reward     380.41±13.72   378.98±11.03   411.99±11.26   398.73±8.54    394.83±10.06
Goal Reach     Success Rate    0.96±0.03      0.99±0.01      0.95±0.04      0.99±0.01      0.99±0.01
               Ave. Reward     486.83±0.74    488.62±0.1     486.9±0.96     488.35±0.16    488.3±0.43
Average        Success Rate    0.76±0.03      0.9±0.02       0.94±0.02      0.83±0.03      0.88±0.03
               Ave. Reward     344.23±11.17   377.96±7.95    390.99±9.29    360.08±11.37   385.52±7.91

Figure 13: (a)-(d) ARI values for analyzing the sensitivity to the different metrics and to the buffer size.

D Additional Training Results

D.1 Evaluation over Different Levels of Camera Perturbation
In this section, we present the converged performance of our algorithm under different camera perturbation levels. For experiments with perturbation distance dp, we randomly shift the xy coordinates by adding noise sampled from a uniform distribution with range (−dp, dp). The results are shown in Table 9. We choose five perturbation levels: dp = 0.01, 0.02, 0.03, 0.04, 0.05 in meters. Our results show that our trained policy can still achieve high performance even with large camera position shifts of dp = 0.05, indicating strong robustness to camera perturbations. The task that suffers the most when the camera position shifts is Button Press, which we conjecture may be due to the contact-rich nature of the task. It is worth noting that even when the equivariance between tasks is imperfect due to the camera position shift, our policy can still achieve success rates higher than 0.5 in 3 out of 4 tasks, and an average success rate of 0.59, when dp = 0.05.

Table 9: Evaluation over different levels of camera perturbation.

Perturbation level dp (m)      0.01           0.02           0.03           0.04           0.05
Plate Slide    Success Rate    0.92±0.05      0.86±0.07      0.78±0.07      0.8±0.07       0.69±0.08
               Ave. Reward     299.06±23.95   283.54±22.36   261.21±22.31   264.79±22.13   252.06±21.32
Button Press   Success Rate    0.45±0.09      0.17±0.07      0.24±0.07      0.14±0.06      0.08±0.05
               Ave. Reward     254.15±18.14   143.74±17.59   162.13±18.8    133.01±16.47   144.12±15.86
Drawer Close   Success Rate    0.89±0.06      0.75±0.08      0.67±0.08      0.67±0.08      0.6±0.09
               Ave. Reward     365.87±18.03   313.79±19.11   287.09±20.57   270.05±20.28   235.91±21.67
Goal Reach     Success Rate    0.98±0.02      0.98±0.02      0.98±0.02      0.98±0.02      0.98±0.02
               Ave. Reward     488.06±0.7     488.56±0.27    488.61±0.23    488.63±0.13    488.49±0.45
Average        Success Rate    0.80±0.04      0.69±0.04      0.66±0.04      0.65±0.04      0.59±0.04
               Ave. Reward     351.78±11.79   307.41±13.93   299.76±13.86   289.12±14.27   280.14±14.24

D.2 Additional Training Setups
Training devices. We conducted the COVERS training and ablation studies on a cluster of servers with different hardware configurations. The CPU models include the AMD Ryzen 9 3900X, AMD Ryzen 9 5900X, and AMD Ryzen 9 7900X. The GPU models include the NVIDIA GeForce RTX 2080 Ti, NVIDIA GeForce RTX 3090, and NVIDIA GeForce RTX 4090.

Training time and memory consumption. Here, we report the total time to train the different methods. Note that the absolute training time depends strongly on the server hardware, and our results are best understood when comparing the methods relative to each other. The average training time for COVERS, CNN, Equi, COVERS-GT, and COVERS-CNN is roughly the same, about 34 hours, while CLEAR and 3RL take 10 and 30 hours to train, respectively. The memory consumption of COVERS, CNN, Equi, COVERS-GT, and COVERS-CNN is also roughly the same, about 10 gigabytes. For 3RL and CLEAR, the memory consumption is about 250 gigabytes, since they are off-policy algorithms with a large replay buffer that stores state-action pairs. This can be problematic in our setup, since we use images as part of the state, which dramatically increases memory consumption.

D.3 Qualitative Visualization using Training Rewards
Similar to Figure 6, which shows the success rates along training, we provide a qualitative visualization using the task rewards in Figure 14.

Figure 14: Training curves for COVERS and the other methods. Each background color corresponds to one task group. Each curve is averaged over 5 runs, and the shaded area shows the variance. COVERS shows similar performance to COVERS-GT, which utilizes additional ground-truth group indices, and substantially outperforms the other baselines. |
VH6WIPF4Sj | Predicting Object Interactions with Behavior Primitives: An Application in Stowing Tasks

Haonan Chen, Yilong Niu∗, Kaiwen Hong∗, Shuijing Liu, Yixuan Wang, Yunzhu Li, Katherine Driggs-Campbell
University of Illinois, Urbana-Champaign
{haonan2, yilongn2, kaiwen2, sliu105, yixuan22, yunzhuli, krdc}@illinois.edu

Abstract: Stowing, the task of placing objects in cluttered shelves or bins, is a common task in warehouse and manufacturing operations. However, this task is still predominantly carried out by human workers, as stowing is challenging to automate due to the complex multi-object interactions and the long-horizon nature of the task. Previous works typically involve extensive data collection and costly human labeling of semantic priors across diverse object categories. This paper presents a method to learn a generalizable robot stowing policy from a predictive model of object interactions and a single demonstration with behavior primitives. We propose a novel framework that utilizes Graph Neural Networks (GNNs) to predict object interactions within the parameter space of behavior primitives. We further employ primitive-augmented trajectory optimization to search the parameters of a predefined library of heterogeneous behavior primitives to instantiate the control action. Our framework enables robots to proficiently execute long-horizon stowing tasks with a few keyframes (3-4) from a single demonstration. Despite being trained solely in simulation, our framework demonstrates remarkable generalization capabilities. It efficiently adapts to a broad spectrum of real-world conditions, including various shelf widths, fluctuating quantities of objects, and objects with diverse attributes such as sizes and shapes.

Keywords: Robotic Manipulation, Model Learning, Graph-Based Neural Dynamics, Multi-Object Interactions

1 Introduction

Stowing, defined as relocating an object from a table to a cluttered shelf, is one of the dominant warehouse activities. In stowing, an agent is required to pick up an object from a table. The agent must then actively create free space within the shelf before inserting the object from the table. A successful stow execution is characterized by the placement of all objects with poses within predefined thresholds. While stowing can be performed effortlessly by humans, it remains challenging to automate with robots. The difficulty stems from the long-horizon nature, the multi-object interactions, and the variety of objects and configurations involved in stowing tasks.

The challenge of long-horizon stowing tasks is due not only to the nature and variety of the objects involved, but also to several inherent constraints of existing methods. First, the nature of these tasks requires determining the characteristics of contacts and frictions, a task that presents considerable difficulties. Conventional first-order models fall short in capturing the physical effects, and identifying the specific parameters of contact and friction is challenging [1]. Thus, designing a controller for such tasks becomes a tedious and laborious process. Second, existing methodologies, including manually pre-programmed systems and recent advancements in category-level object manipulation, exhibit notable limitations. Classical pre-programmed systems struggle with adaptability, unable to efficiently handle variations introduced by different arrangements of objects on the shelf. Meanwhile, recent strategies for category-level object manipulation are curbed by the need for expensive data collection and human labeling, thus failing to provide a scalable solution [2]. Additionally, pure learning-based methods, such as Deep Reinforcement Learning (DRL), also present drawbacks in terms of extensive training time and poor data efficiency [3, 4], making learning from scratch on real robots impractical for long-horizon tasks.

∗Equal contribution
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

To address these challenges, we propose a framework that uses Graph Neural Networks (GNNs) to predict object interactions within the parameter space of behavior primitives. When trained with various situations in the simulator, GNNs can learn to model the forward dynamics associated with interactions of rigid objects. Instead of explicitly determining contacts and frictions, our GNN framework is designed to learn these underlying interactions during the training process. Thus, we eliminate the need for explicit detection and intricate calculations related to contacts and forces. Our framework also applies primitive-augmented trajectory optimization to search the parameters of a predefined library of heterogeneous skills or behavior primitives. The incorporation of behavior primitives enables our policy to handle tasks of significant complexity with improved efficiency.

Figure 1: The robot places an object into a cluttered shelf by exploiting object interactions (push and transport). It uses the grasped object to manipulate other objects within the shelf through pushing and sliding actions and finally places all the objects in the appropriate location.

We make three key contributions: (1) We introduce a novel model-based imitation learning framework to learn from minimal demonstrations, which enables robots to acquire complex skills. (2) We create a comprehensive stowing benchmark for long-horizon manipulation, which is highly prevalent in both industrial and household applications. (3) We demonstrate the effectiveness and generalization of our framework across a wide range of real-world experiments in stowing tasks.

2 Related Works

One-shot Imitation Learning in Manipulation: Recent advancements in imitation learning aim to solve unseen tasks with minimal demonstrations [5, 6, 7, 8, 9]. Typical approaches use a two-phase pipeline: a meta-learning phase to train a general policy from numerous tasks and a fine-tuning phase to optimize the policy for a specific task using a single demonstration [9]. Certain replay-based methods employ a strategy of estimating a 'bottleneck pose', splitting the trajectories into two segments, and then replaying the demonstration trajectory from the bottleneck [6]. Other techniques emphasize learning object representations and identifying correspondence between specific objects [5, 10] or objects within the same category [8]. However, these methods primarily handle relatively short-horizon tasks and face difficulties in modeling object dynamics.

Model Learning in Robotic Manipulation: Dynamics models have emerged as crucial components in robotic systems, with the capability to predict future world states based on current states and actions [11, 12, 13, 14, 15, 16]. These models can handle a variety of input representations, ranging from images and latent vectors to point sets and graph representations. Notably, graph representations have shown an exceptional ability to capture interactions among objects [17, 18, 19, 20, 21, 22, 23]. The ability to model interactions between various objects and the robot's end effector is pivotal for our research, leading us to use a graph to model interactions between objects in our system. Tekden et al. [24, 25] introduce a graph-based modeling approach that places emphasis on object-level representation, leveraging GNNs to capture interactions among multiple objects. Similarly, RD-GNN [26] uses an MLP for action encoding and treats each object as a unique node in the graph, concentrating on inter-object relations. While both approaches provide broad perspectives on object interactions, our framework diverges by representing robot movements as point clouds and objects with multiple particles. This approach offers a more granular understanding of actions and interactions, enhancing the accuracy of object movement predictions.

Figure 2: Overview of the proposed framework. (a) Dynamics training: a particle-based representation characterizes the object state; the object state predicted after the executed robot actions is compared with the ground-truth object state using the MSE loss to train the GNN. (b) Policy execution: for each skill, we apply random shooting to sample parameters within the action parameter space, utilizing the GNN to predict object movement; we then select the action that brings us closest to the desired state. Each skill is executed in sequence.

Long-Horizon Planning in Robotic Manipulation: Addressing long-horizon manipulation problems presents considerable complexity. Hierarchical reinforcement learning (HRL) approaches address this issue by using a high-level policy to select optimal subtasks and low-level policies to execute them [27]. However, these methods face the challenges of the sim2real gap and the difficulty of real-world data collection, hampering their real-world transferability [28]. An alternative approach by Di Palo et al. [29] augments replay-based methodologies by integrating primitive skills for routine long-horizon tasks, though their task configurations lack versatility. Integrated task and motion planning (ITMP) combines discrete and continuous planning [30], blending high-level symbolic with low-level geometric reasoning for extended robotic action reasoning. Recent efforts by Lin et al. [31] have explored sequential manipulation of deformable objects through a differentiable simulator and trajectory optimization. However, this work is only validated in simulation, and real-world deployment is non-trivial due to the difficulty of obtaining gradients of state changes. To address this, we propose to apply trajectory optimization to GNN-based forward dynamics prediction modules, incorporating heterogeneous behavior primitives.

3 Approach

In our proposed system, a GNN first predicts the system dynamics. Then, a primitive-augmented trajectory optimization method achieves subgoals from a single demonstration, as shown in Figure 2. Initially, the object state is represented as particles. We then train the GNN with the MSE loss between the predicted outcome following the robot actions and the ground-truth state. We use random shooting to explore the action parameter space and use the GNN's predictions to select the optimal action that aligns most closely with our desired state. The skills are executed in a sequential manner, guiding our system to accomplish its tasks efficiently.

3.1 Learning Forward Dynamics via GNN

Graph Construction: We define a dynamics function that describes a rigid-body system state S with M objects and N particles. We model each rigid object as a collection of uniformly distributed particles, offering a representation that is flexible and adaptable to variations in object shape, size, and geometry. The dynamics function is expressed as Φ : S × A → T, where A represents the skill type and its associated parameters, and T denotes M rigid transformations containing the translation and rotation of each dynamic object. The future state of an object can be determined by applying a sequence of these rigid transformations. We define the graph state st = (Ot, Et), where the graph's vertices Ot represent the objects' particles, and the edges Et denote the relations between particles. Each vertex oi,t = ⟨xi,t, ci,t⟩ consists of the particle's position xi,t and the object's attributes ci,t, including its dynamism (dynamic or static), the object it belongs to, its offset from the rigid body's center, and gravity. Edges Et are formed when the distance between two vertices is less than a specified threshold. In our work, the relations are characterized by the physical interactions and geometric proximity between the particles representing objects. Specifically, we introduce the following relations to capture the complex dynamics of multi-object interactions: (1) Intra-object relations: between different particles within the same object or across different objects. (2) Gripper-to-object relations: between particles from the objects and the robot's gripper. The edge relations are represented as ek = ⟨ik, jk, ck⟩, in which ik and jk denote the indices of the receiver and sender particles, respectively, k denotes the edge index, and ck denotes the nature of the relationship (intra-object or gripper-to-object). Since our focus is predicting the motions of dynamic objects, we restrict node building to vertices associated with these dynamic objects. However, to effectively model the interactions between dynamic and static objects (i.e., the shelf and table), we incorporate the particles of static objects during the construction of edges.
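To make the edge-construction rule concrete, the following is a small illustrative sketch of our own (function and variable names are hypothetical, not the authors' code); the default thresholds are the transport-primitive values quoted in Appendix B.1.

```python
# Illustrative sketch of the graph construction above (not the paper's code).
import numpy as np
from scipy.spatial.distance import cdist

def build_edges(dyn_pts, static_pts, gripper_pts, r_obj=0.15, r_grip=0.175):
    """Return a list of (receiver, sender, relation_type) edges.

    Nodes are built only for dynamic-object particles; static and gripper
    particles participate as senders so their influence is still modeled.
    relation_type: 0 = intra-object relation, 1 = gripper-to-object relation.
    """
    edges = []
    # Intra-object relations: dynamic-dynamic and dynamic-static pairs
    # whose distance falls below the threshold r_obj.
    others = np.concatenate([dyn_pts, static_pts], axis=0)
    close = cdist(dyn_pts, others) < r_obj
    for i, j in zip(*np.nonzero(close)):
        if j != i:  # skip self-edges among dynamic particles
            edges.append((i, j, 0))
    # Gripper-to-object relations within the threshold r_grip.
    close_g = cdist(dyn_pts, gripper_pts) < r_grip
    for i, j in zip(*np.nonzero(close_g)):
        edges.append((i, len(others) + j, 1))
    return np.array(edges)
```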
The skills are executed in asequential manner, guiding our system to accomplish its tasks efficiently.3.1 Learning Forward Dynamics via GNNGraph Construction: We define a dynamics function that describes a rigid-body system state SwithMobjects and Nparticles. We model each rigid object as a collection of uniformly distributedparticles, offering a representation that is flexible and adaptable to variations in object shape, size,and geometry. The dynamics function is expressed as Φ :S × A → T .Arepresents the skill typeand its associated parameters, and Tdenotes Mrigid transformations containing the translation androtation for each dynamic object. The future state of an object can be determined by applying asequence of these rigid transformations. We represent each rigid objects as uniformly distributedparticles as such representations are versatile to object shape, object size, and object geometry.Each object in the system is represented by its respective particles. We define the graph state st=(Ot,Et), where the graph’s vertices Otrepresent an object’s particles, and edges Etdenotes therelations between particles. Each vertex oi,t=⟨xi,t, ci,t⟩consists of the particle’s position xi,t3and object’s attributes ci,tincluding its dynamism (dynamic or static), the object it belongs to, itsoffset from the rigid body’s center, and gravity. Edges Etare formed when the distance betweentwo vertices is less than a specified threshold. In our work, the relations are characterized by thephysical interactions and geometric proximity between the particles signifying objects. Specifically,we introduce the following relations to adeptly encapsulate the complex dynamics in multi-objectinteractions: (1) Intra-object relations : Between different particles within the same object or acrossdifferent objects. (2) Gripper-to-object relations : Between particles from the objects and therobot’s gripper. The edge relations are represented as ek=⟨ik, jk, ck⟩, in which ik, jkdenotethe indices of the receiver and sender particles, respectively. The edge index is denoted by k, andnature of the relationship such as intra-object relation, or gripper-to-object relation) is denoted by ck.Since our focus is predicting the motions of dynamic objects, we restrict node building to verticesassociated with these dynamic objects. However, to effectively model the interactions betweendynamic and static objects (i.e, shelf and table), we choose to incorporate the particles of staticobjects during the construction of edges.Message Passing: The features of all vertices and edges are processed through encoder multi-layer perceptrons (MLPs), resulting in latent vertex representations denoted as hOiand latent edgerepresentations represented as hEi,j. At each time step, we apply the following message function tothe vertex feature and edge feature for several iterations to handle signal propagation.hEi,j←ρE(hEi,j, hOi, hOj), hOi←ρO(hOi,XjhEi,j). (1)where the message passing functions ρEandρOare MLPs. Subsequently, we apply an MLP decoderto the vertex feature processor’s final layer output. The rigid transformation for each individualobject is determined by calculating the mean of the decoder outputs corresponding to that object.Representing Action as Particles: The control action, represented by the gripper’s planned posi-tion and motion, is defined by particles oi,t=⟨xi,t, vi,t, ci,t⟩, where xi,tdenotes the current positionof gripper, vi,tdenotes the planned motion of the gripper. 
Representing Action as Particles: The control action, represented by the gripper's planned position and motion, is defined by particles oi,t = ⟨xi,t, vi,t, ci,t⟩, where xi,t denotes the current position of the gripper and vi,t denotes the planned motion of the gripper. The particles associated with the gripper are subsequently encoded by the encoder. Additionally, we predict the future positions of the gripper. The discrepancy between these predicted positions and the actually achieved positions serves as an auxiliary loss, which helps the GNN better understand the inherent physical laws and constraints.

3.2 Control with the Learned Dynamics

In this section, we discuss the design of the behavior primitives and the trajectory optimization algorithm used to generate parameters for the different skills.

Behavior Primitives: We introduce behavior primitives as a higher level of temporal abstraction to facilitate efficient planning. The behavior primitives simplify the task space by generating key poses for the system, which subsequently executes actions using Operational Space Control (OSC) [32] as a lower-level controller. The GNN is used only to predict the system's state at the key poses of the behavior primitives, which significantly reduces the number of forward passes and the cumulative prediction error over time. Behavior primitives serve as building blocks and can be easily extended for various manipulation tasks. We further specify a maximum execution time Tskl for each behavior primitive. Our system integrates a collection of three primitives, encompassing both prehensile and non-prehensile motions. The primitives and their associated parameters are described as follows: (1) Sweeping: The robot sweeps objects on the shelf using its end-effector, aiming to stand them upright. Sweeping is parameterized by the starting offset y in the shelf direction, the sweeping height h, the sliding distance d, and the angle of gripper rotation θ during the sweep. (2) Pushing: Pushing involves the robot nudging the object to establish potential grasp poses. The starting push position (x, y) and the distance of the push d are the parameters for pushing. (3) Transporting: The robot picks up the object from the table, places it in the shelf, and, if necessary, adjusts its position through sliding. This skill is defined by the starting offset in the shelf direction y, the height of insertion h, the sliding distance d, and the gripper rotation angle θ during the insertion process.

Goal-conditioned Trajectory Optimization: We optimize the parameters of a given skill by minimizing the mean squared error (MSE) between the predicted particle positions and the corresponding demonstrated positions. Keyframes collected during demonstrations are denoted by g. We search for the skill parameters ap that minimize the cost function J representing the MSE between the predicted and target positions of the object particles. Mathematically, this optimization problem is represented as $a_p = \arg\min_{a_p} J(s_T, g)$. The resulting low-level control actions are generated from the skill parameterized by ap. Our dynamics network Φ makes forward predictions of the future state of the system after the execution of each skill. We employ trajectory optimization to find the skill parameters that yield the lowest cost.
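This search can be illustrated with a short random-shooting sketch (our own; the dynamics model, parameter bounds, and sample count are placeholders, and the actual system may use a different sampler or cost):

```python
# Illustrative random-shooting sketch for a_p = argmin_{a_p} J(s_T, g).
import numpy as np

def random_shooting(dynamics, state, goal, bounds, n_samples=1024, rng=None):
    """Sample skill parameters uniformly in `bounds` (a list of (lo, hi)
    pairs), roll each through the learned dynamics, and keep the best one.

    `dynamics(state, params)` is assumed to return the predicted particle
    positions after executing the skill; `goal` holds the demonstrated
    keyframe particle positions.
    """
    rng = rng or np.random.default_rng()
    lo, hi = np.array(bounds).T
    candidates = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    costs = [np.mean((dynamics(state, p) - goal) ** 2) for p in candidates]
    return candidates[int(np.argmin(costs))]
```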
4 Experiment Setup

Our experimental setup consists of both simulated and real-world environments. The simulated environment is built using Robosuite [33] and the MuJoCo physics engine [34], operating with a 7-DOF Kinova Gen3 robot. The real-world counterpart consists of the same Kinova Gen3 robot and a Robotiq 2F-85 gripper.

Figure 3: The experimental setup. (a) Robot setup, showing the camera, the target object for grasping and inserting, and a shelf filled with various objects. (b) Objects used in this work.

Data collection: We collect a dataset of 300 episodes of action execution for each behavior primitive, where the actions are executed based on randomly sampled skill parameters. For each episode, the robot randomly samples parameters within the skill's parameter space and executes the skill in the simulator. We gather the key poses for each skill, subsequently training a GNN to take the state at these key poses and the robot action and predict the future state at the subsequent key poses.

Simulated environment: In our simulation, the shelf width is initialized to vary randomly within the range of 0.18 to 0.35 meters. We also randomly select the number of objects placed on the shelf, varying between two and four. The properties of each object, including size and density, are randomly generated. All the objects are created with a box shape. We use a SpaceMouse to teleoperate the robot to complete the task, and the ending poses of each skill are collected. These poses are then used as subgoals for each skill during execution.

Real-world environment: Figure 3 illustrates our real-world environment setup. We use OAK-D Pro cameras, with a top-down camera to estimate the pose of the object for grasping and inserting, and a side-view camera to estimate the objects' poses in the shelf. Objects placed on the table are always oriented perpendicular to the table edge and positioned adjacent to it. Shelf sizes of 0.18 m and 0.35 m are tested, and objects in the shelf are placed at randomized positions and orientations. A point cloud representation of each object is created, including their sizes, positions, and orientations, as the state representation.

Evaluation metrics: We evaluate our dynamics model and the manipulation outcomes in simulation based on the final prediction error, applying metrics such as the Mean Squared Error (MSE), the Earth Mover's Distance (EMD) [35], and the Chamfer Distance (CD) [36]. In real-world scenarios, success rates are computed for each setup. A success is defined as all boxes ending up within the shelf with their orientations falling within a predefined threshold θ.
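For reference, one common formulation of the Chamfer distance between two point sets is sketched below (our own illustration; the paper may use a different variant or scaling):

```python
# One common Chamfer distance variant between point sets X and Y (ours).
import numpy as np
from scipy.spatial.distance import cdist

def chamfer(X, Y):
    """Mean nearest-neighbor distance from X to Y plus from Y to X."""
    D = cdist(X, Y)                      # pairwise Euclidean distances
    return D.min(axis=1).mean() + D.min(axis=0).mean()
```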
5 Experimental Results

In this section, we evaluate our forward dynamics prediction model in both simulated and real-world settings. We use a diverse range of objects and shelf dimensions to provide a broad and challenging test. The results highlight our framework's potential for zero-shot sim2real transfer, demonstrating its applicability in handling real-world conditions without the necessity for prior real-world data.

Table 1: Dynamics model prediction quantitative results and ablations. Our model consistently outperforms RoboCraft in complex primitives such as 'Sweep' and 'Transport', evident in the lower MSE, EMD, and CD values. The object-level representation underperforms due to its lack of dense object information. In the simpler 'Push' primitive, our model retains comparable MSE, EMD, and CD values within one standard deviation, indicating robust performance even in simpler scenarios.

Primitive   Method              MSE (mm)↓        EMD (mm)↓         CD (mm)↓
Sweep       Object-Level Repr   3.676 (±1.597)   75.015 (±14.452)  92.914 (±17.206)
            RoboCraft [14]      0.351 (±0.222)   24.510 (±6.434)   33.277 (±5.064)
            Ours                0.287 (±0.185)   21.792 (±5.533)   30.017 (±3.259)
Push        Object-Level Repr   3.765 (±0.76)    75.975 (±10.927)  95.925 (±14.802)
            RoboCraft           0.216 (±0.148)   12.509 (±3.494)   15.43 (±2.902)
            Ours                0.292 (±0.179)   14.569 (±3.752)   15.046 (±3.085)
Transport   Object-Level Repr   5.861 (±3.106)   91.913 (±20.552)  113.263 (±23.168)
            RoboCraft           1.091 (±0.512)   42.162 (±10.317)  55.615 (±9.997)
            Ours                0.666 (±0.41)    31.232 (±9.108)   38.068 (±6.605)

Table 2: Quantitative results of control in simulation. Our method consistently outperforms SAC, PPO, parameterized PPO, and heuristic-driven control, exhibiting markedly lower execution errors, illustrating the critical role of incorporating a model for long-horizon stowing tasks.

Method              MSE (mm)↓   EMD (mm)↓   CD (mm)↓
SAC                 265.411     465.639     368.571
PPO                 87.479      266.554     173.522
Parameterized PPO   22.925      120.736     62.182
Heuristic           34.861      140.196     194.123
Ours                0.905       29.697      39.914

5.1 Dynamics Model Learning and Ablations

We train our dynamics model using the MSE loss. As part of our ablation study, we incorporate a version of the GNN presented in [14], which we refer to as "RoboCraft". It is important to note that this model does not encompass dynamic-static object interactions, nor does it utilize the auxiliary loss derived from gripper movements. Additionally, we introduce a baseline, "Object-Level Repr", which uses an object-level representation with a single GNN node per object.

The quantitative results from the model learning are shown in Table 1. The relatively small prediction errors suggest that our model is able to accurately predict the interactions involved in the task; these include object/object interactions, environment/object interactions, and robot gripper/object interactions. The results further demonstrate that the model can effectively learn the rigid motions of the objects, resulting in only minor errors.

Compared to RoboCraft, the introduction of gripper movement information and dynamic-static object edge information improves the prediction accuracy of our GNN model, particularly in complex behavior primitives such as sweeping and transporting. Even in simpler behavior primitives like pushing, our GNN maintains a performance comparable to RoboCraft, with the prediction error remaining within one standard deviation, highlighting its consistent effectiveness. While the less specialized RoboCraft performs adequately in the straightforward pushing skill, it encounters difficulties in more dynamic situations. In contrast, our model's advanced complexity and adaptability prove to be particularly advantageous in scenarios characterized by intricate dynamics, such as collisions and bounces between objects or with the environment.

5.2 Manipulation Results

Results in Simulation: We collect six demonstrations in the simulation. The keyframes from these demonstrations serve as the goal states for trajectory optimization, implemented with Random Shooting (RS). In our analysis, we evaluated Random Shooting, the Cross-Entropy Method, and Gradient Descent, all of which exhibited similar performance. We present the results obtained from RS and denote them as "Ours" in the subsequent discussion. We use Proximal Policy Optimization (PPO) [37] and Soft Actor-Critic (SAC) [38], both state-of-the-art, model-free RL algorithms, as baselines. The choice of these algorithms aims to highlight the necessity of a model for long-horizon stowing tasks. The simulation results are presented in Table 2. Each method's performance is assessed using MSE, EMD, and CD as metrics. PPO and SAC utilize the negative MSE between the current and goal object states as the reward function. In comparison to these methods, RS consistently yields lower execution errors across all metrics, which indicates its superior capability in minimizing the gap between actual and desired states. Model-free RL methods such as PPO and SAC struggle to perform effectively due to the large exploration space and the long-horizon nature of the task. Their effectiveness is further hampered by their lack of knowledge regarding the model of the environment. We also benchmark against a version of PPO with a parameterized action space based on behavior primitives and a heuristic-driven approach devoid of learned dynamics. Our method outperforms both, demonstrating a considerable performance advantage.

Results in the Real World: Our framework is tested in six different real-world setups, with each setup executed for ten test trials. We manually randomize the initial orientations and positions of the objects within the shelf in each trial. The object poses in these scenarios are identified using the Scale-Invariant Feature Transform (SIFT) on the captured images. The various setups represent a broad spectrum of conditions, including different object combinations, shelf sizes, object dimensions, and shapes. This wide range of conditions is depicted in Figure 4.

Figure 4: Different setups used in real-world manipulation experiments: (a) nominal setup; (b) with tiny object; (c) with heavy and trapezoid objects; (d) with bottle to be grasped; (e) with slippery roller; (f) small shelf setup. The six setups represent a wide range of conditions including different combinations of objects, shelf sizes, object dimensions, and shapes.

Table 3: Real-world success rates. Performance evaluation in six different setups using: our proposed method with two distinct skill sets (2 skills and 3 skills), RoboCraft as the learned dynamics, and a heuristic-based approach without dynamics prediction.

Setup     Heuristic   RoboCraft   3 skills   2 skills
(a)       1/10        4/10        10/10      10/10
(b)       3/10        7/10        9/10       9/10
(c)       3/10        4/10        9/10       9/10
(d)       1/10        6/10        10/10      10/10
(e)       2/10        7/10        10/10      10/10
(f)       1/10        5/10        9/10       9/10
Average   18%         48%         95%        95%

In our experiments, we implement two distinct skill combinations for each setup: a 3-skill set comprising sweeping, pushing, and transporting, and a 2-skill set, which only includes pushing and transporting. The term "heuristic" refers to a process where humans fine-tune a relatively small parameter space and assign the tuned parameters to the skills. The heuristic-based approach likewise uses all three skills: sweeping, pushing, and transporting.

Table 3 presents the success rates of the different control strategies. Our 3-skill method significantly outperforms the heuristic-based approach with a success rate of 95%, indicating the effective handling of varied setups by our dynamics prediction module. Interestingly, the 2-skill set also achieves the same 95% success rate, indicating that the robot's ability to understand the interactions between the gripper-held object and the objects within the shelf enables it to determine the optimal position for insertion and placement. These high success rates demonstrate the effectiveness of our method. In contrast, the heuristic-based strategy yields an average success rate of only 18%. Despite being trained solely with box-shaped objects, our method generalizes effectively to out-of-distribution objects, showing its versatility in a variety of real-world conditions.

Figure 5: Comparison of the heuristic-based method and our approach during the execution of the sweeping and transporting skills. Our method anticipates future states and arranges objects into upright positions within the shelf, unlike the heuristic-based method, which pushes objects out of the shelf.

Figure 5 provides a qualitative comparison of the sweeping skill execution between the heuristic-based method and our approach. Our method, equipped with the ability to anticipate future states based on specific robot actions, is capable of sweeping and transporting objects into upright positions within the shelf. In contrast, the heuristic-based method tends to push objects out of the shelf.

6 Conclusion

In this work, we focus on stowing tasks wherein a robot must manipulate a large, ungraspable flat object and subsequently stow it within a cluttered shelf. The robot's task involves creating sufficient space within the cluttered shelf to accommodate the object appropriately. We introduce a system that utilizes behavior primitives in combination with forward dynamics prediction to achieve the task. We discuss the design choices of our system and demonstrate its effectiveness in real robot scenarios. Moreover, we show the system's ability to generalize to various stowing conditions.

Our work opens several potential avenues for future research. One promising direction involves developing the ability to composite skills and further reduce the subgoals presented in the demonstrations. Another is that the design and definition of the behavior-primitive library need additional exploration and research, which can enhance the adaptability and versatility of robotic systems in performing complex manipulation tasks.

Limitations: Our system currently has a few limitations. Firstly, it relies on manual human labeling of ordered keyframes from demonstrations, which could potentially restrict scalability and deployment in larger and more complex scenarios. Secondly, we use box-shaped point clouds to represent objects during training and inference. This simplistic representation may not accurately reflect the geometrical properties of objects, especially in scenarios involving more complex interactions and contacts. Addressing these limitations, particularly improving object representation, presents a promising direction for future research.

Acknowledgments
We thank Haochen Shi for his tireless assistance with the GNN implementation, as well as Neeloy Chakraborty, Peter Du, Pulkit Katdare, Ye-Ji Mun, and Zhe Huang for their insightful feedback and suggestions. This work was supported by ZJU-UIUC Joint Research Center Project No. DREMES 202003, funded by Zhejiang University.

References
[1] F. R. Hogan and A. Rodriguez. Feedback control of the pusher-slider system: A story of hybrid and underactuated contact dynamics. In Workshop on the Algorithmic Foundations of Robotics, 2016.
[2] L. Manuelli, W. Gao, P. R. Florence, and R. Tedrake. kPAM: Keypoint affordances for category-level robotic manipulation. In International Symposium of Robotics Research, 2019. URL https://api.semanticscholar.org/CorpusID:80628296.
[3] J. Luo, O. Sushkov, R. Pevceviciute, W. Lian, C. Su, M. Vecerik, N. Ye, S. Schaal, and J. Scholz. Robust multi-modal policies for industrial assembly via reinforcement learning and demonstrations: A large-scale study. In Proceedings of Robotics: Science and Systems, Virtual, July 2021. doi:10.15607/RSS.2021.XVII.088.
[4] T. Zhao, J. Luo, O. O. Sushkov, R. Pevceviciute, N. M. O. Heess, J. Scholz, S. Schaal, and S. Levine. Offline meta-reinforcement learning for industrial insertion. 2022 International Conference on Robotics and Automation (ICRA), pages 6386-6393, 2022.
[5] C.-Y. Chai, K.-F. Hsu, and S.-L. Tsao. Multi-step pick-and-place tasks using object-centric dense correspondences. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4004-4011. IEEE, 2019.
[6] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration. In IEEE International Conference on Robotics and Automation (ICRA), pages 4613-4619. IEEE, 2021.
[7] S. Nair, A. Rajeswaran, V. Kumar, C. Finn, and A. Gupta. R3M: A universal visual representation for robot manipulation. In Conference on Robot Learning (CoRL), 2022.
[8] B. Wen, W. Lian, K. Bekris, and S. Schaal. You only demonstrate once: Category-level manipulation from single visual demonstration. In Robotics: Science and Systems (RSS), 2022.
[9] Y. Duan, M. Andrychowicz, B. Stadie, O. Jonathan Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba. One-shot imitation learning. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/ba3866600c3540f67c1e9575e213be0a-Paper.pdf.
[10] A. Ganapathi, P. Sundaresan, B. Thananjeyan, A. Balakrishna, D. Seita, J. Grannen, M. Hwang, R. Hoque, J. E. Gonzalez, N. Jamali, et al. Learning dense visual correspondences in simulation to smooth and fold real fabrics. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11515-11522. IEEE, 2021.
[11] H. T. Suh and R. Tedrake. The surprising effectiveness of linear models for visual foresight in object pile manipulation. In Algorithmic Foundations of Robotics XIV: Proceedings of the Fourteenth Workshop on the Algorithmic Foundations of Robotics, pages 347-363. Springer, 2021.
[12] W. Yan, A. Vangipuram, P. Abbeel, and L. Pinto. Learning predictive representations for deformable objects using contrastive estimation. In Conference on Robot Learning, pages 564-574. PMLR, 2021.
[13] P. Mitrano, D. McConachie, and D. Berenson. Learning where to trust unreliable models in an unstructured world for deformable object manipulation. Science Robotics, 6(54):eabd8170, 2021.
[14] H. Shi, H. Xu, Z. Huang, Y. Li, and J. Wu. RoboCraft: Learning to see, simulate, and shape elasto-plastic objects with graph networks. In Robotics: Science and Systems (RSS), 2022.
[15] X. Lin, Y. Wang, Z. Huang, and D. Held. Learning visible connectivity dynamics for cloth smoothing. In Conference on Robot Learning, pages 256-266. PMLR, 2022.
[16] Z. Huang, X. Lin, and D. Held. Mesh-based dynamics with occlusion reasoning for cloth manipulation. In Robotics: Science and Systems (RSS), 2022.
[17] Y. Li, J. Wu, R. Tedrake, J. B. Tenenbaum, and A. Torralba. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.net/forum?id=rJgbSn09Ym.
[18] Y. Li, J. Wu, J.-Y. Zhu, J. B. Tenenbaum, A. Torralba, and R. Tedrake. Propagation networks for model-based control under partial observation. In 2019 International Conference on Robotics and Automation (ICRA), pages 1205-1211. IEEE, 2019.
[19] P. Battaglia, R. Pascanu, M. Lai, D. Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. Advances in Neural Information Processing Systems, 29, 2016.
[20] M. Chang, T. D. Ullman, A. Torralba, and J. B. Tenenbaum. A compositional object-based approach to learning physical dynamics. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017. OpenReview.net, 2017. URL https://openreview.net/forum?id=Bkab5dqxe.
[21] A. Sanchez-Gonzalez, N. Heess, J. T. Springenberg, J. Merel, M. Riedmiller, R. Hadsell, and P. Battaglia. Graph networks as learnable physics engines for inference and control. In International Conference on Machine Learning, pages 4470-4479. PMLR, 2018.
[22] N. Funk, G. Chalvatzaki, B. Belousov, and J. Peters. Learn2Assemble with structured representations and search for robotic architectural construction. In Conference on Robot Learning, pages 1401-1411. PMLR, 2022.
[23] T. Silver, R. Chitnis, A. Curtis, J. B. Tenenbaum, T. Lozano-Pérez, and L. P. Kaelbling. Planning with learned object importance in large problem instances using graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11962-11971, 2021.
[24] A. E. Tekden, A. Erdem, E. Erdem, M. Imre, M. Y. Seker, and E. Ugur. Belief regulated dual propagation nets for learning action effects on groups of articulated objects. 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 10556-10562, 2019. URL https://api.semanticscholar.org/CorpusID:216450474.
[25] A. E. Tekden, A. Erdem, E. Erdem, T. Asfour, and E. Ugur. Object and relation centric representations for push effect prediction. ArXiv, abs/2102.02100, 2021. URL https://api.semanticscholar.org/CorpusID:231786514.
[26] Y. Huang, A. Conkey, and T. Hermans. Planning for multi-object manipulation with graph neural network relational classifiers. 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 1822-1829, 2022. URL https://api.semanticscholar.org/CorpusID:252531588.
[27] A. Gupta, V. Kumar, C. Lynch, S. Levine, and K. Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. In 3rd Annual Conference on Robot Learning, CoRL 2019, Osaka, Japan, volume 100 of Proceedings of Machine Learning Research, pages 1025-1037. PMLR, 2019. URL http://proceedings.mlr.press/v100/gupta20a.html.
[28] A. Kadian, J. Truong, A. Gokaslan, A. Clegg, E. Wijmans, S. Lee, M. Savva, S. Chernova, and D. Batra. Sim2Real predictivity: Does evaluation in simulation predict real-world performance? IEEE Robotics and Automation Letters, 5(4):6670-6677, 2020. doi:10.1109/LRA.2020.3013848.
[29] N. Di Palo and E. Johns. Learning multi-stage tasks with one demonstration via self-replay. In Conference on Robot Learning, pages 1180-1189. PMLR, 2022.
[30] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4(1):265-293, 2021. doi:10.1146/annurev-control-091420-084139. URL https://doi.org/10.1146/annurev-control-091420-084139.
[31] X. Lin, Z. Huang, Y. Li, J. B. Tenenbaum, D. Held, and C. Gan. DiffSkill: Skill abstraction from differentiable physics for deformable object manipulations with tools. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=Kef8cKdHWpP.
[32] O. Khatib. A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal on Robotics and Automation, 3(1):43-53, 1987. doi:10.1109/JRA.1987.1087068.
[33] Y. Zhu, J. Wong, A. Mandlekar, R. Martín-Martín, A. Joshi, S. Nasiriany, and Y. Zhu. robosuite: A modular simulation framework and benchmark for robot learning. In arXiv preprint arXiv:2009.12293, 2020.
[34] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026-5033, 2012. doi:10.1109/IROS.2012.6386109.
[35] Y. Rubner, C. Tomasi, and L. Guibas. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pages 59-66, 1998. doi:10.1109/ICCV.1998.710701.
[36] H. G. Barrow, J. M. Tenenbaum, R. C. Bolles, and H. C. Wolf. Parametric correspondence and chamfer matching: Two new techniques for image matching. In Proceedings of the 5th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI'77, pages 659-663, San Francisco, CA, USA, 1977. Morgan Kaufmann Publishers Inc.
[37] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
[38] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, volume 80 of Proceedings of Machine Learning Research, pages 1856-1865. PMLR, 2018. URL http://proceedings.mlr.press/v80/haarnoja18b.html.

Appendices

A Additional Results

A.1 Generalization to Deformable Objects in Real-World Experiments
To explore the adaptability of our method, we further extended our experiments to deformable objects. Intriguingly, even though our method was trained predominantly on box-shaped rigid objects, it showed commendable generalization capabilities. Nevertheless, as expected, the dynamics of deformable objects differ from those of rigid ones, resulting in somewhat diminished performance compared to the other test cases.

Figure 6: Experimental setup with deformable objects.

Table 4: Success rates when dealing with deformable objects.

Setup                Heuristic   RoboCraft   3 skills   2 skills
Deformable Objects   0/10        2/10        6/10       4/10

A.2 Impact of Sample Size on Prediction Performance
We examined the influence of training sample size on dynamics prediction performance. By varying the number of training samples while maintaining a consistent validation and test set, we observed the relationship between sample size and prediction accuracy in Figure 7.
Specifically:
• For the push behavior primitive, performance improvements plateaued around 50 samples.
• For the sweep behavior primitive, a similar plateau was observed at 50 samples.
• The transport behavior primitive exhibited steady performance gains until approximately 140 samples.

B Implementation Details

B.1 Determining the Edge Formation Threshold in GNN
The optimal threshold for forming an edge in our graph representation was empirically derived to strike a balance between computational complexity and representational expressiveness. Just as with other hyperparameters essential for training our deep neural networks, we determined the thresholds through extensive experimentation, primarily guided by the validation loss. Depending on the specific behavior being modeled, we adopted the following thresholds to ensure the edges encapsulate meaningful interactions:

Transport:
• Intra-object relations: 0.15 m
• Gripper-to-object relations: 0.175 m

Push:
• Intra-object relations: 0.1 m
• Gripper-to-object relations: 0.1 m

Sweep:
• Intra-object relations: 0.175 m
• Gripper-to-object relations: 0.175 m

Figure 7: Variation in prediction performance (MSE, EMD, and Chamfer distance) as a function of training sample size, for the push, sweep, and transport primitives.

When the contact distance is set excessively large, the graph includes a significant number of edges that may not be relevant to the scene, which makes the model computationally expensive and introduces noise that can hinder both the training process and the quality of inference. A threshold that is too small may miss vital interaction relations. We tuned these thresholds to effectively capture the nuances of object interactions and optimize the prediction accuracy.

B.2 Real Robot Experiment
Perception: Our perception pipeline uses two OAK-D Pro cameras to estimate the 6D (x, y, z, roll, pitch, yaw) pose of objects. The side camera, oriented towards the shelf, determines the poses of the objects on the shelf. In contrast, the top camera determines the pose of the object situated on the table. We maintain a database encompassing comprehensive details of all objects in the experiment, including their size and texture. We approximate all objects as cuboids, defined by three parameters: length, width, and height. Side- and top-view images of these objects are captured to form a collection of ground-truth images. These images aid the identification of objects via the SIFT (Scale-Invariant Feature Transform) feature matching algorithm. This algorithm is executed for each object in the scene to match keypoints, which are subsequently used to calculate the homography matrix. We then use this matrix to calculate the 2D coordinates and 1D orientation of each object in the image space. Given the setup of our experiment, it suffices to estimate 3 degrees of freedom (DOF) for all objects: (y, z, roll) for objects on the shelf and (x, y, yaw) for the object on the table. The other 3 DOF are deterministic based on the known shelf location and object size. Incorporating the 3D pose of each object in the image space with our prior knowledge of the experiment setup, we can obtain the 6D pose of all the objects in the scene for robot manipulation.
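The SIFT matching step can be sketched with standard OpenCV calls (our own illustration, not the authors' pipeline; the file paths and thresholds are placeholders, and the angle recovery assumes the object plane is roughly parallel to the image plane so that the homography is close to a similarity transform):

```python
# Illustrative OpenCV sketch of the SIFT matching step (paths are placeholders).
import cv2
import numpy as np

template = cv2.imread("ground_truth/keto_box_side.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("capture/side_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_s, des_s = sift.detectAndCompute(scene, None)

# Lowe's ratio test on the two nearest matches per template keypoint.
matches = cv2.BFMatcher().knnMatch(des_t, des_s, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2D position: map the template center through H.
# 1D orientation: read the in-plane rotation off the linear part of H.
cx, cy = template.shape[1] / 2, template.shape[0] / 2
u, v, w = H @ np.array([cx, cy, 1.0])
position, angle = (u / w, v / w), np.arctan2(H[1, 0], H[0, 0])
```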
Combining the pose of each object in image space with our prior knowledge of the experiment setup, we can obtain the 6D pose of every object in the scene for robot manipulation (a minimal sketch of the SIFT-and-homography step appears at the end of this appendix).

(a) SIFT feature matching with the top-down view camera. (b) SIFT feature matching with the side view camera. (c) SIFT feature matching with the side view camera.

Figure 8: SIFT results with setup-(f). The ground-truth images collected before the experiment are shown in the upper-left corner of each subfigure. The green lines show the keypoint correspondences.

B.3 More Information on the Objects

Table 5: Object dimensions.

    Object             Length (m)   Width (m)   Height (m)
    Sugar box          0.089        0.040       0.177
    Lint roller        0.110        0.050       0.230
    Clorox bottle      0.065        0.255       0.055
    Cheez-it box       0.152        0.047       0.190
    Charger box        0.168        0.051       0.240
    Book               0.203        0.028       0.245
    Amazon box         0.210        0.288       0.045
    Keto box           0.193        0.259       0.046
    Kellogg's box      0.190        0.045       0.284
    Chex box           0.195        0.050       0.285
    Cheerios box       0.195        0.052       0.285
    Bran flakes box    0.198        0.297       0.057
    Power drill box    0.250        0.073       0.279
    Yellow plush toy   0.065        0.255       0.055
    Blue plush toy     0.19         0.284       0.045
    Red plush toy      0.168        0.24        0.051

B.4 Subgoal Selection for Behavior Primitives

Figure 9 illustrates the typical subgoals selected during our real-world experiments. In our approach, subgoals serve as intermediate waypoints or milestones within the task. They provide a sequence of specific, short-term objectives for the robot to achieve, ultimately guiding it towards the final desired outcome of the task. This structure helps break down a complex, long-horizon task into more manageable segments, making it easier for the robot to execute and adapt.

Figure 9: Typical subgoals selected during experiments. From left to right: (a) initial setup, (b) sweeping subgoal, (c) pushing subgoal, (d) transporting subgoal.
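As a concrete companion to the perception pipeline of Appendix B.2, the following is a minimal Python/OpenCV sketch of the SIFT-matching-and-homography step. The function name, the 0.75 ratio-test threshold, and the grayscale-input assumption are illustrative choices of ours; the paper does not specify these implementation details.

```python
import cv2
import numpy as np

def estimate_object_pose_2d(gt_image, scene_image, min_matches=10):
    """Match SIFT keypoints between a ground-truth object image and the scene
    (both assumed 8-bit grayscale), fit a homography with RANSAC, and read off
    the object's 2D position and in-plane orientation in image space."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gt_image, None)
    kp2, des2 = sift.detectAndCompute(scene_image, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
    if len(good) < min_matches:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Center of the ground-truth image mapped into the scene -> 2D position
    h, w = gt_image.shape[:2]
    center = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)[0, 0]
    yaw = np.arctan2(H[1, 0], H[0, 0])  # 1D orientation from the homography
    return center, yaw
```

The remaining 3 DOF per object would then be filled in from the known shelf location and object size, as described in the text.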
psyvs5wdAV
Equivariant Motion Manifold Primitives

Byeongho Lee∗1, Yonghyeon Lee∗2, Seungyeon Kim1, Minjun Son1, Frank C. Park1
1Seoul National University, 2Korea Institute for Advanced Study (KIAS)
{bhlee, ksy, mjson}@robotics.snu.ac.kr, ylee@kias.re.kr, fcp@snu.ac.kr

Abstract: Existing movement primitive models for the most part focus on representing and generating a single trajectory for a given task, limiting their adaptability to situations in which unforeseen obstacles or new constraints may arise. In this work we propose Motion Manifold Primitives (MMP), a movement primitive paradigm that encodes and generates, for a given task, a continuous manifold of trajectories each of which can achieve the given task. To address the challenge of learning each motion manifold from a limited amount of data, we exploit inherent symmetries in the robot task by constructing motion manifold primitives that are equivariant with respect to given symmetry groups. Under the assumption that each of the MMPs can be smoothly deformed into each other, an autoencoder framework is developed to encode the MMPs and also generate solution trajectories. Experiments involving synthetic and real-robot examples demonstrate that our method outperforms existing manifold primitive methods by significant margins. Code is available at https://github.com/dlsfldl/EMMP-public.

Keywords: Movement primitives, Manifold, LfD, Equivariance

1 Introduction

Learning basic motion skills as movement primitives has been an enduring focus of learning from demonstration (LfD) research [1, 2, 3]. A primary challenge is constructing movement primitive models adaptable to diverse situations, such as when unforeseen obstacles or new constraints emerge. Current approaches to movement primitives encompass dynamic movement primitives [4, 5, 6, 7, 8, 9, 10, 11, 12, 13], stable dynamical systems [14, 15, 16, 17, 18, 19, 20, 21, 22], methods based on Gaussian processes [23, 24, 25] and Gaussian mixture models [26, 27, 28, 29], along with other methods [30, 31, 32].

The limited adaptability of existing primitive models largely stems from their design, which encodes and generates a single trajectory for a specific task: they have no alternative when the primary trajectory becomes infeasible in a new environment (e.g., when an unexpected obstacle blocks the trajectory). Although dynamical system-based methods can integrate, for instance, obstacle-avoidance potential function terms [33, 34, 35, 36], the resulting motions may violate other task constraints. For adaptable motion primitives, a method that encodes multiple trajectories for a single task is essential.

In this paper, we propose to learn a continuous manifold of motion trajectories that can perform the given task, which we refer to as Motion Manifold Primitives (MMP). As the MMP encodes multiple successful trajectories, even if some trajectories are obstructed by obstacles or violate constraints, alternative feasible trajectories remain accessible within the MMP. As such, the MMP is highly adaptable, although more diverse demonstration data are needed for training compared to single-trajectory primitives.

Given a set of task-trajectory paired data, where multiple demonstration trajectories are collected for a single task parameter τ, our objective is to learn a set of manifold primitives {M_τ} for all τ.

∗The two lead co-authors contributed equally.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.
However, given a limited amount of demonstration data, learning accurate manifolds and their boundaries is very challenging. In fact, TC-VAE [37] is the first work to adopt this manifold-primitives approach, but it shows less-than-desirable performance on small datasets, as our experiments later confirm.

Figure 1: An illustration of the motion manifold primitives and equivariant transformation.

In this paper, we develop a data-efficient motion manifold primitives learning algorithm, adopting the autoencoder-based manifold learning framework [38, 39, 40, 41, 42, 43, 44]. First, we propose Equivariant Motion Manifold Primitives (EMMP), which takes into account the inherent symmetry in robot tasks. For example, consider a water-pouring task whose task parameter consists of the initial cup and bottle positions. Given a symmetry transformation on τ that preserves the relative positional relation between the cup and bottle (e.g., rotating the bottle around the cup), the set of water-pouring trajectories, i.e., the motion manifold primitive M_τ, should also be transformed in a consistent manner, more precisely, equivariantly (see Figure 1). We show that using an invariant encoder and equivariant decoder guarantees the equivariance of the MMP, and we propose a strategy to construct such invariant and equivariant mappings. Meanwhile, equivariance has been increasingly recognized as a pivotal factor in robotics tasks in general [45, 46, 47].

Second, to further enhance data efficiency, we find a shared latent coordinate space Z by assuming that the M_τ for all τ are homeomorphic (i.e., can be smoothly deformed into each other). In practice, we consider the latent coordinate variable z ∈ Z to be independent of τ. This means that p(z|τ) = p(z) for all τ, making the support of p(z) the shared latent coordinate space Z. To enforce this condition, we propose a new independence regularization term for autoencoder training.

Through comprehensive experiments involving synthetic and real-world robot water-pouring tasks, we compare our EMMP model with the existing manifold primitive method TC-VAE [37] via a systematic evaluation of (i) manifold learning, (ii) independence between z and τ, (iii) density learning, and (iv) success rate. We find that our method significantly outperforms TC-VAE. Further, an in-depth ablation study reveals that the use of an invariant encoder and equivariant decoder is the primary factor driving this performance improvement.

2 MMPs: Motion Manifold Primitives

Figure 2: An illustration of a task parameter τ, feasible trajectories that achieve the task, and the manifold that the trajectories form. The resulting manifold M_τ consists of two disjoint components.

In this section, given a configuration space Q, we consider a motion as a discrete, fixed-length configuration trajectory $x = \{q_t \in \mathcal{Q}\}_{t=1}^{T}$; the space of all trajectories is denoted $\mathcal{X} := \mathcal{Q}^T$. Let a compact space T represent the space of parameters that define specific tasks. We assume that, for each τ ∈ T, the set of motions that can perform the given task forms an m-dimensional differentiable manifold, which we refer to as the Motion Manifold Primitive (MMP) and denote by M_τ; it is a submanifold of the ambient space X. We assume that we are given a set of task-trajectory paired data, denoted $\mathcal{D} := \{(\tau_i, x_{ij}) \in \mathcal{T}\times\mathcal{X}\}_{i=1,\dots,N,\ j=1,\dots,M_i}$, where $M_i$ is the number of trajectories for the task parameter $\tau_i$ and $x_{ij}$ is sampled from $\mathcal{M}_{\tau_i}$.
For readers unfamiliar with geometric concepts such as manifolds and homeomorphisms, we include brief introductions in Appendix C.

2.1 MMP Described via Manifold and Density

This section introduces how to represent the set of motion manifold primitives M_τ ⊂ X for all τ ∈ T. First, we assume that each manifold primitive M_τ can be parametrized by a nonlinear map $f_\tau: \mathbb{R}^m \to \mathcal{X}$ with a coordinate space $\mathcal{Z}_\tau \subset \mathbb{R}^m$ as $\mathcal{M}_\tau = f_\tau(\mathcal{Z}_\tau)$. To represent the whole set of motion manifold primitives simultaneously, we consider a differentiable mapping $f: \mathbb{R}^m \times \mathcal{T} \to \mathcal{X}$ such that $f(z,\tau) := f_\tau(z)$. Second, to represent each coordinate space $\mathcal{Z}_\tau$, we employ a conditional probability density function in $\mathbb{R}^m$ given τ, denoted p(z|τ), and take the support of p(z|τ) as $\mathcal{Z}_\tau$. As a result, each motion manifold for τ is represented as $\mathcal{M}_\tau = f(\mathcal{Z}_\tau, \tau)$; the motion manifold primitive M_τ is said to be parametrized by the mapping f and the density p(z|τ), or by f and $\mathcal{Z}_\tau$ (see Figure 2).

2.2 Autoencoder-Based Manifold and Density Learning

We adopt the autoencoder framework for learning M_τ via f(z,τ) and p(z|τ), where f is treated as the decoder and p as the latent-space density in $\mathbb{R}^m$. An additional component that specifies the coordinates of an input trajectory x ∈ M_τ needs to be introduced, called the encoder; we denote it by $g: \mathcal{X}\times\mathcal{T} \to \mathbb{R}^m$ such that z = g(x, τ). We use parametric models (e.g., neural networks) for the encoder $g_\phi$, decoder $f_\theta$, and density $p_\gamma(z|\tau)$, where θ, φ, and γ are the model parameters. We propose a two-step approach: we first learn the coordinate systems, i.e., $f_\theta, g_\phi$, and then learn the density, i.e., $p_\gamma(z|\tau)$.

The standard autoencoder reconstruction loss can be employed to learn $f_\theta$ and $g_\phi$, namely an expectation of $d^2_\mathcal{X}(f_\theta(g_\phi(x_{ij},\tau_i),\tau_i),\, x_{ij})$, where $d_\mathcal{X}(\cdot,\cdot)$ is a proper distance measure on X. Minimizing the reconstruction loss results in $f_\theta(g_\phi(\mathcal{M}_\tau,\tau),\tau) \approx \mathcal{M}_\tau$ for the ground-truth manifold M_τ; $f_\theta(\cdot,\tau)$ becomes a coordinate system for M_τ and $g_\phi(\mathcal{M}_\tau,\tau)$ becomes the coordinate space $\mathcal{Z}_\tau$. Second, we can learn $p_\gamma(z|\tau)$ given the trained encoder $g_\phi$ via the standard likelihood maximization framework, i.e., maximizing an expectation of $\log p_\gamma(g_\phi(x_{ij},\tau_i)\,|\,\tau_i)$.

2.3 Homeomorphic Manifold Assumption

Learning the densities p(z|τ) for all τ is challenging given the small training dataset D. In this section, we introduce a homeomorphic manifold assumption that makes the density learning problem more tractable. We assume that the M_τ are homeomorphic to each other and that there exists a latent coordinate variable z statistically independent of τ, i.e., p(z) = p(z|τ) for all τ, which results in a shared latent coordinate space Z = Z_τ for all τ. The training of the autoencoder and density can be greatly simplified under this assumption.

First, we can restrict the encoder's input space to X, i.e., g: X → R^m, because τ should not contribute to z. Second, p(z|τ) can be replaced by a shared model p(z). Using g(x) and p(z), we may sequentially train the autoencoder and the density; however, the reconstruction loss alone does not guarantee statistical independence between z and τ, in which case learning the density p(z) can fail.

We therefore introduce an independence regularization term for autoencoder training, so that z and τ become independent and Z = Z_τ for all τ:

$$\mathcal{R}(\theta,\phi) := \frac{\mathbb{E}_{(\cdot,\,x_{ij})\in\mathcal{D}}\ \|g_\phi(x_{ij}) - g_\phi(f_\theta(g_\phi(x_{ij}),\tau))\|^2}{\mathbb{E}_{(\cdot,\,x_{ij})\in\mathcal{D}}\ \|g_\phi(x_{ij})\|^2}, \tag{1}$$

where τ is randomly sampled from the uniform distribution on T. This loss enforces that g(f(z,τ)) does not depend on τ; eventually τ does not contribute to z and becomes independent of it. The denominator makes the regularization term invariant to the scale of the latent values. This regularization term R is added to the reconstruction loss with a proper regularization coefficient.
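To make Eq. (1) concrete, here is a minimal PyTorch sketch of the independence regularization term. The `encoder` and `decoder` callables and the uniform sampling of τ are assumed interfaces; this is an illustration of the computation under those assumptions, not the authors' released implementation.

```python
import torch

def independence_regularizer(encoder, decoder, x, tau_rand):
    """Eq. (1): R(theta, phi). `x` is a batch of demonstration trajectories,
    `tau_rand` a batch of task parameters sampled uniformly from T."""
    z = encoder(x)                    # g_phi(x_ij)
    x_new = decoder(z, tau_rand)      # f_theta(g_phi(x_ij), tau) with random tau
    z_cycle = encoder(x_new)          # re-encoding should reproduce z
    num = (z - z_cycle).pow(2).sum(-1).mean()
    den = z.pow(2).sum(-1).mean()
    return num / den                  # denominator gives scale invariance

# Usage: total_loss = reconstruction_loss + lam * independence_regularizer(...)
```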
3 EMMPs: Equivariant Motion Manifold Primitives

In this section, our goal is to construct motion manifold primitives M_τ that transform equivariantly when symmetry transformations are applied to τ. We denote a symmetry group by H, where the group operation between two elements $h_1, h_2 \in H$ is written $h_1 h_2 \in H$. We assume the symmetry transformations are defined as group actions. Two symmetry transformations are considered, one on T and one on X × T, where we use the same symbol for the group action on T, τ ↦ h·τ, and on X × T, (x,τ) ↦ h·(x,τ). Let $\cdot_x$ denote the x-component, i.e., $(x,\tau)_x = x$. The ground-truth motion manifold primitive is denoted M_τ, while the learned motion manifold primitive, parametrized by a decoder f(z,τ) and p(z|τ), is denoted $\widehat{\mathcal{M}}_\tau$. We include preliminary knowledge on groups, equivariance, and invariance in Appendix C.

3.1 Invariant Encoder and Equivariant Decoder

We begin this section with the definition of an equivariant motion manifold primitive:

Definition 3.1. Suppose M_τ is a motion manifold primitive. If, for any transformed task parameter τ ↦ h·τ, the motion manifold M_τ transforms equivariantly, i.e., $\mathcal{M}_{h\cdot\tau} = \{h\cdot(x,\tau)_x \mid x \in \mathcal{M}_\tau\}$, then the primitive is called an Equivariant Motion Manifold Primitive (EMMP); see Figure 3.

Figure 3: Given a symmetry transformation of the task parameter τ ↦ h·τ in which the relative distances between the robot, goal, and obstacle are preserved, the MMP is equivariantly transformed.

We show that using an invariant encoder and an equivariant decoder guarantees the equivariance of the resulting MMP; we first provide their definitions:

Definition 3.2. An encoder $g: \mathcal{X}\times\mathcal{T} \to \mathbb{R}^m$ is invariant if $g(h\cdot(x,\tau)) = g(x,\tau)$ for all h ∈ H and x ∈ X.

Definition 3.3. A decoder $f: \mathbb{R}^m\times\mathcal{T} \to \mathcal{X}$ is equivariant if $f(z, h\cdot\tau) = h\cdot(f(z,\tau),\tau)_x$ for all h ∈ H and τ ∈ T.

Denoting the ground-truth motion manifold by M_τ, the latent coordinate space $\mathcal{Z}_\tau$, defined as the support of p(z|τ), can be specified by an encoder g(x,τ) as $\mathcal{Z}_\tau = \bigcup_{x\in\mathcal{M}_\tau} g(x,\tau)$, written $g(\mathcal{M}_\tau,\tau)$ for short. If M_τ is equivariant, then an invariant encoder produces an invariant coordinate space:

Proposition 3.1. Suppose M_τ is an EMMP. If g is invariant, then $\mathcal{Z}_\tau = \mathcal{Z}_{h\cdot\tau}$ for all h ∈ H.

In addition to the encoder invariance condition, the learned MMP parametrized by a decoder f(z,τ) and $\mathcal{Z}_\tau$, denoted $\widehat{\mathcal{M}}_\tau$, is equivariant if the decoder f is equivariant:

Proposition 3.2. Suppose M_τ is an EMMP. If g is invariant and f is equivariant, then the MMP parametrized by f and $\mathcal{Z}_\tau$, i.e., $\widehat{\mathcal{M}}_\tau = f(\mathcal{Z}_\tau,\tau)$ where $\mathcal{Z}_\tau = g(\mathcal{M}_\tau,\tau)$, is equivariant.

By constructing an invariant encoder and an equivariant decoder, we can thus ensure the equivariance of the learned motion manifold primitives. When the ground-truth MMP is equivariant, it is reasonable to expect this equivariance guarantee to enhance the accuracy of manifold learning.

3.2 Construction of Invariant and Equivariant Mappings

In this section, we propose a method for converting arbitrary encoder and decoder models into ones that are invariant and equivariant to the symmetry group H. Let $G_\phi: \mathcal{X}\times\mathcal{T} \to \mathbb{R}^m$ and $F_\theta: \mathbb{R}^m\times\mathcal{T} \to \mathcal{X}$ be arbitrary parametric encoder and decoder models, respectively. Assuming we can find an equivariant map $\bar h: \mathcal{T} \to H$ such that $\bar h(h\cdot\tau) = h\,\bar h(\tau)$ for all h ∈ H and τ ∈ T, an invariant encoder and equivariant decoder can be constructed as follows:

Proposition 3.3. The encoder $g: \mathcal{X}\times\mathcal{T} \to \mathbb{R}^m$ defined as $g_\phi(x,\tau) := G_\phi(\bar h(\tau)^{-1}\cdot(x,\tau))$ is invariant, and the decoder $f: \mathbb{R}^m\times\mathcal{T} \to \mathcal{X}$ defined as $f_\theta(z,\tau) := \bar h(\tau)\cdot\big(F_\theta(z,\ \bar h(\tau)^{-1}\cdot\tau),\ \bar h(\tau)^{-1}\cdot\tau\big)_x$ is equivariant.
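A minimal PyTorch sketch of Proposition 3.3 is given below. The group-element interface (`inv`, `act_tau`, `act_x`) and the map `hbar` are assumed abstractions of the group actions on T and X × T; any networks G and F can be plugged in.

```python
import torch

class EquivariantWrapper(torch.nn.Module):
    """Proposition 3.3 as a wrapper. `G` and `F` are arbitrary encoder/decoder
    networks; `hbar(tau)` returns a group element h with assumed methods
    `inv()`, `act_tau(tau)` (action on T), and `act_x(x, tau)` (x-component
    of the action on X x T)."""
    def __init__(self, G, F, hbar):
        super().__init__()
        self.G, self.F, self.hbar = G, F, hbar

    def encode(self, x, tau):
        # g(x, tau) = G(hbar(tau)^{-1} . (x, tau)): canonicalize, then encode
        h_inv = self.hbar(tau).inv()
        return self.G(h_inv.act_x(x, tau), h_inv.act_tau(tau))

    def decode(self, z, tau):
        # f(z, tau) = hbar(tau) . (F(z, tau_0), tau_0)_x with tau_0 canonical
        h = self.hbar(tau)
        tau0 = h.inv().act_tau(tau)
        return h.act_x(self.F(z, tau0), tau0)
```

The design mirrors the proof of Proposition 3.3: inputs are first mapped into a canonical frame by $\bar h(\tau)^{-1}$, the unconstrained network operates there, and (for the decoder) the result is mapped back, so equivariance holds by construction regardless of the networks' weights.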
How to construct the map h̄ is problem-specific. As an example, let X be the set of 3D point clouds, where a point x ∈ X has the form $x = \{x_1, \dots, x_N\}$, and let H be the group of translations in $\mathbb{R}^3$. In this case, one possible H-equivariant $\bar h(x)$ is the point-cloud centroid, $\bar h(x) := \frac{1}{N}\sum_{i=1}^N x_i$, whose inverse is $\bar h(x)^{-1} = -\bar h(x)$. More examples are given in Appendix D.2.1 and Appendix D.3.1. When we adopt the homeomorphic manifold assumption and use a parametric model $G_\phi: \mathcal{X} \to \mathbb{R}^m$, the invariant encoder can be defined as $g_\phi(x,\tau) := G_\phi(\bar h(\tau)^{-1}\cdot(x,\tau)_x)$; note that $g_\phi$ takes τ as an input while $G_\phi$ does not.

4 Experiments

In this section, we compare our EMMP framework mainly with the existing manifold primitive method TC-VAE [37], using both synthetic and real-world robot experiments. Additionally, we compare MMP, MMP + indep, EMMP, and EMMP + indep, where EMMP uses an invariant encoder and equivariant decoder, and '+ indep' indicates adding the independence regularization term to autoencoder training. When training TC-VAE, MMP, and MMP + indep, we always apply random data augmentation on τ. Throughout, we use a Gaussian Mixture Model (GMM) to fit p(z).

Evaluation metrics: We report four evaluation metrics. First, manifold learning accuracy is measured by the Reconstruction Error (RE). Second, to measure the degree of independence, we report the estimated Mutual Information (MI) between z and τ, computed with MINE [48]. Third, we evaluate probability density learning by the Negative Log-Likelihood (NLL) measured in the data space. Lastly, we report the task Success Rate (SR), whose criterion is task-dependent. Details on the computation of these measures are given in Appendix D.1.

4.1 Goal-Reaching Task of a Planar Mobile Robot

Figure 4: Demo trajectories.

In this section, we consider a goal-reaching task for a planar mobile robot: given a cross-shaped wall, the robot must reach the goal without colliding with the wall, entering through one of the two passages closest to it (see Figure 4). As shown, it is reasonable to assume that the set of feasible trajectories forms a continuous manifold. Our goal is to learn the motion manifold primitive for an arbitrarily rotated wall and an arbitrary initial robot position.

The configuration space is $\mathcal{Q} = \mathbb{R}^2$ and the trajectory space is $\mathcal{X} = \mathbb{R}^{2T}$. The task parameter τ consists of the initial position of the mobile robot, denoted $(q_r^1, q_r^2)$, and the rotation angle of the wall axis $\hat x_w$ with respect to $\hat x_s$, denoted $\omega_w$ (see Figure 5 Left). To create demonstration data, a human demonstrator drew feasible trajectories: 6 trajectories for each of 75 randomly given task parameters. Figure 4 illustrates example demonstration trajectories. More details on data generation and the training/validation/test split are included in Appendix D.2.2.

Figure 5: Left: the task parameter of the 2D mobile robot's goal-reaching task. Right: symmetry transformations.
Symmetries: There are three symmetries that preserve the relative geometric relation between the wall and the initial robot position: (i) flipping the robot over the wall axis $\hat x$, (ii) rotating the robot around the origin by 90, 180, or 270 degrees, and (iii) rotating the wall and the robot around the origin by the same amount (see Figure 5 Right). These symmetry transformations can be described as group actions with the symmetry group $H := p4m \times SO(2)$, where p4m is a specific type of wallpaper group and SO(2) is the group of 2×2 rotation matrices. We denote $p4m := \{(i,j) \mid i\in\{0,1\},\ j\in\{0,1,2,3\}\}$, where i ∈ {0,1} represents flipping and j ∈ {0,1,2,3} represents rotation by jπ/2. Details on the group operations in H and the group actions on T and X × T are discussed in Appendix D.2.1.

Figure 6: Illustration of the (p4m, SO(2))-equivariant map h̄(τ).

Construction of h̄: Figure 6 visualizes the equivariant map $\bar h(\tau) = (\bar h_1(\tau), \bar h_2(\tau))$. $\bar h_1(\tau) \in p4m$ is determined by the robot's position relative to the wall, as shown in Figure 6 (Left). $\bar h_2(\tau) \in SO(2)$ is determined by the wall axis angle $\omega_w$ (see Figure 6 Right). The map h̄(τ) is equivariant with respect to p4m × SO(2); more details, including the proof, are in Appendix D.2.1.

Table 1: Reconstruction Error (RE), Mutual Information (MI), and Negative Log-Likelihood (NLL); the lower, the better. Success Rate (SR); the higher, the better.

    Method          RE (↓)   MI (↓)   NLL (↓)       SR (↑)
    TC-VAE [37]     0.257    0.268    1.50 × 10^4   53.18%
    MMP             0.223    0.487    1.49 × 10^4   50.08%
    MMP + indep     0.225    0.329    1.48 × 10^4   52.39%
    EMMP            0.223    0.082    1.25 × 10^4   92.40%
    EMMP + indep    0.229    0.077    1.24 × 10^4   86.66%

Assuming a one-dimensional latent space, we train fully connected autoencoders for the MMPs and EMMPs, and a TC-VAE in which temporal convolutional neural networks are used as in [37]. Table 1 reports the four evaluation metrics, where the success rate is measured as follows: (i) we sample (τ, z) from the uniform distribution on T and the learned density p(z), (ii) we generate trajectories via f(z,τ), and (iii) a generated trajectory is considered successful if it is consistent with the task parameter and reaches the goal without colliding with the wall. More details are in Appendix D.2.2.

Figure 7: p(z|τ) of the MMP + indep and EMMP.

First of all, the EMMP methods show much higher success rates than both the MMP methods and TC-VAE, which is in large part attributable to the independence between z and τ and to better latent density fitting, as shown in Table 1. As the MI scores of MMP + indep (0.329) and EMMP (0.082) show, using invariant and equivariant mappings is much more effective than the independence regularization term at making z and τ independent, which in turn makes learning p(z) easier (see Figure 7).

Figure 8: Top: p(z). Bottom: generated trajectories.

Second, the success rates of the MMPs and TC-VAE do not show a noticeable difference. This suggests that the network architecture (temporal CNN or FCN) and autoencoder type (VAE or AE) do not have a significant impact on performance. A further comparison using EMMP in this regard is provided in Appendix D.2.3. Third, while the independence regularization term slightly improves the success rate of MMP, it has a negative effect for EMMP: given the already low MI of EMMP, further reducing it at the expense of RE appears to harm the success rate.

Figure 8 (Top) visualizes the learned latent-space densities p(z) of TC-VAE and EMMP + indep. We select six equally spaced latent points $\{z_i\}_{i=1}^6$ in Z for each model, shown as blue points in Figure 8; the trajectories generated from them by the decoders are shown in Figure 8 (Bottom). EMMP + indep shows a much higher success rate than TC-VAE; the support of p(z) in our method has two connected components, each corresponding to the set of trajectories that pass through one of the two passages closest to the robot.
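The generation procedure used for this evaluation can be sketched as follows, assuming trained `encode`/`decode` callables (hypothetical interfaces). Because z is independent of τ under the homeomorphic manifold assumption, a single GMM fit of p(z) serves every task parameter.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_latent_density(encode, data, n_components=2):
    """Fit the GMM p(z) on encoded demonstrations. `encode(x, tau)` is the
    trained invariant encoder (assumed interface); `data` is a list of
    (tau, x) pairs."""
    z = np.stack([encode(x, tau) for tau, x in data])
    return GaussianMixture(n_components=n_components).fit(z)

def generate(decode, gmm, tau, n=10):
    """Sample z ~ p(z) and decode candidate trajectories f(z, tau)."""
    z, _ = gmm.sample(n)
    return [decode(z_i, tau) for z_i in z]
```

The two GMM components here mirror the two connected components of the support of p(z) observed in Figure 8; the number of components is otherwise a modeling choice.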
4.2 Water-Pouring Task of a Franka Panda Robot

Figure 9: Demonstration data.

In this section, we consider a water-pouring task for a Franka Emika Panda robot arm: the robot initially holds the bottle upright and is required to pour 150 g of water into the cup (see Figure 9 Left). As shown in Figure 9 Right, a human demonstrator provides multiple water-pouring trajectories, which are assumed to form a continuous manifold. Our goal is to learn the motion manifold primitive for an arbitrary cup position, bottle initial pose, and amount of water in the bottle. In particular, depending on the amount of water in the bottle, the demonstration trajectories have very different characteristics (see Figure 15 Left in Appendix D.3.3).

The configuration space is the space of bottle poses, $\mathcal{Q} = SE(3)$, and the trajectory space is $\mathcal{X} = SE(3)^T$. The task parameter τ consists of the cup position on the table $(q_c^1, q_c^2) \in \mathbb{R}^2$, the bottle's initial pose $T_b \in SE(3)$, and the mass of the water $m_w \in [0.2, 0.41]$ (see Figure 10 Left). We collect 5 demonstration trajectories for each of 35 task parameters, 175 trajectories in total (details are included in Appendix D.3.2). The demonstrations are collected by recording videos of a human demonstrator performing the water-pouring task and extracting the bottle's SE(3) trajectories using AprilTag [49]. Figure 9 Right shows the 5 trajectories demonstrated for a single task parameter τ. More details on the task parameter selection, data generation, and data split for training, validation, and testing are included in Appendix D.3.2.

Figure 10: Left: the task parameter of the water-pouring task. Right: symmetry transformations.

Symmetries: There are three symmetries that preserve the relative positional and geometric relationship between the cup and the bottle: (i) translating the cup and the bottle by the same amount, (ii) rotating the bottle around the cup, and (iii) rotating the bottle around itself (see Figure 10 Right). These symmetry transformations can be described as group actions with the symmetry group $H := \mathbb{R}^2 \times SO(2) \times SO(2)$. Details on the group operations in H and the group actions on T and X × T are discussed in Appendix D.3.1.

Figure 11: The (R^2, SO(2), SO(2))-equivariant map h̄(τ).

Definition of H and h̄: We define $\bar h(\tau) = (\bar h_1(\tau), \bar h_2(\tau), \bar h_3(\tau))$ as shown in Figure 11. $\bar h_1(\tau)$ is the cup's position (Figure 11 Left); $\bar h_2(\tau)$ is determined by the position of the bottle relative to the cup (Figure 11 Middle); $\bar h_3(\tau)$ is determined by the bottle's orientation relative to the base frame (Figure 11 Right).

Assuming a two-dimensional latent space, we train EMMP + indep with fully connected neural networks, and TC-VAE. Table 2 reports the four evaluation metrics (RE, MI, NLL, SR), where a generated robot motion is considered successful if at least some water is poured into the cup without spilling, and one additional metric measuring the error in the amount of poured water (150 g of water was poured in the demonstrations), which we refer to as the Water-Pouring Error (WPE). To generate motions given a task parameter τ from T, we sample z from the learned density p(z) and generate the bottle's trajectory via f(z,τ). If the generated SE(3) trajectory is out of the robot's workspace, i.e., no inverse kinematics solution exists, we re-sample z until we obtain a feasible trajectory. We measure SR and WPE on 4 task parameters with 5 samples each, running a total of 20 generated trajectories on the real Panda robot.
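The re-sampling loop described above can be sketched as follows; `decode`, `gmm`, and `has_ik_solution` are assumed interfaces (the last returning True iff an inverse kinematics solution exists for every pose along the trajectory), and the retry cap is our own addition.

```python
def sample_feasible_trajectory(decode, gmm, tau, has_ik_solution, max_tries=100):
    """Rejection sampling of Sec. 4.2: draw z ~ p(z), decode the bottle's
    SE(3) trajectory f(z, tau), and reject it if it leaves the robot's
    workspace."""
    for _ in range(max_tries):
        z, _ = gmm.sample(1)
        trajectory = decode(z[0], tau)
        if has_ik_solution(trajectory):
            return trajectory
    raise RuntimeError("no kinematically feasible trajectory found")
```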
If the generated SE(3) trajectory is outof the robot’s workspace, i.e., the inverse kinematics solution does not exist, then we re-sample z7until we obtain a feasible trajectory. We measure the SR and WPE with 4 task parameters each with5 samples, where we run a total of 20 generated trajectories on the real Panda robot.Table 2: Reconstruction Error(RE), Mutual Information (MI),Negative Log-Likelihood (NLL),and Water-Pouring Error (WPE);the lower, the better. Success Rate(SR); the higher, the better.Method TC-V AE EMMP + indepRE (↓) 0.183 0.129MI (↓) 0.758 0.081NLL (↓)1.18×1056.40×104SR (↑) 9/20 20 /20WPE (↓) 86.8±59.1 23.0 ±11.9As shown in Table 2, the EMMP + indep significantly outper-forms TC-V AE. The large margin in RE and MI leads to lowerNLL and a much higher task success rate of EMMP. Out of 20trials of the TC-V AE, 7 trials fail to pour water into the cup and4 trials spill water, resulting in only 9 successful pourings. Onthe other hand, EMMP + indep results in a 100% success rate.In addition, the WPE in EMMP + indep is also much lowerthan that of TC-V AE.While the EMMP’s WPE (23.0g) out of 150g seems high, wenote that this error is not caused by the manifold primitivelearning error, but rather is attributed to the error caused whenprocessing AprilTags and smoothing the trajectories. Even when we replay the demonstration tra-jectories on the robot, the water pouring error exists and it is 19.3g on average, implying 23.0g erroris not that high.Figure 12: Obstacle avoid-ance.To show the strong adaptability of our framework, we perform anobstacle avoidance task using EMMP + indep. Suppose there is anobstacle, not seen during training, that blocks some water-pouringtrajectories in the learned manifold primitives (e.g., Figure 12 Left).Since we have learned a motion manifold, not a single trajectory,even if some trajectories are blocked, we can easily find an alterna-tive collision-free trajectory from the learned manifold primitives asshown in Figure 12 Right . More details on the collision detectionand obstacle avoidance algorithms are in Appendix D.3.25 LimitationsOne of the key assumptions in our framework is the homeomorphic manifold assumption, that isMτfor all τ∈ T are homeomorphic, which may not hold depending on the problem. In suchcases, instead of p(z), we need to fit p(z|τ), which would require more demonstration data. Second,to construct an invariant encoder and equivariant decoder, we need to define an equivariant map ̄hfor a given symmetry group H. Although, in our case studies, it is relatively straightforward toconstruct ̄h, this process may not be trivial or even impossible depending on the problem. Lastly, asour experimental results show, the independence regularization term itself is not sufficient to enforceindependence between zandτ. Finding a better independence regularization method would be animportant future research direction.6 ConclusionIn this paper, we have proposed a new family of highly adaptable movement primitive models,motion manifold primitives – which is a set of trajectory manifolds {Mτ}for all task parameters τ–, and an autoencoder-based framework for learning them. 
To tackle the challenges in learning M_τ, such as the need for many demonstrations, (i) under the homeomorphic manifold assumption we develop the motion manifold primitives framework and introduce the independence regularization term, enforcing independence between z and τ so that it suffices to learn p(z) instead of p(z|τ), and (ii) we propose equivariant motion manifold primitives for an arbitrary symmetry group H in the robot task, together with a method to parametrize them via an invariant encoder and an equivariant decoder. Extensive experiments confirm the strong adaptability of our framework and show that equivariant manifold modeling is highly effective at learning accurate M_τ, leading to superior performance over the existing method by a significant margin.

Acknowledgments

B. Lee, S. Kim, and F. C. Park were supported in part by SRRC NRF grant RS-2023-00208052, IITP-MSIT grant 2021-0-02068 (SNU AI Innovation Hub), IITP-MSIT grant 2022-0-00480 (Training and Inference Methods for Goal-Oriented AI Agents), KIAT grant P0020536 (HRD Program for Industrial Innovation), ATC+ MOTIE Technology Innovation Program grant 20008547, SNU-AIIS, SNU-IAMD, SNU BK21+ Program in Mechanical Engineering, and SNU Institute for Engineering Research. Y. Lee was the beneficiary of an individual grant from CAINS supported by a KIAS Individual Grant (AP092701) via the Center for AI and Natural Sciences at Korea Institute for Advanced Study.

References

[1] H. Ravichandar, A. S. Polydoros, S. Chernova, and A. Billard. Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems, 3:297–330, 2020.

[2] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.

[3] Z. Zhu and H. Hu. Robot learning from demonstration in robotic assembly: A survey. Robotics, 7(2):17, 2018.

[4] M. Saveriano, F. J. Abu-Dakka, A. Kramberger, and L. Peternel. Dynamic movement primitives in robotics: A tutorial survey. arXiv preprint arXiv:2102.03861, 2021.

[5] A. J. Ijspeert, J. Nakanishi, H. Hoffmann, P. Pastor, and S. Schaal. Dynamical movement primitives: learning attractor models for motor behaviors. Neural Computation, 25(2):328–373, 2013.

[6] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Trajectory formation for imitation with nonlinear dynamical systems. In Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No. 01CH37180), volume 2, pages 752–757. IEEE, 2001.

[7] A. J. Ijspeert, J. Nakanishi, and S. Schaal. Learning rhythmic movements by demonstration using nonlinear oscillators. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), pages 958–963, 2002.

[8] S. Schaal, P. Mohajerian, and A. Ijspeert. Dynamics systems vs. optimal control—a unifying view. Progress in Brain Research, 165:425–445, 2007.

[9] A. Pervez, A. Ali, J.-H. Ryu, and D. Lee. Novel learning from demonstration approach for repetitive teleoperation tasks. In 2017 IEEE World Haptics Conference (WHC), pages 60–65. IEEE, 2017.

[10] Y. Fanger, J. Umlauft, and S. Hirche. Gaussian processes for dynamic movement primitives with application in knowledge-based cooperation. In 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3913–3919. IEEE, 2016.

[11] J. Umlauft, Y. Fanger, and S. Hirche.
Bayesian uncertainty modeling for programming by demonstration. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 6428–6434. IEEE, 2017.

[12] A. Pervez, Y. Mao, and D. Lee. Learning deep movement primitives using convolutional neural networks. In 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), pages 191–197. IEEE, 2017.

[13] S. Bahl, M. Mukadam, A. Gupta, and D. Pathak. Neural dynamic policies for end-to-end sensorimotor learning. Advances in Neural Information Processing Systems, 33:5058–5069, 2020.

[14] S. M. Khansari-Zadeh and A. Billard. Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Transactions on Robotics, 27(5):943–957, 2011.

[15] K. Neumann, A. Lemme, and J. J. Steil. Neural learning of stable dynamical systems based on data-driven Lyapunov candidates. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1216–1222. IEEE, 2013.

[16] S. M. Khansari-Zadeh and A. Billard. Learning control Lyapunov function to ensure stability of dynamical system-based robot reaching motions. Robotics and Autonomous Systems, 62(6):752–765, 2014.

[17] A. Lemme, K. Neumann, R. F. Reinhart, and J. J. Steil. Neural learning of vector fields for encoding stable dynamical systems. Neurocomputing, 141:3–14, 2014.

[18] K. Neumann and J. J. Steil. Learning robot motions with stable dynamical systems under diffeomorphic transformations. Robotics and Autonomous Systems, 70:1–15, 2015.

[19] C. Blocher, M. Saveriano, and D. Lee. Learning stable dynamical systems using contraction theory. In 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), pages 124–129. IEEE, 2017.

[20] N. Figueroa and A. Billard. A physically-consistent Bayesian non-parametric mixture model for dynamical system learning. In CoRL, pages 927–946, 2018.

[21] V. Sindhwani, S. Tu, and M. Khansari. Learning contracting vector fields for stable imitation learning. arXiv preprint arXiv:1804.04878, 2018.

[22] J. Z. Kolter and G. Manek. Learning stable deep dynamics models. Advances in Neural Information Processing Systems, 32, 2019.

[23] G. Maeda, M. Ewerton, T. Osa, B. Busch, and J. Peters. Active incremental learning of robot movement primitives. In Conference on Robot Learning, pages 37–46. PMLR, 2017.

[24] N. Jaquier, D. Ginsbourger, and S. Calinon. Learning from demonstration with model-based Gaussian process. In Conference on Robot Learning, pages 247–257. PMLR, 2020.

[25] M. Schneider and W. Ertel. Robot learning by demonstration with local Gaussian process regression. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 255–260. IEEE, 2010.

[26] S. Calinon. A tutorial on task-parameterized movement learning and retrieval. Intelligent Service Robotics, 9:1–29, 2016.

[27] D. A. Duque, F. A. Prieto, and J. G. Hoyos. Trajectory generation for robotic assembly operations using learning by demonstration. Robotics and Computer-Integrated Manufacturing, 57:292–302, 2019.

[28] C. Yang, C. Chen, N. Wang, Z. Ju, J. Fu, and M. Wang. Biologically inspired motion modeling and neural control for robot learning from demonstrations. IEEE Transactions on Cognitive and Developmental Systems, 11(2):281–291, 2018.

[29] S. Chernova and M. Veloso. Confidence-based policy learning from demonstration using Gaussian mixture models. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1–8, 2007.

[30] A. Paraschos, C. Daniel, J. R. Peters, and G.
Neumann. Probabilistic movement primitives. Advances in Neural Information Processing Systems, 26, 2013.

[31] Y. Huang, L. Rozo, J. Silvério, and D. G. Caldwell. Kernelized movement primitives. The International Journal of Robotics Research, 38(7):833–852, 2019.

[32] T. Osa, A. M. G. Esfahani, R. Stolkin, R. Lioutikov, J. Peters, and G. Neumann. Guiding trajectory optimization by demonstrated distributions. IEEE Robotics and Automation Letters, 2(2):819–826, 2017.

[33] D.-H. Park, H. Hoffmann, P. Pastor, and S. Schaal. Movement reproduction and obstacle avoidance with dynamic movement primitives and potential fields. In Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, pages 91–98. IEEE, 2008.

[34] H. Hoffmann, P. Pastor, D.-H. Park, and S. Schaal. Biologically-inspired dynamical systems for movement generation: Automatic real-time goal adaptation and obstacle avoidance. In 2009 IEEE International Conference on Robotics and Automation, pages 2587–2592. IEEE, 2009.

[35] S. M. Khansari-Zadeh and A. Billard. A dynamical system approach to realtime obstacle avoidance. Autonomous Robots, 32:433–454, 2012.

[36] M. Ginesi, D. Meli, A. Calanca, D. Dall'Alba, N. Sansonetto, and P. Fiorini. Dynamic movement primitives: Volumetric obstacle avoidance. In 2019 19th International Conference on Advanced Robotics (ICAR), pages 234–239. IEEE, 2019.

[37] M. Noseworthy, R. Paul, S. Roy, D. Park, and N. Roy. Task-conditioned variational autoencoders for learning movement primitives. In Conference on Robot Learning, pages 933–944. PMLR, 2020.

[38] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.

[39] Y. Lee, H. Kwon, and F. Park. Neighborhood reconstructing autoencoders. Advances in Neural Information Processing Systems, 34:536–546, 2021.

[40] Y. Lee, S. Yoon, M. Son, and F. C. Park. Regularized autoencoders for isometric representation learning. In International Conference on Learning Representations, 2022.

[41] C. Jang, Y. Lee, Y.-K. Noh, and F. C. Park. Geometrically regularized autoencoders for non-Euclidean data. In The Eleventh International Conference on Learning Representations.

[42] Y. Lee, S. Kim, J. Choi, and F. Park. A statistical manifold framework for point cloud data. In International Conference on Machine Learning, pages 12378–12402. PMLR, 2022.

[43] Y. Lee and F. C. Park. On explicit curvature regularization in deep generative models. arXiv preprint arXiv:2309.10237, 2023.

[44] Y. Lee. A geometric perspective on autoencoders. arXiv preprint arXiv:2309.08247, 2023.

[45] S. Kim, B. Lim, Y. Lee, and F. C. Park. SE(2)-equivariant pushing dynamics models for tabletop object manipulations. In Conference on Robot Learning, pages 427–436. PMLR, 2023.

[46] D. Wang, R. Walters, and R. Platt. SO(2)-equivariant reinforcement learning. arXiv preprint arXiv:2203.04439, 2022.

[47] H. Huang, D. Wang, R. Walters, and R. Platt. Equivariant transporter network. arXiv preprint arXiv:2202.09400, 2022.

[48] M. I. Belghazi, A. Baratin, S. Rajeshwar, S. Ozair, Y. Bengio, A. Courville, and D. Hjelm. Mutual information neural estimation. In International Conference on Machine Learning, pages 531–540. PMLR, 2018.

[49] E. Olson. AprilTag: A robust and flexible visual fiducial system. In 2011 IEEE International Conference on Robotics and Automation, pages 3400–3407. IEEE, 2011.

[50] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction.
In Proceedings of the 28th International Conference on International Conference on Machine Learning, pages 833–840, 2011.

[51] A. Creswell, Y. Mohamied, B. Sengupta, and A. A. Bharath. Adversarial information factorization. arXiv preprint arXiv:1711.05175, 2017.

[52] M. M. Bronstein, J. Bruna, T. Cohen, and P. Veličković. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478, 2021.

[53] S. Albawi, T. A. Mohammed, and S. Al-Zawi. Understanding of a convolutional neural network. In 2017 International Conference on Engineering and Technology (ICET), pages 1–6. IEEE, 2017.

[54] T. Cohen and M. Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pages 2990–2999. PMLR, 2016.

[55] T. S. Cohen, M. Geiger, J. Köhler, and M. Welling. Spherical CNNs. arXiv preprint arXiv:1801.10130, 2018.

[56] A. Simeonov, Y. Du, A. Tagliasacchi, J. B. Tenenbaum, A. Rodriguez, P. Agrawal, and V. Sitzmann. Neural descriptor fields: SE(3)-equivariant object representations for manipulation. In 2022 International Conference on Robotics and Automation (ICRA), pages 6394–6400. IEEE, 2022.

[57] S. Kim, T. Ahn, Y. Lee, J. Kim, M. Y. Wang, and F. C. Park. DSQNet: A deformable model-based supervised learning algorithm for grasping unknown occluded objects. IEEE Transactions on Automation Science and Engineering, 2022.

A Proofs of Propositions

Proof of Group-Equivariance of (3.1). The equivariance of the ground-truth EMMP and the invariance of g prove the proposition:

$$\mathcal{Z}_{h\cdot\tau} = g(\mathcal{M}_{h\cdot\tau},\ h\cdot\tau) = g\Big(\bigcup_{x\in\mathcal{M}_\tau} h\cdot(x,\tau)\Big) = g\Big(\bigcup_{x\in\mathcal{M}_\tau} (x,\tau)\Big) = g(\mathcal{M}_\tau,\tau) = \mathcal{Z}_\tau. \tag{2}$$

Proof of Group-Equivariance of (3.2). The equivariance of f and the invariance of g prove the proposition:

$$\widehat{\mathcal{M}}_{h\cdot\tau} = f(\mathcal{Z}_{h\cdot\tau},\ h\cdot\tau) = \bigcup_{z\in\mathcal{Z}_\tau} h\cdot(f(z,\tau),\tau)_x = \bigcup_{x\in\widehat{\mathcal{M}}_\tau} h\cdot(x,\tau)_x = \{h\cdot(x,\tau)_x \mid x\in\widehat{\mathcal{M}}_\tau\}. \tag{3}$$

Proof of Group-Equivariance of (3.3). The invariance of $g_\phi$ follows from the equivariance of h̄:

$$g_\phi(h\cdot(x,\tau)) = G_\phi\big(\bar h(h\cdot\tau)^{-1}\cdot(h\cdot(x,\tau))\big) = G_\phi\big(((h\,\bar h(\tau))^{-1}h)\cdot(x,\tau)\big) = G_\phi\big(\bar h(\tau)^{-1}\cdot(x,\tau)\big) = g_\phi(x,\tau). \tag{4}$$

The equivariance of $f_\theta$ can be seen as follows:

$$f_\theta(z, h\cdot\tau) = \bar h(h\cdot\tau)\cdot\big(F_\theta(z,\ \bar h(h\cdot\tau)^{-1}\cdot(h\cdot\tau)),\ \bar h(h\cdot\tau)^{-1}\cdot(h\cdot\tau)\big)_x = h\,\bar h(\tau)\cdot\big(F_\theta(z,\ \bar h(\tau)^{-1}\cdot\tau),\ \bar h(\tau)^{-1}\cdot\tau\big)_x = [h\cdot(f_\theta(z,\tau),\tau)]_x. \tag{5}$$

B Related Works

In this section, we provide an overview of areas related to our work.

B.1 Movement Primitives

In this section, we consider any mathematical representation used to describe motions (e.g., trajectories) that perform a given task, as specified by a task parameter variable τ, as a movement primitive. Dynamic movement primitives encode motion trajectories as time-dependent nonlinear dynamical systems consisting of mass-spring-damper systems and parametric force terms, with task parameters given by the initial and final configurations [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. Stable dynamical system-based approaches use state-dependent dynamical systems that are globally asymptotically stable [14, 15, 16, 17, 18, 19, 20, 21, 22]. In these dynamical systems-based methods, the initial and goal configurations can be considered the task parameters, and the solution trajectory that connects the two configurations is the motion described by the system. ProMP [30] represents motions as a distribution over trajectories; by conditioning the distribution on the initial configuration, a motion trajectory is obtained, so the initial and final configurations again serve as the task parameters.
TP-GMM [26] tries to adapt to unseen task parameters by encoding trajectories as GMMs seen from multiple frames; the task parameters of TP-GMM are the frames of the GMMs. Given unseen frames, TP-GMM generates trajectories by calculating joint distributions between the GMMs. [23, 24, 25] parameterize demonstration trajectories using Gaussian Process Regression, and [28, 27, 29] represent motions using GMR (with no task-parameterization).

However, the task parameters of most movement primitives are strictly restricted to the initial and final configurations, which limits the range of tasks that can be parameterized. In the case of water pouring, the cup's position and the amount of water cannot be represented by initial and final configurations. By adopting a conditional variational autoencoder structure, MMPs and EMMPs provide freedom in defining task parameters.

B.2 Autoencoder-Based Manifold Learning

Autoencoders have gained prominence in recent years for identifying, and generating samples from, the underlying low-dimensional manifold structure of a data distribution. The main reason autoencoders are frequently adopted for manifold learning is that they learn latent-space coordinates along with the manifolds. To learn more accurate manifolds, researchers have introduced additional regularization terms [38, 39, 40, 42, 43, 41, 50, 44]. For the specific structure of a conditional variational autoencoder, where the decoder receives an additional conditional input, the need to disentangle the conditional inputs and latent values arises. [51] introduced a regularization term that disentangles the decoder's input spaces by adding an auxiliary neural network that estimates the conditional inputs from the latent values and regularizing the autoencoder so that this estimation becomes harder. However, unlike the independence regularization term we introduce, that regularization term does not necessarily guarantee independence between the two spaces.

B.2.1 Autoencoder-Based Motion Manifold Primitives

In this section, we introduce an existing motion manifold primitive framework, TC-VAE [37]. TC-VAE aims to parameterize the motion manifold given a task parameter within an autoencoder framework. As TC-VAE adopts the structure of [51], its decoder takes task parameter inputs in addition to the latent value inputs. TC-VAE also adopts the regularization term of [51] for disentangling the task parameters and the latent values, and thus shares the shortcoming of not guaranteeing independence between the latent space and the conditional input space.

B.3 Equivariant Models in Robotics

Invariance and equivariance properties have served in deep learning models as an inductive bias for good generalization and data-efficient training [52]. Translation equivariance in convolutional neural networks (CNNs) has been effective for image recognition tasks [53]. Group-equivariant CNNs have extended the equivariance of CNNs to more complex symmetries, e.g., the SO(3)-equivariance achieved by Spherical CNNs [54, 55]. In robot manipulation tasks, [56] proposed an SE(3)-equivariant object representation, and [45] introduced SE(2)-equivariant dynamics model learning for pushing manipulation.
Most existing equivariant models in robotics are restricted to certain types of groups, whereas our work applies to tasks with arbitrary group symmetries.

C Geometric Preliminaries

This section provides preliminary knowledge of the geometric tools used in the paper.

C.1 Manifold Hypothesis

Real-world observations often require a large number of variables to represent numerically. For example, an SE(3) trajectory of length 500 lives in the high-dimensional data space $\mathbb{R}^{8000}$. Dealing with such high-dimensional data is very challenging, as the amount of data needed grows exponentially with the dimensionality, a phenomenon known as the curse of dimensionality.

The manifold hypothesis states that high-dimensional data (e.g., trajectories) approximately lie on some lower-dimensional manifold embedded in the high-dimensional space, suggesting that the data can in fact be described by a relatively small number of variables. For example, to describe points on a two-dimensional sphere, which are represented as unit vectors in $\mathbb{R}^3$, we only need two variables, e.g., the spherical (θ, φ)-coordinates.

Of particular relevance to this paper, consider a set of length-n trajectories in $\mathbb{R}^2$ that start from the robot and end at the star, as shown in Figure 13 (trajectories A, B, C, D, E). These five trajectories are elements of the high-dimensional trajectory space $\mathbb{R}^{2\times n}$, yet they clearly do not fill up the entire space; rather, they appear to form a lower-dimensional space. Each trajectory may be approximately represented by one variable indicating how much it bends down or up relative to the straight line between the robot and the star, meaning that these five trajectories approximately lie on a one-dimensional manifold.

Figure 13: Trajectories in a high-dimensional data space lie on a one-dimensional manifold.

C.2 Homeomorphism

A homeomorphism is a continuous, bijective function with a continuous inverse between two topological spaces (in this paper, two manifolds). Two manifolds are said to be homeomorphic if there exists a homeomorphism between them. Intuitively, two manifolds are homeomorphic if one can be smoothly deformed into the other. For example, a sphere can be smoothly deformed into an ellipsoid, so a sphere and an ellipsoid are homeomorphic; there is no way to smoothly deform a sphere into a torus, so a sphere and a torus are not homeomorphic.

Suppose there are two m-dimensional manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$. Let the latent space of $\mathcal{M}_1$ be $\mathcal{Z}_1 \subseteq \mathbb{R}^m$ and the coordinate space of $\mathcal{M}_2$ be $\mathcal{Z}_2 \subseteq \mathbb{R}^m$, and let $f_1: \mathcal{Z}_1 \to \mathcal{M}_1$ and $f_2: \mathcal{Z}_2 \to \mathcal{M}_2$ be invertible maps satisfying $f_1(\mathcal{Z}_1) = \mathcal{M}_1$ and $f_2(\mathcal{Z}_2) = \mathcal{M}_2$. If $\mathcal{M}_1$ and $\mathcal{M}_2$ are homeomorphic, i.e., there exists $g: \mathcal{M}_1 \to \mathcal{M}_2$ such that $g(\mathcal{M}_1) = \mathcal{M}_2$, then $\mathcal{Z}_2 = (f_2^{-1}\circ g\circ f_1)(\mathcal{Z}_1)$. $\mathcal{M}_1$ and $\mathcal{M}_2$ can share a latent space by replacing $f_2$ with $f_2'(\cdot) := (g\circ f_1)(\cdot)$.

C.3 Group and Group Action

A group H is a non-empty set together with a group operation $*: H\times H \to H$, which we denote simply by $h_1 * h_2 = h_1 h_2$.
A group H must satisfy four conditions:

• H contains an identity e.
• H contains inverses, i.e., $h^{-1}h = hh^{-1} = e$ for all h ∈ H.
• The group operation is associative, i.e., $h_1(h_2 h_3) = (h_1 h_2)h_3$ for all $h_1, h_2, h_3 \in H$.
• H is closed under the operation, i.e., $h_1 h_2 \in H$ for all $h_1, h_2 \in H$.

A group action $\cdot: H\times X \to X$ is a function on the product space of a group and a set satisfying two conditions:

$$e\cdot x = x, \qquad h_1\cdot(h_2\cdot x) = (h_1 h_2)\cdot x \quad \text{for all } h_1, h_2 \in H,$$

where e ∈ H is the identity.

C.4 Equivariance and Invariance

Given a group H, a function $f: X \to Y$, and group actions · defined on X and Y, the function f is said to be H-equivariant if

$$f(h\cdot x) = h\cdot f(x)$$

for all x ∈ X and h ∈ H, and H-invariant if

$$f(h\cdot x) = f(x)$$

for all x ∈ X and h ∈ H. Invariance is the special case of equivariance in which the group action on Y is trivial, i.e., h·y = y for all h ∈ H and y ∈ Y.

D Experimental and Implementation Details

Throughout the experiments, we used RTX 2080 Ti, RTX 3080 Ti, and RTX 3090 GPUs for training the models; each experiment takes from a few hours to 10 hours depending on the model.

D.1 Evaluation Metrics

Reconstruction Error: We measure the reconstruction error on the test dataset as

$$\text{RE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\frac{1}{M_i}\sum_{j=1}^{M_i}\frac{1}{T}\ d^2_\mathcal{X}\big(f_\theta(g_\phi(x_{ij},\tau_i),\tau_i),\ x_{ij}\big)}. \tag{6}$$

Latent-Task Dataset: To calculate the mutual information and negative log-likelihood, we define a dataset of (z, τ) pairs. To make the dataset large enough, we first randomly augment the (x, τ) pairs in the training dataset 100 times; every z is then the encoded value of its pair, z = g_φ(x, τ).

Mutual Information: The mutual information between z and τ is measured using the Mutual Information Neural Estimator (MINE) [48], which estimates mutual information by maximizing a lower bound derived from the Donsker–Varadhan representation:

$$D_{KL}(\mathbb{Z}\,\|\,\mathbb{T}) \geq \sup_{F\in\mathcal{F}} \mathbb{E}_{\mathbb{Z}}[F] - \log\big(\mathbb{E}_{\mathbb{T}}[e^F]\big), \tag{7}$$

where $\mathcal{F}$ is any class of functions $F: \Omega \to \mathbb{R}$; in our case, $\Omega = \mathcal{Z}\times\mathcal{T}$. Replacing $\mathcal{F}$ by a parametric family $\mathcal{F}_\Theta$, the mutual information is estimated as

$$I_\Theta(Z,\mathcal{T}) = \sup_{\theta\in\Theta}\ \mathbb{E}_{p(z,\tau)}[F_\theta] - \log\big(\mathbb{E}_{p(z)p(\tau)}[e^{F_\theta}]\big). \tag{8}$$

We train MINE on the latent-task dataset for 1,500 iterations with a batch size of 5,000, identically for all models.

Negative Log-Likelihood: Given (z, τ) from the latent-task dataset, we calculate the negative log-likelihood $-\log p_{\mathcal{M}_\tau}(f_\theta(z,\tau))$ in the trajectory space X via

$$p_{\mathcal{M}_\tau}(f_\theta(z,\tau)) = p_{\mathcal{Z}_\tau}(z)\,\big|\det[J^T J]\big|^{-1/2}, \tag{9}$$

where $J = \frac{\partial f_\theta}{\partial z}(z,\tau)$ is the decoder Jacobian.
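For illustration, a minimal PyTorch sketch of the MINE lower bound of Eq. (8) is given below; the statistics-network architecture, the permutation trick for the product of marginals, and the commented training loop are illustrative assumptions, not the exact implementation used in the paper.

```python
import math
import torch

class MINE(torch.nn.Module):
    """Sketch of the MINE estimator of Eq. (8) [48]. F_theta is a small
    statistics network on concatenated (z, tau) pairs."""
    def __init__(self, dim_z, dim_tau, hidden=128):
        super().__init__()
        self.F = torch.nn.Sequential(
            torch.nn.Linear(dim_z + dim_tau, hidden), torch.nn.ELU(),
            torch.nn.Linear(hidden, 1))

    def lower_bound(self, z, tau):
        joint = self.F(torch.cat([z, tau], dim=-1)).mean()
        tau_perm = tau[torch.randperm(len(tau))]        # ~ p(z)p(tau)
        scores = self.F(torch.cat([z, tau_perm], dim=-1))
        marginal = torch.logsumexp(scores, dim=0) - math.log(len(tau))
        return (joint - marginal).squeeze()             # maximize over theta

# Training loop sketch: maximize the bound (minimize its negative).
# mine = MINE(dim_z=1, dim_tau=4)
# opt = torch.optim.Adam(mine.parameters(), lr=1e-4)
# for z_batch, tau_batch in loader:
#     loss = -mine.lower_bound(z_batch, tau_batch)
#     opt.zero_grad(); loss.backward(); opt.step()
```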
We de-notep4m:={(i, j)|i∈ {0,1}, j∈ {0,1,2,3}}, where i∈ {0,1}represents flipping andj∈ {0,1,2,3}represents nπ/2rotation. Throughout this section, we represent an SO(2) elementR=cosα−sinαsinα cosα∈SO(2)simply as α.Given two group elements ((a, b), α),((c, d), β)∈p4m×SO(2), the group operation is defined as:((a, b), α)((c, d), β) = (( mod(a+c,2),mod(b+ (−1)ad,4)), α+β), (10)where mod (x, y)denotes the remainder ofxy.Group Actions: The procedure of the group actions of H=p4m×SO(2)can be explained asfollows: (i) flip the robot over the wall axis (ii) rotate the robot around the originnπ2, and finally,(iii) rotated the robot and the wall. Given a task parameter τ= (q1r, q2r, ωw)and a group elementh= ((a, b), α), the group action h·τis then defined as:h·τ= (Rot(α+ωw+bπ2)∗flip(a)∗Rot(−ωw)∗(q1r, q2r), α+ωw) (11)where ∗denotes matrix multiplication, flip (a) :=1 00−1a, and Rot (α) :=cosα−sinαsinα cosα.Given a trajectory x={(q1i, q2i)}Ti=1, a task parameter τ= (q1r, q2r, ωw)and a group elementh= ((a, b), α), the group action h·(x, τ)is defined as :h·(x, τ) = ({Rot(α+ωw+bπ2)∗(q1i, q2i)}Ti=1, h·τ). (12)Group-Equivariant Map ̄h:Given a task parameter τ, ̄h(τ)can be divided into two elements, ̄h(τ) = ( ̄h1(τ), ̄h2(τ)), where ̄h1(τ) = ( ̄h11(τ), ̄h21(τ))∈p4mand ̄h2(τ)∈SO(2). Figure 14illustrates ̄h(τ), and its equivariance for ̄h1using two group actions hflip= (1,0,0)andhrot90=(0,1,0). It can be seen by the commutative diagram that ̄h(hflip·τ) =hflip ̄h(τ)and ̄h(hrot90·τ) =hrot90 ̄h(τ). The rest cases of flipping and rotating 90, 180, 270 degrees can be shown in thesame way. As shown, ̄h2(τ)is defined as ̄h2(τ) =ωw. The equivariance of ̄h2can be shown byhrot= (0,0, α)andτ= (q1r, q2r, ωw): ̄h(hrot·τ) = (q1r, q2r, ωw+α) = (0 ,0, α)(q1r, q2r, ωw) =hc ̄h(τ). (13)Equivariance for the case where h= (a, b, α )is then simply shown by dividing it into h=(0,0, α)(a, b,0): ̄h((a, b, α )·τ) = ̄h(((0,0, α)(a, b,0))·τ)= ̄h((0,0, α)·(a, b,0)·τ)= (0,0, α) ̄h((a, b,0)·τ)= (a, b,0)(0,0, α) ̄h(τ)= (a, b, α ) ̄h(τ). (14)Below equation is the formal definition of ̄hgiven a τ= (q1r, q2r, ωw): ̄h11(τ) =0,ifkπ2≤atan2 (q2r, q2r)−ωw<kπ2+π41,otherwise, ̄h21(τ) =0,if−π4≤atan2 (q2r, q2r)−ωw<π41,ifπ4≤atan2 (q2r, q2r)−ωw<3π42,if3π4≤atan2 (q2r, q2r)−ωw< πor−π≤atan2 (q2r, q2r)−ωw<−3π43,if−3π4≤atan2 (q2r, q2r)−ωw<−π4, ̄h2(τ) =ωw,(15)17Figure 14: Illustration of ̄hand its equivariance. hflip:= (1 ,0,0)is the flipping motion of themobile robot over the wall axis, and hrot90:= (0 ,1,0)is the rotation of the mobile robot 90 degreesaround the origin. It can be seen that hflip ̄h(τ) = (1 ,0,0)(0,1, ωw) = (1 ,mod(0−1,4), ωw) =(1,3, ωw) = ̄h(hflip·τ)andhrot90 ̄h(τ) = (0 ,1,0)(0,1, ωw) = (0 ,mod(1+1 ,4, ωw) = (0 ,2, ωw) = ̄h(hrot90·τ).where k∈ {− 2,−1,0,1}and(atan2 (q2r, q2r)−ωw)is assumed to be satisfying −π≤atan2 (q2r, q2r)−ωw< π.D.2.2 Experimental DetailsDatasets: For dataset generation, we first uniformly sample from the smallest space that can spanTby symmetry transformations, in which the robot’s initial position qrsatisfies 5≤ ||qr||<10and0≤atan2 (q2r, q11)< π/ 4, and the wall axis angle is 0. We collect trajectory data by generatingB-splines given via points labeled by humans. The B-splines are then reparameterized so that thetime length of the splines becomes 5 seconds, where the splines accelerate for the first second anddecelerate for the last second. We finally sample 201 points from the splines.For the training dataset, we have gathered 6 trajectories for 75 randomly given task parameters, atotal of 300 trajectories for training. 
D.2.2 Experimental Details

Datasets: For dataset generation, we first sample uniformly from the smallest space that can span T under the symmetry transformations, in which the robot's initial position $q_r$ satisfies $5 \leq \|q_r\| < 10$ and $0 \leq \mathrm{atan2}(q_r^2, q_r^1) < \pi/4$, and the wall axis angle is 0. We collect trajectory data by generating B-splines through via-points labeled by humans. The B-splines are then reparameterized so that their time length becomes 5 seconds, accelerating for the first second and decelerating for the last second; we finally sample 201 points from each spline.

For the training dataset, we gathered 6 trajectories for 75 randomly given task parameters, a total of 300 trajectories for training. For the validation dataset, we gathered one trajectory for each of 80 randomly given task parameters and randomly augmented them 100 times using symmetry transformations. For the test dataset, we gathered one trajectory for each of 40 randomly given task parameters and randomly augmented them 1,000 times. The validation and test datasets thus contain 8,000 and 40,000 trajectories, respectively.

Network Architectures and Training Details: A task parameter τ is represented as $(q_r^1, q_r^2, \omega_w)$, where $(q_r^1, q_r^2)$ is the mobile robot's initial position and $\omega_w$ is the wall axis angle. In practice, we use $(q_r^1, q_r^2, \cos\omega_w, \sin\omega_w) \in \mathbb{R}^4$ as the input parameter vector. Since T = 201, the output space is $\mathbb{R}^{402}$.

We use two-layer fully connected neural networks with 512 nodes for the MMPs and EMMPs, with ELU activation. TC-VAE's encoder includes a fully connected network and a temporal convolutional network; its decoder includes two fully connected networks for z and τ, a temporal convolutional network, and a fully connected network. All four fully connected networks in TC-VAE have two layers of size 434. The output sizes of the fully connected networks for z and τ in the decoder are 36 and 72, respectively. The two temporal convolutional networks in TC-VAE both have channel sizes (18, 36, 72) and kernel size 3. More details on the TC-VAE structure are in [37]. All models in the experiments have a similar number of parameters (≈ 9.4 × 10^5).

Success Criterion: We consider a trajectory successful if it is consistent with the task parameter and reaches the goal without colliding with the wall. More specifically, we check (i) collision avoidance, (ii) the robot's initial position, and (iii) the robot's final position. The trajectory satisfies (ii) and (iii) if its initial and final configurations are within a radius of 0.3 of the initial position specified in the task parameter and of the origin, respectively. The number of (z, τ) samples used for the success-rate calculation is 50,000.

D.2.3 Additional Results

Architecture Comparison: We compare MMPs and EMMPs implemented as fully connected autoencoders (AE), fully connected variational autoencoders (VAE), and variational autoencoders with the same structure as TC-VAE (TC-VAE). Table 3 shows the four evaluation metrics. Overall, as the success rates show, regardless of the network architecture and autoencoder type, EMMPs without regularization perform best and MMPs without regularization perform worst. Although EMMP (TC-VAE) excels in most measures (MI and NLL), its success rate (91.20%) is still lower than those of EMMP (AE) (92.40%) and EMMP (VAE) (95.72%). This is caused by the tendency of the TC-VAE architecture to violate the initial and final conditions in about 6% of trials, whereas EMMP (AE) and EMMP (VAE) almost never violate them (0%–0.01%).

Table 3: Reconstruction Error (RE), Mutual Information (MI), and Negative Log-Likelihood (NLL); the lower, the better. Success Rate (SR); the higher, the better.

    Method                   RE (↓)   MI (↓)   NLL (↓)       SR (↑)
    MMP (AE)                 0.223    0.487    1.49 × 10^4   50.08%
    MMP (VAE)                0.233    0.687    1.57 × 10^4   42.98%
    MMP (AE) + indep         0.225    0.329    1.48 × 10^4   52.39%
    MMP (VAE) + indep        0.229    0.652    1.56 × 10^4   44.28%
    EMMP (AE)                0.223    0.082    1.25 × 10^4   92.40%
    EMMP (AE) + indep        0.229    0.077    1.24 × 10^4   86.66%
    EMMP (VAE)               0.231    0.066    1.23 × 10^4   95.72%
    EMMP (VAE) + indep       0.225    0.167    1.28 × 10^4   82.02%
    EMMP (TC-VAE)            0.227    0.065    1.00 × 10^4   91.20%
    EMMP (TC-VAE) + indep    0.247    0.071    1.15 × 10^4   88.22%

Equivariance Comparison: Here we qualitatively compare the equivariance achieved by random data augmentation versus equivariant learning, by comparing MMP (AE) and EMMP (AE). Figure 15 shows trajectories generated from τ and h·τ with the same z. If the decoder f is equivariant, $f(z, h\cdot\tau)$ (blue lines) and $[h\cdot(f(z,\tau),\tau)]_x$ (grey lines) must overlap. As shown in Figure 15 Left, the trajectories of the MMP do not overlap, whereas the trajectories of the EMMP overlap perfectly (Figure 15 Right).

Figure 15: Equivariance comparison between MMP (AE) and EMMP (AE). If the decoder f is equivariant, $f(z, h\cdot\tau)$ (blue lines) and $[h\cdot(f(z,\tau),\tau)]_x$ (grey lines) must overlap. The decoder of the MMP is not equivariant, whereas the EMMP's decoder is.
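The overlap test visualized in Figure 15 amounts to a numerical equivariance check, sketched below; all callables (`decode`, the group actions, and the samplers) are assumed interfaces standing in for Eqs. (11)–(12) and the learned decoder.

```python
import numpy as np

def check_decoder_equivariance(decode, act_tau, act_x, sample_h, sample_tau, z,
                               n_trials=100, tol=1e-4):
    """Numerical version of the Figure 15 overlap test: an equivariant decoder
    satisfies f(z, h.tau) = [h.(f(z, tau), tau)]_x. `decode(z, tau)` returns a
    trajectory as an array; `act_tau(h, tau)` and `act_x(h, x, tau)` implement
    the group actions."""
    for _ in range(n_trials):
        h, tau = sample_h(), sample_tau()
        lhs = decode(z, act_tau(h, tau))
        rhs = act_x(h, decode(z, tau), tau)
        if np.max(np.abs(lhs - rhs)) > tol:
            return False
    return True
```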
D.3 Water-Pouring Experiment

D.3.1 Formulas and Proofs

Task Parameter Space: The input space is Q = R^2 × SE(3) × [0.2, 0.41]. A task parameter τ can be represented as ((q_c^1, q_c^2), (q_b^1, q_b^2, q_b^3, R_b), m_w), where (q_c^1, q_c^2) is the cup's position, (q_b^1, q_b^2, q_b^3, R_b) is the bottle's initial position and orientation, and m_w is the weight of the water in the bottle. To construct a compact T, we limit (q_c^1, q_c^2) to lie inside a square centered at the origin with edge length 0.5, i.e., −0.25 ≤ q_c^1, q_c^2 ≤ 0.25, and limit the distance between (q_c^1, q_c^2) and (q_b^1, q_b^2) to satisfy 0.3 ≤ ||(q_b^1 − q_c^1, q_b^2 − q_c^2)||_2 ≤ 0.78. Since the bottle is upright on the table, q_b^3 is a constant.

Trajectory Space and Distance Measure: We set the length of trajectories T to 480, which makes the trajectory space X = SE(3)^480. Given x_1 = {x_{i1}}_{i=1}^480 and x_2 = {x_{i2}}_{i=1}^480, where each x_{ij} can be represented by (R_{ij} ∈ SO(3), p_{ij} ∈ R^3), the distance measure on X is defined as:

d_X(x_1, x_2) := sqrt( Σ_i ( ||R_{i1}^{−1} ∗ R_{i2} − I||_F^2 + γ ||p_{i1} − p_{i2}||_2^2 ) ),   (16)

where γ = 5 is a constant.

Table 3: Reconstruction Error (RE), Mutual Information (MI), and Negative Log-Likelihood (NLL); lower is better. Success Rate (SR); higher is better.

Method                   RE (↓)   MI (↓)   NLL (↓)      SR (↑)
MMP (AE)                 0.223    0.487    1.49×10^4    50.08%
MMP (VAE)                0.233    0.687    1.57×10^4    42.98%
MMP (AE) + indep         0.225    0.329    1.48×10^4    52.39%
MMP (VAE) + indep        0.229    0.652    1.56×10^4    44.28%
EMMP (AE)                0.223    0.082    1.25×10^4    92.40%
EMMP (AE) + indep        0.229    0.077    1.24×10^4    86.66%
EMMP (VAE)               0.231    0.066    1.23×10^4    95.72%
EMMP (VAE) + indep       0.225    0.167    1.28×10^4    82.02%
EMMP (TC-VAE)            0.227    0.065    1.00×10^4    91.20%
EMMP (TC-VAE) + indep    0.247    0.071    1.15×10^4    88.22%

Figure 15: Equivariance comparison between MMP (AE) and EMMP (AE). If the decoder f is equivariant, f(z, h·τ) (blue lines) and [h·(f(z, τ), τ)]_x (grey lines) must overlap. It can be seen that the decoder of the MMP is not equivariant, whereas the EMMP's decoder is equivariant.

Group Operations: The symmetry group is H = R^2 × SO(2) × SO(2), whose factors correspond to translation of the cup and the bottle, rotation of the bottle around the cup, and rotation of the bottle around itself. Given two group elements (a, b, R_α, R_β), (c, d, R_γ, R_δ) ∈ R^2 × SO(2) × SO(2), the group operation is defined as follows:

(a, b, R_α, R_β)(c, d, R_γ, R_δ) = (a + c, b + d, R_α ∗ R_γ, R_β ∗ R_δ).   (17)

Group Actions: The group action of h ∈ H proceeds as follows: (i) translate the cup and the bottle, (ii) rotate the bottle around the cup, and (iii) rotate the bottle around itself. Given a task parameter τ = ((q_c^1, q_c^2), (q_b^1, q_b^2, q_b^3, R_b), m_w) and a group element h = (a, b, R_α, R_β), the group action h·τ is defined as:

h·τ = ((q_c^1 + a, q_c^2 + b), ((q_c^1 + a, q_c^2 + b, 0) + R_α ∗ (q_b^1 − q_c^1, q_b^2 − q_c^2, q_b^3), R_α ∗ R_b ∗ R_β), m_w).   (18)

Group-Equivariant Map h̄: Given a task parameter τ = ((q_c^1, q_c^2), (q_b^1, q_b^2, q_b^3, R_b), m_w), h̄(τ) = (h̄_1(τ), h̄_2(τ), h̄_3(τ)) is defined as follows:

h̄_1(τ) = (q_c^1, q_c^2) ∈ R^2,   (19)
h̄_2(τ) = Rot(ẑ, θ_1),   (20)
h̄_3(τ) = Rot(ẑ, θ_2 − θ_1),   (21)

where θ_1 := atan2(q_b^2 − q_c^2, q_b^1 − q_c^1), θ_2 := atan2(x̂_b^2, x̂_b^1), and x̂_b denotes the first column of R_b.

Given an arbitrary h = (a, b, R_α, R_β), where R_α = Rot(ẑ, α) and R_β = Rot(ẑ, β), the equivariance of h̄ is shown by the following derivation:

h̄(h·τ) = h̄((q_c^1 + a, q_c^2 + b), ((q_c^1 + a, q_c^2 + b, 0) + R_α ∗ (q_b^1 − q_c^1, q_b^2 − q_c^2, q_b^3), R_α ∗ R_b ∗ R_β), m_w)
      = ((q_c^1 + a, q_c^2 + b), Rot(ẑ, α + θ_1), Rot(ẑ, θ_2 − θ_1 + β))
      = ((q_c^1 + a, q_c^2 + b), R_α ∗ Rot(ẑ, θ_1), Rot(ẑ, θ_2 − θ_1) ∗ R_β)
      = ((q_c^1 + a, q_c^2 + b), R_α ∗ Rot(ẑ, θ_1), R_β ∗ Rot(ẑ, θ_2 − θ_1))
      = (a, b, R_α, R_β)((q_c^1, q_c^2), Rot(ẑ, θ_1), Rot(ẑ, θ_2 − θ_1))
      = h h̄(τ).   (22)
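The water-pouring action of Eq. (18) and the equivariant map of Eqs. (19)-(21) can likewise be sketched in NumPy. Rotations are stored as angles about ẑ for brevity; all names are illustrative, not from the paper's code.

import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def act_on_task(h, tau):
    # Eq. (18): translate cup and bottle, rotate the bottle about the cup,
    # then rotate the bottle about its own axis.
    a, b, alpha, beta = h                 # SO(2) parts stored as angles
    qc, (pb, Rb), mw = tau                # qc: (2,), pb: (3,), Rb: (3, 3)
    qc_new = qc + np.array([a, b])
    offset = pb - np.array([qc[0], qc[1], 0.0])
    pb_new = np.array([qc_new[0], qc_new[1], 0.0]) + rot_z(alpha) @ offset
    Rb_new = rot_z(alpha) @ Rb @ rot_z(beta)
    return (qc_new, (pb_new, Rb_new), mw)

def h_bar(tau):
    # Eqs. (19)-(21): the group element mapping the canonical task
    # parameter to tau, returned as (a, b, theta_1, theta_2 - theta_1).
    qc, (pb, Rb), mw = tau
    theta1 = np.arctan2(pb[1] - qc[1], pb[0] - qc[0])
    theta2 = np.arctan2(Rb[1, 0], Rb[0, 0])   # heading of the bottle x-axis
    return (qc[0], qc[1], theta1, theta2 - theta1)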
D.3.2 Experimental Details

Datasets: The water-pouring demonstration trajectories are collected by recording videos of a human demonstrator's water-pouring motions for 8 seconds (intended as 3.5 seconds of reaching motion and 4.5 seconds of pouring motion) at 60 fps, with three AprilTags, resulting in 480 frames [49].

Figure 16: Task parameters for demonstration. The cup is always at the origin, and the bottle is at (0, r, 0.145) for r ∈ {0.3, 0.38, 0.54, 0.62, 0.78}. The mass of water in the bottle is m_w ∈ {0.2, 0.27, 0.305, 0.34, 0.41}.

We then extract the SE(3) trajectories of the bottle, whose length is T = 480. We perform trajectory smoothing and transform the task parameters and trajectories using group actions of H so that the cup's position is at the origin, the bottle is initially in the x̂_s-direction from the cup, and x̂_b is aligned with x̂_s. The resulting task parameters have the form ((0, 0), (r, 0, h, R), m_w). Assuming the bottle is initially upright on the table, h = 0.145 is constant and R = I. We gather 5 trajectories for each of 7 values of r and 5 values of m_w, a total of 175 trajectories. As shown in Figure 16 (left), we choose r every 8 cm from 30 cm to 78 cm, i.e., r ∈ {0.3, 0.38, 0.46, 0.54, 0.62, 0.7, 0.78}, and as shown in Figure 16 (right), we choose m_w ∈ {0.2, 0.27, 0.305, 0.34, 0.41}. The dimensions of the cup and the bottle and the position of the bottle frame are as illustrated in Figure 16 (right). The five demonstrations for each task parameter are intended to pour water gradually from the left side and the right side of the cup.

We use the 125 trajectories with r ∈ {0.3, 0.38, 0.54, 0.62, 0.78} as the training dataset, and we randomly split the remaining 50 trajectories in half for the validation and test datasets. We randomly augment the datasets 100 and 1,000 times for validation and test, respectively, resulting in 2,500 validation trajectories and 25,000 test trajectories.

Network Architectures and Training Details: The space of task parameters is R^2 × SE(3) × [0.2, 0.41], where a task parameter can be represented in the form ((q_c^1, q_c^2), (q_b^1, q_b^2, q_b^3, R_b), m_w). Assuming that the bottle is initially upright on the table, q_b^3 is constant and R_b can be represented as Rot(ẑ, θ_b); in the practical implementation we therefore use (q_c^1, q_c^2, q_b^1, q_b^2, m_w, cos θ_b, sin θ_b) ∈ R^7 as the input to the decoder.

The output of the model is an element of SE(3)^480, which is not a vector space. A naive parameterization of an SE(3) element (e.g., as a 12-dimensional vector) does not enforce the model outputs to satisfy the SE(3) constraints. To constrain the model output space to SE(3)^480, we set all model output sizes to 480 × 6 = 2880 and add an additional layer, Vec2SE3, at the end of every decoder. Given a vector v = (v_1, ..., v_6) ∈ R^6, Vec2SE3 is defined as:

Vec2SE3: v ↦ [ exp([[0, −v_3, v_2], [v_3, 0, −v_1], [−v_2, v_1, 0]])  (v_4, v_5, v_6)^T ; 0 0 0 1 ] ∈ SE(3).

We finally vectorize the first three rows of the SE(3) matrix, since the last row is constant at (0, 0, 0, 1).

We use two-layer fully connected neural networks with 168 nodes for the EMMP, with ELU as the activation function. TC-VAE's encoder includes a fully connected network and a temporal convolutional network, and its decoder includes two fully connected networks for z and τ, a temporal convolutional network, and a fully connected network. All four fully connected networks used in TC-VAE have two layers of size 512. The output sizes of the fully connected networks for z and τ in the decoder are 40 and 80, respectively. The two temporal convolutional networks in TC-VAE both have channel sizes (36, 72, 144) and kernel size 3. More details on the structure of TC-VAE are in [37].
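The Vec2SE3 layer above can be sketched directly from its definition. Here SciPy's matrix exponential stands in for the differentiable exponential map (e.g., Rodrigues' formula) that an actual network layer would use; the function name is ours.

import numpy as np
from scipy.linalg import expm

def vec2se3(v):
    # v: (6,) array -> 4x4 homogeneous SE(3) matrix. (v1, v2, v3) give the
    # skew-symmetric matrix whose exponential is the rotation; (v4, v5, v6)
    # is the translation.
    skew = np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])
    T = np.eye(4)
    T[:3, :3] = expm(skew)
    T[:3, 3] = v[3:]
    return T

# The (480, 6) decoder output is mapped row-wise, and the first three rows
# of each 4x4 matrix are flattened back into the output vector:
# traj = np.stack([vec2se3(v)[:3].reshape(-1) for v in outputs])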
Figure 17: Graphs of bottle angle vs. time. The pouring angle ω_b is the angle between the bottle axis ẑ_b and the xy-plane. The orange lines are pouring angles for the m_w = 0.41 case, and the blue lines are pouring angles for the m_w = 0.20 case. It can be observed that the pouring angle decreases as the mass of water increases.

All models in the experiments have a similar number of parameters: the EMMP contains 1.51 × 10^6 parameters and TC-VAE contains 1.56 × 10^6 parameters.

Task Parameters for the Success Rate Measure: We sample five feasible trajectories for each of four task parameters. Across all four task parameters, the cup's position is (q_c^1, q_c^2) = (−0.2, 0), the bottle is initially in the y-direction from the cup, i.e., q_b^1 = −0.2, and the bottle is initially aligned with the base frame, i.e., R_b = I. The remaining parts (q_b^2, m_w) of the four task parameters are (0.35, 0.25), (0.45, 0.275), (0.40, 0.35), and (0.55, 0.400). These task parameters are picked within the robot's workspace.

Obstacle Avoidance Algorithm: Given a task parameter τ and an obstacle, the obstacle avoidance task is performed as follows: (i) we sample z from p(z), (ii) generate the bottle's trajectories via f(z, τ), (iii) check collisions between the bottle and the obstacle and keep the collision-free trajectories, and (iv) solve the inverse kinematics problem of the robot and choose a trajectory that is feasible and also collision-free.

We check collisions between the bottle and the obstacle and between the robot and the obstacle by converting the meshes of the bottle and robot to point clouds and parameterizing the obstacle as a superquadric, which represents objects with a signed distance function [57]. As signed distance functions, superquadrics make it easy to check whether a point is inside or outside them. We consider a trajectory of a point cloud and a superquadric to be collision-free if none of the points in the point cloud gets inside the superquadric at any timestep, and consider them to collide otherwise.
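The point-in-superquadric test underlying this collision check can be sketched with the standard superquadric inside-outside function, where F(x) < 1 marks an interior point; the exact parameterization in [57] may differ, and all names here are illustrative.

import numpy as np

def inside_superquadric(points, scale, e1, e2, pose=np.eye(4)):
    # points: (N, 3) world-frame point cloud; scale = (a1, a2, a3) are the
    # superquadric semi-axes; pose: obstacle pose as a 4x4 SE(3) matrix.
    # Returns True if any point lies inside the superquadric.
    R, t = pose[:3, :3], pose[:3, 3]
    local = (points - t) @ R            # transform into the obstacle frame
    x, y, z = (np.abs(local) / np.asarray(scale)).T
    f = (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)
    return bool(np.any(f < 1.0))

def trajectory_collision_free(point_clouds, scale, e1, e2, pose=np.eye(4)):
    # point_clouds: iterable of (N, 3) arrays, one per timestep.
    return not any(inside_superquadric(pc, scale, e1, e2, pose)
                   for pc in point_clouds)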
D.3.3 Additional Results

Water-Pouring Performance Comparison: The motion of the bottle pouring water near the cup is highly dependent on the amount of water in the bottle. A bottle with little water needs to be tilted more than a bottle that is almost full to pour the same amount of water into the cup. The amount of tilting of the bottle can be captured by the angle between its axis and the table, which we denote the bottle angle.

Figure 18: f(z, τ) (blue), f(z, h·τ) (orange), and [h·(f(z, τ), τ)]_x (apricot). [h·(f(z, τ), τ)]_x and f(z, h·τ) should overlap if the trajectories are generated equivariantly with the task parameters.

Figure 17 illustrates the bottle angle mean and standard deviation for the demonstration trajectories of the training dataset (left), trajectories generated by EMMP + indep (middle), and trajectories generated by TC-VAE (right), with m_w = 200 g (blue) and m_w = 410 g (orange). We randomly augment 50 task parameters from the validation and test datasets 20 times, and pick 1,000 task parameters for m_w = 200 g and 1,000 task parameters for m_w = 410 g. We generate 1,000 trajectories for both cases using z sampled from p(z).

Figure 17 (left) shows that as the mass of water increases, the pouring angle increases, which means the bottle is tilted less. It can be seen that the minimum mean angles of the EMMP for m_w = 200 g and m_w = 410 g (−1.6 degrees and −9.5 degrees) closely match those of the demonstration trajectories (−1.5 degrees and −8.5 degrees). On the other hand, the minimum mean angles of TC-VAE (5.6 degrees and −3.1 degrees) are far from those of the demonstration trajectories.

Equivariance Comparison: For a motion manifold primitive framework to be equivariant, decoded trajectories must transform equivariantly as the task parameters undergo a symmetry transformation. We qualitatively compare the equivariance of the random data augmentation method and the equivariant learning method by comparing TC-VAE and EMMP + indep.

Figure 18 (left) visualizes two trajectories generated from τ and h·τ, where h is the rotation of the bottle around the cup and around itself, without translation. If the model is equivariant, the orange-colored bottle and the apricot-colored bottle in the upper left corner should overlap. This condition is not satisfied for TC-VAE, whereas the orange trajectory of the EMMP is equivariantly transformed with τ.
W8MjsxHrDpL | Synthesizing Navigation Abstractions for Planning with Portable Manipulation Skills
Eric Rosen∗, Steve James†, Sergio Orozco∗, Vedant Gupta∗, Max Merlin∗, Stefanie Tellex∗, George Konidaris∗

Abstract: We address the problem of efficiently learning high-level abstractions for task-level robot planning. Existing approaches require large amounts of data and fail to generalize learned abstractions to new environments. To address this, we propose to exploit the independence between spatial and non-spatial state variables in the preconditions of manipulation and navigation skills, mirroring the manipulation-navigation split in robotics research. Given a collection of portable manipulation abstractions (i.e., object-centric manipulation skills paired with matching symbolic representations), we derive an algorithm to automatically generate navigation abstractions that support mobile manipulation planning in a novel environment. We apply our approach to simulated data in AI2Thor and on real robot hardware with a coffee preparation task, efficiently generating plannable representations for mobile manipulators in just a few minutes of robot time, significantly outperforming state-of-the-art baselines.

Keywords: Learning Abstractions, Mobile Manipulation

1 Introduction

Planning for mobile manipulation is difficult because of its long-horizon nature. There are two approaches to addressing this difficulty: subtask decomposition and structural decomposition. The former approach decomposes the problem into smaller subtasks (e.g., hierarchical planning [1, 2]) and leverages abstractions in two forms: action abstractions, also called skills, which package motor behaviors into a single invokable action, and perceptual abstractions, typically represented as grounded symbols, which compactly represent the relevant aspects of task state. Learned abstractions can address complex planning problems [3], but existing approaches are sample inefficient because they do not exploit structure present in the robot and the world. The second approach, structural decomposition, aims to design algorithms that do just that. Navigation stacks typically focus on building maps and localizing a robot in a map [4, 5], and on using those maps to navigate to a goal via path planning [6]. Research in robotic manipulation structures the task of effectively interacting with objects [7] into component algorithms such as object recognition [8], interactive perception [9], grasp synthesis [10], kinematic motion planning [11], and learning for manipulation [12]. This approach can produce algorithms that generate useful behavior while avoiding learning entirely.

We propose to combine these two complementary approaches by exploiting structural assumptions to efficiently learn high-level abstractions. We begin by splitting abstractions to do with manipulation from those to do with navigation. Manipulation abstractions are expensive to learn but are typically object-centric and therefore portable, while navigation abstractions are not portable: how the robot should abstract its map pose and navigate between locations depends on the specifics of a single scene. Efficiently learning the navigation components of the abstraction, which must be re-learned for each task, is thus critical.
We therefore assume a given (pre-learned or hand-constructed) set of portable manipulation abstractions (both skills and symbols), and consider how to efficiently generate the navigation abstractions that support planning with them in a novel environment.

∗Brown University  †University of the Witwatersrand
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

(a) An Action-Oriented Semantic Map for a coffee preparation task. (b) Spot executing portable manipulation skills in the coffee preparation task. Given a new environment with these objects, our approach efficiently constructs the navigation abstractions, both action and state, to support planning using these skills.

Figure 1: An AOSM for a coffee preparation task. (a) The underlying semantic map consists of a 3D point cloud of the scene (black points) along with the detected poses and attributes of objects. (b) Given a set of portable manipulation skills (from top left, clockwise: pouring water, picking up a cup, placing a cup, and pushing a brewing button), an AOSM also includes a distribution over poses where the robot can execute each skill (visualized by the colored areas in map (a)).

Our key insight is that spatial and non-spatial state variables typically contribute independently to whether a motor skill can be executed, and that under those conditions a unique data structure, an Action-Oriented Semantic Map (AOSM) [13] (Figure 1a), which encodes the spatial locations from which manipulation skills can be executed, is necessary and sufficient to generate all the navigation abstractions required to support manipulation planning. We provide an algorithm to autonomously and efficiently construct an AOSM from a given set of manipulation skills using well-established mapping and path planning algorithms; a robot can thereby complete its abstract representation of a new task by constructing its navigation components in just a few minutes of robot time. We evaluate our approach in both simulation (using AI2Thor [14]) and on real robot hardware (a Boston Dynamics Spot). In simulation, our approach decreases the number of interactions required to learn navigation abstractions by an order of magnitude compared to the state of the art, and enables the robot to transfer learned symbols to new environments. On real robot hardware, our system generates a representation of a coffee-making task for two different kitchen environments in a few minutes.

2 Background

We adopt the Markov Decision Process (MDP) formalism for modeling agent decision-making. MDPs model the agent's environment as a tuple (S, A, T, R, γ), where S is the set of states, A is the set of low-level actions, T is a transition function describing the environment dynamics, T(s′ | s, a), R is the reward function that expresses real-valued rewards, R(s, a, s′), and γ ∈ [0, 1] is the discount factor. A policy π(a | s) determines the probability of an action a being executed in state s. Solving an MDP is equivalent to finding the policy that maximizes the sum of discounted future rewards: J_π(s) = E[ Σ_{i=0}^{∞} γ^i R(s_i, a_i, s_{i+1}) ].

Abstract Representations: An abstract action set can reduce the problem diameter of solving an MDP by leaving lower-level controllers to resolve repeated subtasks. The options framework [15] is the most popular abstract action framework.
An option o is a tuple with three components: an option policy π_o; an initiation set I_o ⊆ S that identifies the low-level states from which the option policy can be executed; and a termination condition β_o(s) → [0, 1] that determines which states cease policy execution.

An advantage of using abstract actions (or motor skills) is that they need not necessarily be functions of the full problem state. For example, a motor skill for walking can use just the robot's local perception, rather than an entire map. In such cases we model the option components as depending on some observation space D obtained using a sensor model φ(S) → D, and refer to the option as being portable, since it can be reused in several places in a task, and in new tasks [16, 17].

We are interested in learning an abstract representation that facilitates planning. A probabilistic plan p is a sequence of (potentially abstract) actions to execute from states sampled from a distribution Z: p_Z = {o_1, ..., o_{p_n}}. A suitable representation for planning must enable the agent to correctly evaluate the probability of a plan. Konidaris et al. [3] proved that it is necessary and sufficient to learn when an option can be executed (known as the precondition) and what the result of executing the option is (known as the image operator). Computing the image operator for arbitrary options is challenging; however, it is tractable for the subclass of subgoal options [18]. A subgoal option's resulting state distribution after executing the policy is independent of the starting state, so Pr(s′ | o, s) = Pr(s′ | o). Therefore, computing the entire image operator can be replaced by representing the effect of executing the option (the distribution over states the agent will be in after executing the option), Effect(o). An option that only modifies a subset of state variables (its factors) induces an abstract state space expressible using a classical planning representation like PDDL [3]. In this formulation, preconditions and effects can be represented by propositional symbols (which constitute an abstract state space), and actions are expressed as operators over those symbols. With an object-centric state space, the learned symbols can instead be predicates parameterized by object types [19].

A two-stage approach is used to learn a portable symbolic vocabulary and generate a forward model for a set of portable skills. First, symbols for the portable options are learned over the observation space D in a training environment; then the portable options are partitioned in a test environment to make them subgoal in both S and D. We defer the details of this process to James et al. [17, 20], and note that we use a similar approach for constructing our portable symbolic vocabulary. However, this formulation can take several hours and over a hundred skill executions to learn a representation for a simple task [3, 21, 22]; our main contribution is defining and leveraging the spatial independence property to make learning abstractions much more efficient.
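To summarize the options machinery above in code: an option bundles a policy with an initiation classifier and a termination condition, here over the egocentric observation space D for portability. This is an illustrative sketch; the names and the environment interface are assumptions, not from [15, 17].

from dataclasses import dataclass
from typing import Any, Callable

Observation = Any  # egocentric observation d = phi(s)

@dataclass
class PortableOption:
    policy: Callable[[Observation], Any]         # pi_o(d) -> action
    initiation: Callable[[Observation], bool]    # is d in I_o?
    termination: Callable[[Observation], float]  # beta_o(d) in [0, 1]

def execute(option, env, obs, rng):
    # Run the option until beta_o terminates it. For subgoal options the
    # resulting state distribution is start-independent, so the outcome can
    # be modeled as Effect(o). `env.step` is an assumed interface that
    # returns the next observation.
    assert option.initiation(obs), "option not executable here"
    while rng.random() >= option.termination(obs):
        obs = env.step(option.policy(obs))
    return obs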
Related Work: Our work focuses on learning state abstractions that enable long-horizon task planning by leveraging manipulation skills and semantic maps, similar to Task and Motion Planning (TAMP) frameworks. However, our work differs from TAMP in the assumptions we make: rather than using motion planning to generate manipulation behaviors, we treat manipulation skills as black-box skills that can be implemented with or without motion planning (e.g., learned motor policies [23]), and only require a model of the environment to support path planning for locomotion, which is readily accessible using off-the-shelf SLAM.

TAMP solutions integrate high-level task planning with low-level continuous motion planning to exploit a planning hierarchy where different specialized planning and learning algorithms can exploit the structure present at each level [24] and across modes [25]. However, whereas standard TAMP approaches assume access to a given state abstraction that is sound for a particular task [26, 24], we formalize an independence property between spatial and non-spatial state variables to more efficiently learn a sufficient representation for planning with given manipulation skills. Most similar to our work are TAMP approaches that leverage semantic maps for improving task and motion planning. Galindo et al. [27] investigate how semantic maps can act as a hybrid knowledge base for TAMP in the context of navigation. That work also uses a semantic map to improve task planning, but only extracts additional information from the semantic map, whereas we identify a specific augmentation to a semantic map that is provably sufficient for supporting manipulation planning. Our work is also related to approaches that leverage Large Language Models (LLMs) for task planning. These approaches [28, 29] generally assume the existence of a preprocessed map that enables navigation to support manipulation. Our work formalizes this data structure and lays the theoretical foundations for how it can be used not just in task planning with LLMs, but also for learning symbols for task planning.

3 Exploiting Spatial Independence for Learning Abstractions

Problem Definition: We are interested in the problem of a robot that must navigate an environment and manipulate objects to achieve a goal. To this end, we represent the decision problem as an MDP and factor the state s ∈ S into the state of the robot S_r and the state of the environment S_e: S = S_r × S_e. Furthermore, the state of the environment can be factored into a discrete set of q objects (or entities) the robot may manipulate, S_Ω = Ω_1 × ... × Ω_q, and a map of the environment m ∈ M, so S_e = M × S_Ω. This structured representation of the environment is often called a semantic map [14]. Since the robot and all of the objects exist in a physical space, they each have a pose in the map. Therefore, we factor the state of the robot S_r into a pose S_b in the map and any other information describing the state of the robot S_r′: S_r = S_b × S_r′, and similarly for each object Ω_i ∈ Ω: Ω_i = Ω_i^b × Ω_i′. The task-specific semantic map defines a constraint function on the feasible poses of the robot, and can be used in conjunction with a path planner N(s_b, s_b′) to generate trajectories through the space of robot poses S_b from a start state s_b to a set of goal states s_b′ (i.e., to locomote the robot around the scene).

Given the above setting, our problem is formalized as follows.
For a given set of portable manipulation options O and a semantic map S_e, we must take plans that consist only of manipulation actions (called manipulation-only plans, p_O = {o_1, ..., o_{p_o}}, ∀i ∈ {1, ..., p_o}, o_i ∈ O, where p_o is the length of the plan p_O), and learn a portable abstract representation that supports generating task-specific navigation behaviors based on S_e that can be interleaved into the manipulation-only plan to make its probability of success non-zero. Note that even though the state space is fully observable, it crucially does not include information about which configurations in space afford manipulation, which is what our approach learns.

Figure 2: An example of a robot iteratively constructing an AOSM in a novel environment. (Left): The robot has a partial map of the environment and has not seen any objects. (Middle): The robot moves around to construct more of the map, and the vision model identifies a cup (position visualized as a red circle). (Right): The robot uses a learned navigation symbol to sample a pose for picking up the cup, and then navigates to that pose in order to execute the manipulation skill.

Approach: Our approach is based on autonomously constructing an Action-Oriented Semantic Map (AOSM) [13] and using it for task planning. Formally, an AOSM (O, S_e, (V, E)) is a data structure where O is a set of k portable manipulation options, S_e is a semantic map, and (V, E) is a topological graph. The topological graph (V, E) is an undirected graph that contains k nodes V = {v_1, ..., v_k}, where each node v_j represents a region of configuration space for the base of the mobile manipulator (i.e., each node v_j represents a set of poses in the semantic map). Node v_j corresponds to the set of poses in the semantic map that have a non-zero probability of being in the initiation set of option o_j, so v_j = {p ∈ I_{o_j} | p ∈ m}. The node v_j is also referred to as a navigation symbol σ_{o_j} for the option o_j, since a symbol is a probabilistic binary classifier for testing membership of a set, and this symbol depends only on whether the robot's configuration is within a specific region of space that is relevant for navigation (discussed in more detail below). An edge e = (v_a, v_b) ∈ E represents that a motion planner N(v_a, v_b) can be used to successfully navigate from the set of poses in v_a to the set of poses represented by v_b. AOSMs were introduced in Rosen et al. [13], where they were hand-crafted by a user. Here, we assume access to a set of portable manipulation skills O and the semantic map S_e, and we provide a novel algorithm for learning the topological graph (V, E) consisting of the navigation symbols and the edge connectivity between them, which together define an AOSM.

When a robot has access to an AOSM, it can sample poses in the map that enable it to execute its manipulation skills (Figure 2). When the navigation symbols are learned in an object-centric spatial frame (i.e., the regions of space are in an object-centric frame instead of the map frame), they can be ported to new environments by grounding them to global poses based on the known poses of the objects in the semantic map S_e.
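For concreteness, the AOSM (O, S_e, (V, E)) described above can be encoded as a small data structure in which each node stores a sampler over base poses with non-zero initiation probability for one option, and edges record planner connectivity. A hedged sketch with illustrative names:

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

Pose = Tuple[float, float, float]  # (x, y, yaw) of the mobile base

@dataclass
class AOSM:
    semantic_map: Any                                   # S_e
    nav_symbols: Dict[str, Callable[[], Pose]] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)

    def add_symbol(self, option_name: str, sampler: Callable[[], Pose]):
        # sigma_o: a sampler over {p in map | p has non-zero probability
        # of being in I_o}.
        self.nav_symbols[option_name] = sampler

    def sample_pose(self, option_name: str) -> Pose:
        return self.nav_symbols[option_name]()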
Once an AOSM has been constructed, given a manipulation-only plan p_O = {o_1, ..., o_{p_o}}, ∀i ∈ {1, ..., p_o}, o_i ∈ O, a starting base pose S_b^0, and a path planner N(s_b, s_b′), we can use the AOSM to sample poses from the navigation preconditions of each manipulation option, {S_b^1, ..., S_b^{p_o}}, ∀S_b^i ∼ σ_{o_i}, and leverage the path planner to synthesize a sequence of locomotion path plans p_N = {n_1, ..., n_{p_o}}, n_i ∼ N(S_b^{i−1}, S_b^i), that can be interleaved into the manipulation plan p_O: p_O′ = {n_1, o_1, n_2, o_2, ..., n_{p_o}, o_{p_o}}. This augmented plan has the additional actions required to make the manipulation-only plan feasible in the specific map the robot finds itself in. An AOSM can only be used when it is possible to decompose initiation sets into navigation and manipulation preconditions. We now show that this assumes a crucial independence property of the factors of the initiation set, which we formally describe in the rest of this section.

First, note that we can define a navigation symbol as a symbol σ whose factors (the set of state variables the grounding classifier depends on) are the robot's mobile base state variables S_b, i.e., Factors(σ) = S_b (we call this type of factor a spatial factor). To determine whether a state variable is in the factor associated with the initiation set of a manipulation option (i.e., whether the state variable is a defining state variable for that set of states), we can use the notion of projection. The projection of a list of state variables v out of a set of states X is defined as Proj(X, v) = {s | ∃x ∈ X, s[i] = x[i], ∀i ∉ v}, which removes any restrictions on the values of the state variables v for the states in X. If projecting a state variable out of a set of states changes the set, we say that the state variable is a defining state variable for that set (since deciding whether a state is a member of X depends on a restricted value of that variable). If that set of states is the initiation set I_o of an option o, then that collection of state variables is by definition the factors of I_o, Factors(I_o). In this case, the set of states describing the initiation set can be described by the intersection of independent state sets [3]. Formally, we say a factor f_s is independent in the initiation set I_o when: I_o = Proj(I_o, Factors(I_o)\f_s) ∩ Proj(I_o, f_s). With this definition, we now define the spatial independence property:

Definition 3.1 (Spatial Independence). The initiation set I_o for an option o has the spatial independence property if:

I_o = Proj(I_o, Factors(I_o)\S_b) ∩ Proj(I_o, S_b).   (1)

Note that when learning a probabilistic symbolic representation, the sets are replaced with distributions and the intersection is replaced with multiplication, and therefore the independence property is defined exactly as conditional independence. When an option's initiation set has the spatial independence property, we can construct an independent symbol to represent Proj(I_o, Factors(I_o)\S_b), which by definition is a navigation symbol since it depends only on S_b. Intuitively, this projection represents the set of base locations the robot must be in to successfully execute the option o without regard to the state of the rest of the world.³

³We note that this assumption may be violated in realistic domains (for example, the location of objects may constrain the locations from which the robot can execute a manipulation option), but we later discuss how we can still use an AOSM to synthesize effective navigation abstractions even when this assumption is not met.
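The plan-augmentation step can then be sketched as follows, assuming the AOSM structure sketched earlier and an off-the-shelf path planner; navigating to each sampled pose before the corresponding manipulation skill is one natural ordering of the interleaving described above.

def augment_plan(manip_plan, aosm, path_planner, start_pose):
    # manip_plan: list of option names [o_1, ..., o_p]. Returns an
    # interleaved plan [n_1, o_1, n_2, o_2, ...] of navigation and
    # manipulation actions. `path_planner(p0, p1)` is an assumed
    # off-the-shelf planner interface.
    plan, pose = [], start_pose
    for option_name in manip_plan:
        goal_pose = aosm.sample_pose(option_name)  # S_b^i ~ sigma_{o_i}
        plan.append(("navigate", path_planner(pose, goal_pose)))
        plan.append(("manipulate", option_name))
        pose = goal_pose
    return plan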
Figure 3: Results for our experiments on the transferability of learned abstractions (left and right are the single-scene and multi-scene settings, respectively). We report the cumulative number of sampled locations from which manipulation actions are attempted against the average cumulative number of times the agent has successfully completed the plan (bars are standard error across 5 seeds).

Since an AOSM captures the navigation symbols, when the spatial independence property holds for an option, an AOSM is a necessary and sufficient characterization of the spatial components of the initiation set. We leave the details of the formal conditions under which we can resolve a manipulation option o for some set of starting states Z to the supplementary material. With an AOSM, given a manipulation-only plan, we can synthesize the requisite navigation actions to interleave into the plan and support execution. To evaluate the probability of the entire plan, we first learn a portable symbolic vocabulary similar to James et al. [17] (described in Section 2), but do not include spatial information about the objects or robot in the observations, and then separately learn navigation symbols using the spatial data in an object-centric frame. With the portable symbolic vocabulary, manipulation-only plans can be generated, and with the addition of the navigation symbols grounded for a specific environment, we can evaluate the probability of a manipulation-only plan with navigation actions interleaved in.

4 Simulation and Hardware Experiments

We test the hypothesis that exploiting the spatial independence property of manipulation options increases the sample efficiency and transferability of learned abstractions. First, we investigate the effect of leveraging the spatial independence assumption on the number of samples required to learn a useful set of abstractions for planning. Second, we evaluate the effectiveness of transferring abstractions from a training environment to a novel environment. Together, these experiments highlight how AOSMs can be used to efficiently learn and transfer abstractions with only a small number of interactions with the environment.

Coffee Preparation Task: We conduct both of our experiments in a simulated mobile manipulation domain, AI2Thor [14], using a coffee preparation task in 15 virtual kitchens. In this task the robot must navigate through a large simulated kitchen and manipulate objects; to successfully make coffee, it must pick up a cup, bring it to a coffee machine, turn on the coffee machine to make the beverage, and then pick up the prepared coffee mug. We assume the robot has access to a set of portable manipulation skills (PickUp(Mug), ToggleOn(CoffeeMachine), PutIn(Mug,CoffeeMachine), MakeCoffee(Mug,CoffeeMachine)) that can be reused across different kitchen scenes, but the agent must construct navigation abstractions for each scene. AI2Thor provides semantic maps of each scene, which include a 2D occupancy grid of the environment, the number of objects in the environment, their object types and attributes, and their poses. We use 77 different objects, each characterized by a vector of length 108.
We also include the 3D position and 1D yaw of the robot's base (4 additional state variables), resulting in a low-level observation vector of 8320 elements.

Simulation Experiment: Spatial Independence for Learning Symbols. In the first experiment, our goal is to evaluate how leveraging the spatial independence assumption affects the number of samples required to construct a symbolic vocabulary that supports planning. We therefore evaluate a state-of-the-art baseline [19] for learning symbols that does not incorporate the spatial independence assumption against an augmentation of the approach that does leverage it. We report performance as a function of the number of samples from the environment. Part of the model learning process requires identifying which factors are independent, since there is no a priori assumption about the structure of the initiation and effect sets of the skills. Partitioning is done via DBSCAN clustering [30], and the precondition classifiers are learned using an SVM [31] with an RBF kernel (hyperparameters are optimized using grid search). The effect density estimation is performed with kernel density estimators [32, 33] with a Gaussian kernel, with a grid search over the bandwidth.

Figure 4: Learning symbols for the coffee preparation task, without the spatial independence assumption (James et al. [19]) and with the spatial independence assumption (AOSM). We report the number of sampled interactions with the environment against the planning success rate across 10 seeds.

Approaches: We use a codebase for learning symbols [19] that is state-of-the-art but does not leverage any spatial independence assumptions as our baseline. More details on the algorithm can be found in [19], but in summary: the robot collects transition data in an environment by either randomly navigating to a pose or choosing manipulation skills to execute, and then uses this data to learn a model describing the preconditions and effects of the skills via a partitioning and clustering process. Part of the model learning process requires identifying which factors are independent, since there is no a priori assumption about the structure of the initiation and effect sets of the skills. Details on the learning can be found in the supplementary material.

Metrics: To evaluate the usefulness of the resulting abstractions, we use Fast Downward [34], an off-the-shelf symbolic planner, to plan using the resulting symbolic vocabulary. We then use a binary metric to determine how useful the representation is for planning: if the resulting plan accomplishes the goal, the symbolic vocabulary is deemed successful; otherwise, it is deemed a failure. Our goal is to minimize the interactions required to learn a successful symbolic vocabulary for planning. We collect 1000 transitions with 10 different random seeds.

Results: The results of our experiment are in Figure 4. As the number of environmental samples increases, the success rate of planning with the symbols improves for both approaches, as expected. Learning with the spatial independence assumption, however, is able to learn a successful symbolic vocabulary with a nearly 100% planning success rate with about 50 samples, whereas the baseline approach that does not leverage spatial independence requires about 300 samples. This is due in part to the fact that, without leveraging the spatial independence assumption, the baseline requires more samples to learn to disentangle spatial information from non-spatial information, which is challenging since the spatial data is continuous. Our approach builds in the disentanglement between the spatial and non-spatial data, easing learning. These results demonstrate that our approach, which structures in the independence assumption, is more sample efficient than state-of-the-art approaches to learning abstractions. Examples of the learned symbolic vocabulary are in Figure 5.
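A hedged sketch of the partitioning-and-learning pipeline described above, using the named components (DBSCAN partitioning, RBF-SVM preconditions, Gaussian-KDE effects with grid-searched hyperparameters); this approximates, rather than reproduces, the codebase of [19].

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVC

def learn_symbols(start_states, end_states):
    # start_states, end_states: (N, d) arrays of one option's transitions.
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(end_states)
    models = {}
    for k in set(labels) - {-1}:                 # skip DBSCAN noise points
        mask = labels == k
        # Precondition: classify starts leading to this effect partition.
        clf = GridSearchCV(SVC(kernel="rbf", probability=True),
                           {"C": [1, 10], "gamma": ["scale", 0.1]})
        clf.fit(start_states, mask.astype(int))
        # Effect: Gaussian KDE over the resulting states.
        kde = GridSearchCV(KernelDensity(kernel="gaussian"),
                           {"bandwidth": [0.1, 0.3, 1.0]})
        kde.fit(end_states[mask])
        models[k] = (clf.best_estimator_, kde.best_estimator_)
    return models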
Figure 5: Example operators for two manipulation skills with the navigation symbols injected into the preconditions (red highlight). (Left): A learned operator for the PickUp(Mug) skill in AI2Thor. Symbols are renamed manually to provide human interpretability. (Right): A hand-specified operator for the PutIn(Mug,CoffeeMachine) skill in the Spot experiment.

Simulation Experiment: Transfer of Learned Abstractions. In the second set of experiments, we evaluate how AOSMs help transfer learned abstractions to novel environments. We provide a manipulation-only plan that prepares coffee, and the robot constructs the navigation symbols that enable it to generate the supporting navigation behaviors. There are two important design choices when learning navigation symbols that can be made independently of each other: 1) the spatial frame in which the navigation symbols are learned, and 2) the proposal distribution used for rejection sampling. We evaluate different combinations of these choices in two settings: one where the robot learns symbols in a single scene, and one where it must learn symbols across different scenes (i.e., transfer is necessary). For each task execution in a scene, we report the cumulative total number of manipulation skills the robot executed until the plan succeeded. Our results can be seen in Figure 3, and full details of our experiments can be found in the supplementary material. The main takeaway is that learning symbols in an object-centric frame is important for transferability.

Robot Hardware Demonstration: We demonstrate the effectiveness of AOSMs by executing a coffee preparation task on a Boston Dynamics Spot platform (Figure 1b). In this task, the robot must gather coffee grinds and water, pour them both into a coffee maker, close the lid of the coffee maker, and push a button to turn it on. We supply the robot with a set of portable manipulation skills PickUp(CoffeeGrinds), PickUp(WaterCup), Place(CoffeeGrinds), Place(WaterCup), Pour(WaterCup), Pour(CoffeeGrinds), CloseLid(CoffeeMachine), and PushButton(CoffeeMachine), whose implementation on the robot can be seen in Figure 1b. The objects are scattered around the room, requiring the robot to navigate the environment correctly to successfully execute the manipulation skills. Images and full details on the robot hardware demonstration and evaluation can be found in the supplementary material.

5 Limitations

While our approach leverages spatial structure to make learning abstractions for mobile manipulators more efficient, several of the input assumptions limit generality. Namely, our approach assumes a fully observable environment, so the semantic map must be created before learning occurs.
Future work will investigate learning in partially observable environments, handling skill repertoires that are continuously parameterized, and operating in highly dynamic and unstructured environments like the outdoors.

6 Conclusion

We have introduced the spatial independence property and proven how it can be used to more efficiently learn navigation abstractions by building an Action-Oriented Semantic Map. Once a robot has built an AOSM, it can find and execute long-horizon mobile manipulation plans; in our work, a real robot was able to construct the relevant navigation abstractions using just several minutes of data. Our results offer a promising path to enabling real robots to learn task-level abstractions in practical amounts of time, a capability critical for complex, goal-directed behavior.

References

[1] P. Bercher, R. Alford, and D. Höller. A survey on hierarchical planning - one abstract idea, many concrete realizations. In IJCAI, pages 6267-6275, 2019.
[2] S. Pateria, B. Subagdja, A.-h. Tan, and C. Quek. Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5):1-35, 2021.
[3] G. Konidaris, L. P. Kaelbling, and T. Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research, 61:215-289, 2018.
[4] H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine, 13(2):99-110, 2006.
[5] J. Aulinas, Y. Petillot, J. Salvi, and X. Lladó. The SLAM problem: a survey. Artificial Intelligence Research and Development, pages 363-371, 2008.
[6] T. T. Mac, C. Copot, D. T. Tran, and R. De Keyser. Heuristic approaches in robot path planning: A survey. Robotics and Autonomous Systems, 86:13-28, 2016.
[7] M. T. Mason. Toward robotic manipulation. Annual Review of Control, Robotics, and Autonomous Systems, 1:1-28, 2018.
[8] A. Billard and D. Kragic. Trends and challenges in robot manipulation. Science, 364(6446), 2019.
[9] J. Bohg, K. Hausman, B. Sankaran, O. Brock, D. Kragic, S. Schaal, and G. S. Sukhatme. Interactive perception: Leveraging action in perception and perception in action. IEEE Transactions on Robotics, 33(6):1273-1291, 2017.
[10] J. Bohg, A. Morales, T. Asfour, and D. Kragic. Data-driven grasp synthesis - a survey. IEEE Transactions on Robotics, 30(2):289-309, 2013.
[11] S. M. LaValle. Planning Algorithms. Cambridge University Press, 2006.
[12] O. Kroemer, S. Niekum, and G. Konidaris. A review of robot learning for manipulation: Challenges, representations, and algorithms. arXiv preprint arXiv:1907.03146, 2019.
[13] E. Rosen, N. Kumar, N. Gopalan, D. Ullman, G. Konidaris, and S. Tellex. Building plannable representations with mixed reality. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 11146-11153. IEEE, 2020.
[14] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, D. Gordon, Y. Zhu, A. Gupta, and A. Farhadi. AI2-THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474, 2017.
[15] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
[16] G. D. Konidaris and A. G. Barto. Building portable options: Skill transfer in reinforcement learning. In IJCAI, volume 7, pages 895-900, 2007.
[17] S. James, B. Rosman, and G. Konidaris. Learning portable representations for high-level planning.
In International Conference on Machine Learning, pages 4682-4691. PMLR, 2020.
[18] D. Precup. Temporal Abstraction in Reinforcement Learning. PhD thesis, University of Massachusetts Amherst, 2000.
[19] S. James, B. Rosman, and G. Konidaris. Autonomous learning of object-centric abstractions for high-level planning. In International Conference on Learning Representations, 2022.
[20] S. James, B. Rosman, and G. Konidaris. Autonomous learning of object-centric abstractions for high-level planning. In International Conference on Learning Representations, 2021.
[21] B. Ames, A. Thackston, and G. Konidaris. Learning symbolic representations for planning with parameterized skills. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 526-533. IEEE, 2018.
[22] N. Gopalan, E. Rosen, G. Konidaris, and S. Tellex. Simultaneously learning transferable symbols and language groundings from perceptual data for instruction following. Robotics: Science and Systems XVI, 2020.
[23] B. Abbatematteo, E. Rosen, S. Tellex, and G. Konidaris. Bootstrapping motor skill learning with motion planning. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4926-4933. IEEE, 2021.
[24] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In 2011 IEEE International Conference on Robotics and Automation, pages 1470-1477. IEEE, 2011.
[25] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265-293, 2021.
[26] J. Wolfe, B. Marthi, and S. Russell. Combined task and motion planning for mobile manipulation. In Twentieth International Conference on Automated Planning and Scheduling, 2010.
[27] C. Galindo, J.-A. Fernández-Madrigal, J. González, and A. Saffiotti. Robot task planning using semantic maps. Robotics and Autonomous Systems, 56(11):955-966, 2008.
[28] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[29] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.
[30] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pages 226-231, 1996.
[31] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[32] M. Rosenblatt. Remarks on some nonparametric estimates of a density function. The Annals of Mathematical Statistics, 27(3):832, 1956.
[33] E. Parzen. On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3):1065, 1962.
[34] M. Helmert. The Fast Downward planning system. Journal of Artificial Intelligence Research, 26:191-246, 2006.
FRKBdXhkQE0 | FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing
Kyle Stachowicz†, Dhruv Shah†, Arjun Bhorkar†, Ilya Kostrikov, Sergey Levine
UC Berkeley

Abstract: We present a system that enables a 1/10th-scale autonomous car to drive at high speeds from visual observations using reinforcement learning (RL). Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations. FastRLAP integrates several components to facilitate the learning process: we initialize low-dimensional visual representations from a similar reinforcement learning objective applied to a large offline navigation dataset from other robots, providing a navigation-relevant representation. Given a series of checkpoints representing a driving course, we then use sample-efficient online RL to learn a fast driving policy, resetting automatically on collision or failure. Perhaps surprisingly, our system can learn to drive over a variety of racing courses with less than 20 minutes of online training. The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.

Keywords: reinforcement learning, offroad driving, vision-based navigation

Figure 1: Fast reinforcement learning via autonomous practicing. By pre-training to learn task-relevant visual features (Stage 1), and deploying our autonomous practicing framework for continuous online improvement (Stage 2), the robot can autonomously navigate between sparse checkpoints (blue), recover from collisions (red), and improve its driving behavior to maximize speed (yellow → magenta). FastRLAP learns fast driving policies in as little as 20 minutes. Videos available at https://sites.google.com/view/fastrlap.

1 Introduction

High-speed vision-based navigation presents a range of challenges, requiring a policy that can account for both the vehicle's dynamics and its interactions with the terrain and obstacles (Fig. 1). Learning-based methods offer a particularly appealing approach to such challenges, as they can in principle capture arbitrary high-performance driving behaviors while accounting for visual indicators. Some prior work has approached similar problems via imitation learning, acquiring end-to-end skills from expert demonstrations [1, 2]. However, if we aim to maximize performance, we might instead prefer to directly adapt the driving strategy to the vehicle autonomously. In principle, reinforcement learning allows an agent to continually improve based on its experience, as shown previously in board games and robot manipulation, where RL can even exceed human performance [3-6]. However, in practice, learning autonomous navigation with RL presents major challenges. Because we cannot reset the system to a random state, the learning process is highly dependent on the system's ability to continually reach new states without human intervention.

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Figure 2: FastRLAP learns fast driving policies for a 1/10th-scale vehicle operating in diverse indoor and outdoor environments with challenging terrain in tens of minutes, using offline pre-training and online RL.
Instead, the RL-based system should train without supervision, while smoothly recovering from failures or collisions. Furthermore, directly learning from high-dimensional observations in the real world can be prohibitively slow. Because features are learned from a very weak signal (reward), RL often requires a huge number of interactions with the environment to learn a robust policy. Alternatively, we could learn entirely from offline data [7, 8], but this yields suboptimal policies when the desired behavior (aggressive driving) is not included in the dataset (low-speed navigation). The goal of this paper is to address these challenges and understand how RL can be applied to learn high-speed driving from vision in the real world. We design a system for Fast Reinforcement Learning via Autonomous Practicing (FastRLAP) which mitigates the sample complexity challenges by first learning a low-dimensional representation of driving-related features such as free space and obstacles from offline data, and then applying online RL to these features to learn a fast driving policy. The online RL phase proceeds autonomously, automatically recovering from failures and improving with each lap.

We demonstrate FastRLAP in challenging environments on a custom 1/10th-scale RC car modified for real-world online RL. FastRLAP can autonomously practice and learn aggressive maneuvers over time, improving by up to 40% over the demonstration lap and achieving performance close to that of a human expert. Notably, the online training phase typically takes less than 20 minutes (and as little as 5 minutes), depending on the size of the environment. During this time, the robot learns complex maneuvers such as drifting, avoiding low-speed or bumpy areas, and maintaining a racing line, without requiring high-speed human demonstrations or explicit rewards for these behaviors. The training requires no human interventions and is fully autonomous. To the best of our knowledge, FastRLAP is the first instantiation of a vision-based mobile robotic system that uses model-free RL to autonomously practice high-speed driving maneuvers and improve online in the real world.

2 Related Work

Figure 3: High-speed visual navigation faces challenges including: (a) noisy odometry and localization, (b) overexposure and motion blur, and (c) terrain-dependent over-/under-steer.

Leveraging prior data to bootstrap online learning has been widely studied in the context of supervised learning [9], representation learning [10], continual learning [11-15], and RL [8]. Offline RL in particular has proven powerful due to its ability to directly learn policies from large datasets, which can be fine-tuned through online interaction [16-19]. This has enabled a variety of robotic systems leveraging a combination of offline data and online interaction to perform real-world manipulation tasks [20-22], typically in controlled spaces in which the offline data consists of many high-performance demonstrations with the same robot in the target environment. In contrast, FastRLAP operates with only low-performance (slow) data, primarily from other robots and environments, and the majority of behaviors in the resulting policy do not appear in the original dataset.

Existing approaches for learning high-speed driving typically rely either on highly accurate position information to define states [23-26], localize visual observations relative to a high-fidelity global map [27, 28], or operate via behavioral cloning against some privileged expert [29].
This is prohibitive in unstructured environments, where (i) onboard state estimates can be highly inaccurate, and (ii) generating a high-fidelity map is difficult or impossible. FastRLAP learns high-speed driving directly from vision, and improves its behavior by self-practice without using privileged state.

Prior successes in learning visual navigation policies typically require large-scale simulated data [30-33], passive data [34], human interventions [35, 36], or real-world data from other robots [37]. While these modalities (in particular, simulation) are typically used to overcome the high number of samples often required by RL algorithms, we demonstrate that it is possible to train such a policy in reasonable time with only real-world interactions, opening the door to policies reflecting complex relationships between vision, dynamics, and terrain (Fig. 3) that might be difficult to simulate.

Several works have studied autonomous real-world RL via safety or reset-free training [38-43], with applications in robotic manipulation, locomotion, and mobility [20, 44-46]. We draw inspiration from these works to build a high-speed navigation system that uses a finite state machine to practice driving around a circuit. FastRLAP can drive diverse courses 100+ meters in length and continually improves its performance over the course of minutes rather than hours or days.

3 Autonomous Practicing with RL

The objective of our high-speed visual navigation task is to drive through a race course, defined by a sequence of position checkpoints {c_i}, in the minimum possible time. We assume access to two sources of offline data, neither of which contains the desired high-speed behavior: a large-scale dataset representing common navigation behaviors executed on a different robot, and a small dataset including a single lap around the course at low speed. Our system aims to enable efficient end-to-end RL in the real world. FastRLAP has three components (see Fig. 1): a high-level finite state machine (FSM) for autonomous practicing (shown in blue), a representation of visual observations learned via offline RL (purple), and a sample-efficient RL algorithm for online learning (orange).

3.1 Problem Formulation

We frame this task as a Markov decision process M(S, A, p, r) with state (V, v, ω, α, g, a_prev) ∈ S. Here, V ∈ R^{128×128×9} is a sequence of 3 RGB images; v, ω, α ∈ R^3 denote the robot's body-frame linear velocity, angular velocity, and linear acceleration; the goal g is a body-frame vector to the next checkpoint, expressed as a unit vector and a distance; and a_prev is the previous action.

In order to align the visual representations learned offline with those most useful for the online task, both the offline learning and online training phases are structured to maximize the same reward, ensuring optimal transfer between the two settings. To this end, we define the reward as the weighted sum of three components: speed-made-good, which is the dot product of the current velocity with the unit vector pointing towards the current goal; a collision penalty proportional to the magnitude of the collision (measured by lateral acceleration), applied only when a collision is detected; and a fixed stuck penalty applied whenever the robot is determined to be "stuck" by the practicing system.

3.2 Autonomous Practicing and Goal Checkpoint Selection

In the autonomous learning setting, the robot is expected to learn in the environment without any episodic resets or human interventions. Early in training, the policy may reach irrecoverable states, such as collisions, or otherwise become stuck. Without a reset, the learning algorithm may fail due to collapse in the state distribution [41]. To overcome this, we use a simple FSM that switches between a collision recovery policy and the learned policy.

When the RL policy reaches a checkpoint, the FSM selects a new goal corresponding to the next checkpoint in the course sequence {c_i}, forcing the learner to practice reaching all of the checkpoint goals in sequence. The goal checkpoints c_i are typically beyond line-of-sight (e.g., Fig. 1, blue), up to 40 meters away. If the RL policy reaches an irrecoverable state (see Sec. 4), the FSM commands an automatic recovery policy to provide a "pseudo-reset."
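A minimal sketch of the reward defined in Sec. 3.1 and consumed by the practicing FSM; the weights shown are placeholders, not the paper's tuned values.

import numpy as np

def compute_reward(v_body, goal_unit_vec, collision_accel, is_stuck,
                   w_collision=0.1, stuck_penalty=1.0):
    # v_body: (3,) body-frame velocity; goal_unit_vec: (3,) unit vector to
    # the next checkpoint; collision_accel: lateral acceleration magnitude
    # at a detected collision (0 if no collision).
    speed_made_good = float(np.dot(v_body, goal_unit_vec))
    r = speed_made_good
    r -= w_collision * collision_accel   # penalty proportional to impact
    if is_stuck:
        r -= stuck_penalty               # fixed penalty from the FSM
    return r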
3.2 Autonomous Practicing and Goal Checkpoint Selection

In the autonomous learning setting, the robot is expected to learn in the environment without any episodic resets or human interventions. Early in training, the policy may reach irrecoverable states, such as collisions, or otherwise become stuck. Without a reset, the learning algorithm may fail due to collapse in the state distribution [41]. To overcome this, we use a simple FSM that switches between a simple collision recovery policy and the learned policy.

When the RL policy reaches a checkpoint, the FSM selects a new goal corresponding to the next checkpoint in the course sequence $\{c_i\}$, forcing the learner to practice reaching all of the checkpoint goals in sequence. The goal checkpoints $c_i$ are typically beyond line-of-sight (e.g., Fig. 1, blue), up to 40 meters away. If the RL policy reaches an irrecoverable state (see Sec. 4), the FSM commands an automatic recovery policy to provide a "pseudo-reset."

3.3 Online RL Training

Algorithm 1: FastRLAP
  Data: Navigation dataset D, slow demo B_slow
  // Pre-Training
  while Encoder is not converged do
    s, a, s', idx <- LoadData(D)
    g <- LoadFutureData(D, idx + Rand(H))
    r <- ComputeReward(s, a, g)
    TrainIQL((s, g), a, r, (s', g))
  // Practicing and Online RL
  while True do
    // On Robot
    s <- Observe()
    if s near g then
      g <- NextCheckpoint(g)
    r <- ComputeReward(s_prev, a_prev, g)
    SendToWorkstation(s_prev, a_prev, r, s, g)
    a ~ π(φ(s_image), s_proprio, g)
    Actuate(a)
    if Collision or Stuck then
      Execute recovery policy
    // On Workstation
    ReceiveFromRobot(B)
    b <- Sample(B), b_d <- Sample(B_slow)
    π, Q <- TrainRLPD(π, Q, b, b_d)

To maximize reward and continually improve lap times, we use off-policy RL [3, 47]. Off-policy algorithms benefit greatly from performing many training steps for each environment step, a quantity known as the update-to-data (UTD) ratio: a high UTD ratio leads to efficient learning, but suffers from overfitting [48]. To overcome this limitation, we use RLPD [49], which trains an ensemble of critics to avoid catastrophic overestimation and overfitting [50] and learns quickly using a combination of online interactions and a small amount of suboptimal, on-task data. We obtain this on-task data by collecting a single slow lap in the target environment. While this data is very limited (under a minute in most environments) and does not contain fast driving behaviors, even suboptimal demonstrations can significantly accelerate online learning by avoiding critic collapse in the early stages of training [49]. During online training, we sample 50% of each training batch from this low-speed data, interleaved with 50% of data collected online. We found this to be critical to the efficiency of our system in our evaluations (Sec. 5.1).
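To illustrate the 50/50 sampling scheme above, here is a minimal sketch of how a training batch might be assembled from the online replay buffer and the slow demo lap; the buffer interface (a `sample(n)` method returning a dict of arrays) is an assumption for illustration.

```python
import numpy as np

def sample_mixed_batch(online_buffer, demo_buffer, batch_size=256):
    """RLPD-style symmetric sampling: half online data, half slow-demo data.

    Both buffers are assumed to expose sample(n) -> dict of transition arrays.
    """
    n_online = batch_size // 2
    n_demo = batch_size - n_online
    online = online_buffer.sample(n_online)  # data collected by the current policy
    demo = demo_buffer.sample(n_demo)        # suboptimal single slow lap
    # Concatenate field-by-field into one training batch.
    return {k: np.concatenate([online[k], demo[k]], axis=0) for k in online}
```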
3.4 Representation Learning with Offline RL

When training image-to-action policies, end-to-end RL allows gradients from the control objective to optimize the encoder. This results in a task-specific encoder that produces features that are most relevant to the agent's task, rather than general features (e.g., features necessary for classification or video prediction). Unfortunately, training directly on full images is very computationally expensive and unacceptably reduces the UTD ratio. Ideally, we would prefer to pre-train an encoder to produce task-relevant features offline, and then freeze the encoder during online training.

We address this by training the encoder with offline RL on an existing large-scale dataset with a similar objective in a different setting. In particular, we use RECON [51], a large-scale navigation dataset collected by manually driving a Clearpath Jackal UGV outdoors at low speeds. This dataset contains navigation trajectories from many environments and an entirely different robot, but importantly does not include aggressive high-speed driving. Thus, the role of pre-training is not to teach the robot how to drive quickly, but only to extract a relevant representation to simplify the online learning problem. The high-speed driving behaviors necessary to solve the desired task must be learned through practice in the real world, building on this pre-trained foundation.

We apply goal-conditioned offline RL by selecting a 1:1 mixture of random goals and goals from the robot's future trajectory in this dataset, and use Implicit Q-Learning [52] to train a critic network (illustrated in Fig. 1, purple). We then take the learned encoder, which now encodes features relevant to the navigation task, and freeze it for training the policy and critic (orange) as described in Sec. 3.3.

4 System Design for Online Learning

We instantiate FastRLAP on a 1/10th-scale autonomous car [53, 54]. Our system is based on a Traxxas Slash 4×4 modified to facilitate online learning. See the appendix for a full parts list.

Figure 4: FastRLAP achieves high-speed driving in diverse environments (Indoor-A/B/C, Outdoor-D/E, Sim-F) via autonomous practicing. FastRLAP improves lap times (best-so-far shown in black) given a slow demo lap (green) and achieves near-expert lap times (red) in under 40 minutes.

Sensing: Since our high-speed system operates directly on visual observations, we use a forward-facing fisheye camera to obtain a low-latency stream of 128×128 RGB images. The policy also depends on IMU data and motor speed, as well as the relative checkpoint position from a state estimator.

Indoor state estimation: Indoors, we use a RealSense T265 tracking camera, mounted facing the ceiling, to estimate the robot's pose and velocity.

Outdoor state estimation: We mount a GPS receiver onboard the robot. To estimate the absolute heading $\theta$, we use an extended Kalman filter to fuse wheel odometry $v_w$, absolute GPS velocity $\vec{v}$, and angular velocity $\omega$, with dynamics $\theta_{t+1} = \theta_t + \omega \Delta t$ and measurement model $\vec{v} = (v_w \cos\theta,\; v_w \sin\theta)$.

Compute: We use an NVIDIA Jetson Orin NX for onboard compute. We process visual observations onboard using a pre-trained encoder (Sec. 3.4), and offload training to a workstation with a GTX 1080 Ti GPU. We implement our algorithm in JAX [55] and compile several training steps into a single function, allowing ~800 actor-critic updates per second (vs. ~80 without any optimizations).

Actuation: Standard RC motors exhibit "cogging", a stuttering behavior that depends heavily on the (unobserved) rotor position. We instead use a sensored motor to provide closed-loop sequencing.

Action space: The action space consists of a steering angle and a target speed, which is limited to 4.5 m/s across most of our experiments to avoid damaging the robot. This limitation is revisited in Sec. 5.2 to safely learn a policy that drives at higher speeds. To ensure smooth actions, (i) we use a shifted tanh, linearized around $a_{\mathrm{prev}}$, to constrain the action to $[a_{\mathrm{prev}} - \delta,\; a_{\mathrm{prev}} + \delta]$, and (ii) we append the previous action to the observation.
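The action-smoothing constraint above can be written as a simple squashing function. The sketch below is one plausible reading of the "shifted tanh linearized around a_prev"; the exact parameterization is an assumption.

```python
import numpy as np

def smooth_action(raw, a_prev, delta, a_min=-1.0, a_max=1.0):
    """Squash the policy output so the commanded action stays within
    [a_prev - delta, a_prev + delta], then clip to the global action bounds.

    raw:    unbounded policy output
    a_prev: previously commanded action
    delta:  maximum per-step change
    """
    # tanh is approximately the identity near 0 (slope 1), so small raw
    # outputs move the action smoothly around a_prev, while large outputs
    # saturate at a_prev +/- delta.
    a = a_prev + delta * np.tanh(raw)
    return np.clip(a, a_min, a_max)
```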
See the appendix for details.

Detecting blocked states: The state machine triggers a pseudo-reset and delivers a negative reward when the robot collides (detected by high lateral acceleration) or has not moved for 3 seconds.
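A minimal sketch of this blocked-state logic follows; the numeric thresholds are assumptions, since the text specifies only "high lateral acceleration" and "not moving for 3 seconds".

```python
class BlockedStateDetector:
    """Triggers a pseudo-reset on collision (high lateral acceleration)
    or when the robot has not moved for `stuck_seconds`."""

    def __init__(self, lat_accel_threshold, speed_threshold, stuck_seconds=3.0):
        self.lat_accel_threshold = lat_accel_threshold
        self.speed_threshold = speed_threshold
        self.stuck_seconds = stuck_seconds
        self._stationary_since = None

    def update(self, lateral_accel, speed, t):
        """Return True if the FSM should hand control to the recovery policy."""
        if abs(lateral_accel) > self.lat_accel_threshold:
            return True  # collision detected
        if speed < self.speed_threshold:
            if self._stationary_since is None:
                self._stationary_since = t
            if t - self._stationary_since >= self.stuck_seconds:
                return True  # stuck: no motion for 3 seconds
        else:
            self._stationary_since = None
        return False
```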
5 Faster Lap Times with FastRLAP

In this section we present an experimental evaluation of FastRLAP in a variety of real-world and simulated environments. We consider several metrics to analyze peak performance, as well as cumulative metrics during practice. The time-to-first-lap (T2F) is the time taken to complete the first full collision-free lap, starting from scratch. We track the best lap time achieved during training, as well as the median time of the last five laps completed, to capture the converged behavior. Additionally, we list the median number of collisions over the last five laps to capture safety. To contextualize our results, we provide timings for laps driven in each environment by human drivers watching the robot from a third-person view ("Human Expert"), as well as the duration of the "slow demo" lap. We used the same hyperparameters (network architecture, learning rate, etc.) for all experiments, both in the real world and in simulation. See the appendix for a full list of hyperparameter values, detailed lap-time plots for all baselines, and additional qualitative analysis.

Figure 5: Emergent behaviors with FastRLAP. Maximizing speed with RL results in a "racing line", braking as late as possible to maintain speed in and out of tight corners (a) and drifting on slick surfaces (c). Outdoors, tall grass slows the robot's motion, promoting driving on paths (b). In Sim-F, the robot infers that the bridge is faster than driving through mud via visual correlation (d).

Table 1: Summary of experiments: FastRLAP rapidly learns fast driving policies in environments of varying difficulty, improving over the demo lap by over 40% and achieving lap times within 5% of the expert, using only egocentric observations.

Env.                  T2F (min)  Best (s)  Median (s)  Demo (s)  Expert† (s)  Median collisions
Indoor-A              4.07       32.7      39.0        54        25           0
Indoor-B              12.41      44.2      65.7        70        43           3
Indoor-C              7.27       10.9      11.7        17        7            0
Outdoor-D             11.04      17.1      22.7        43        18           0
Outdoor-E             19.29      62.1      94.0        160       40           3
Sim-F                 8.11       104.1     107.0       286       112          0
Sim-G                 41.54      18.0      18.1        36        19           0
Outdoor-D (Schedule)  21.5       13.1      23.4        43        18           0

Figure 6: Sample trajectories of FastRLAP practicing in Indoor-C. FastRLAP recovers from collision (green) and learns collision-free navigation (orange). Maximizing speed, FastRLAP discovers a smooth racing line (purple). The 3D scan is shown only for illustration.

5.1 Real-World Deployment

We deploy FastRLAP in several diverse environments to demonstrate autonomous practicing. Before training, we manually drive the robot around the course for a slow lap to define the rough layout of the track. This lap is used in two ways: (i) to generate a sequence of sparse checkpoints $\{c_i\}_{i=1}^{n_c}$ for the practicing FSM described in Sec. 3.2, and (ii) to provide a low-speed demonstration for off-policy actor-critic updates as described in Sec. 3.3.

We test in five real-world environments, three indoor and two outdoor, labeled A-E, shown in Fig. 4. We also test in two simulated environments, Sim-F and Sim-G. Environments include challenging features such as large scale (Indoor-C, Outdoor-E, Sim-F), tight or cluttered navigation (Indoor-B, Outdoor-E, Sim-G), and highly terrain-dependent speed that must be inferred by correlating visual observations with proprioceptive speed measurements during training (Outdoor-D, Outdoor-E, Sim-F). All environments are described in detail in the appendix.

Table 1 and Fig. 4 summarize the performance of our system in these environments. FastRLAP is able to consistently improve over the low-speed demonstration lap within a handful of laps, and nearly matches human performance in Indoor-B and Outdoor-D after 30 minutes of real-world practice, without any human interventions. As training progresses, the achieved lap times continue to decrease, with the robot's path becoming smoother as a secondary effect of optimizing speed (Fig. 6).

Emergent behaviors: Maximizing the reward for reaching checkpoints quickly leads to several emergent behaviors. The system learns a racing line, optimizing speed through corners and chicanes: in Fig. 5(a), the robot maintains speed through the apex of a tight corner, braking sharply to turn and accelerating out to minimize driving time. On a low-friction surface (Fig. 5(c)), the policy over-steers slightly, achieving fast rotation without braking. Outdoors (Fig. 5(b)), the learned policy prefers smooth, high-traction areas on and around concrete paths, avoiding tall grass that hinders motion.

5.2 Even Faster Laps: Scheduling Speed Limits

Figure 7: Lap time progression with scheduled increases in the speed limit. Note log scale.

The action space described in Sec. 4 allows the limits to be adjusted arbitrarily during training. Leveraging this property, we push the limits of FastRLAP in Outdoor-D, increasing the action bounds linearly over time from an initial maximum speed of 2.5 m/s to a final speed of 6.75 m/s. By increasing the maximum commanded speed smoothly from a slow initial limit, FastRLAP first learns basic behavior in the low-speed setting that then transfers to high-speed driving, without causing crashes at high speeds. We see that in the Outdoor-D setting FastRLAP is able to learn a much more aggressive policy than with the original limits, both quantitatively (Fig. 7) and qualitatively (see the videos on our website).
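The schedule above is a linear ramp of the action bound over training time; a minimal sketch, where the ramp duration is an assumed knob (the paper states only that the bound increases "linearly over time"):

```python
def speed_limit(t_minutes, ramp_minutes, v_start=2.5, v_end=6.75):
    """Linearly increase the maximum commanded speed (m/s) over training.

    t_minutes:    elapsed wall-clock training time
    ramp_minutes: assumed duration of the ramp
    """
    frac = min(max(t_minutes / ramp_minutes, 0.0), 1.0)
    return v_start + frac * (v_end - v_start)
```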
5.3 Comparative Analysis

We compare the performance of FastRLAP against several baselines and ablations in Indoor-C and Sim-G to demonstrate the importance of each of the components of our method: pre-trained visual representations, online RL starting from a slow demo lap, and autonomous recovery behaviors to handle the reset-free environment. Specifically, we consider the following variations:

No Demo Lap: Ablate the demonstration lap and use only online data.
No Pre-Training: Ablate pre-training and use DrQ [56] to learn the encoder from scratch.
No Pseudo-Resets: Ablate the scripted pseudo-resets, requiring the robot to learn recovery behavior.
ImageNet Pre-Training: Ablate task-specific pre-training and instead train the encoder for image classification on ImageNet [57, 58] (with the same encoder structure).
DINOv2 Pre-Training: Uses DINOv2 ViT-S [59], a self-supervised vision transformer, as the encoder in place of RL pre-training. The much larger model introduces roughly 40 ms of actor latency.
Offline RL: Ablate online learning and use a policy trained purely offline with IQL [52] on 15 minutes of expert data from the replay buffer of a successful run of FastRLAP.
State-Based: Replaces visual observations with privileged state (absolute x, y, θ from VIO).

In both Indoor-C and Sim-G, FastRLAP outperforms the ablation with no demo lap in both time-to-first-lap (T2F) and best lap time, while causing fewer collisions (Tab. 2). The demo lap helps the robot make progress early in training, enabling broad state coverage and better final performance. Removing pseudo-resets causes the robot to become stuck, leading to similarly poor performance.

While initializing FastRLAP with a general-purpose pre-trained visual encoder (DINOv2 or ImageNet) gives reasonable performance in simulation, its performance is comparatively poor in the real-world Indoor-C. This suggests that while general-purpose visual features are sufficient at low speeds, high-speed navigation requires task-specific features. Training the encoder online achieves good asymptotic performance, but takes a long time to complete its first collision-free lap and improves relatively slowly due to a reduced UTD ratio (Fig. 8). Our approach also outperforms the variant with access to privileged state information, suggesting that the pre-trained features generalize better than a simple localization estimate.

Table 2: Comparing to baselines: In real and simulated environments, FastRLAP has a faster time-to-first-lap (T2F), better best/median lap times, and fewer median collisions, and achieves near-human performance. Offline RL does not complete a collision-free lap. T2F is listed in minutes; other times in seconds; lower is better for all metrics. Simulation results are reported as mean ± std. dev. over 3 seeds.

                       Indoor-C                          Sim-G
Name                   T2F    Best  Median  Collisions   T2F       Best       Median  Collisions
State-Based            11.07  12.7  18.8    4            9.5±2.0   21.1±1.7   26.2    0
No Demo Lap            14.64  16.0  62.6    12           9.8±2.2   20.5±0.9   22.2    0
Offline RL [60]        ∞      –     –       –            –         –          –       –
No Pre-Training        10.34  12.7  20.0    1            10.3±2.2  19.3±1.3   18.4    0
ImageNet Encoder       10.05  19.7  29.7    1            8.2±1.4   21.0±2.7   22.1    0
DINOv2 Encoder [59]    16.09  17.0  34.8    4            8.6±1.7   20.6±1.3   25.6    0
FastRLAP (Ours)        7.27   10.9  13.3    0            6.9±0.9   19.3±0.1   18.1    0
Human FPV              –      11.1  14.4    2            –         18.6       18.9    0
Human Oracle†          –      7.3   8.8     0            –         –          –       –

6 Discussion

Figure 8: Lap times for baselines in Sim-G.

We presented a system for learning high-speed driving with reinforcement learning from rich observations, practicing autonomously in the real world. Our approach uses representations from prior data to initialize the policy, followed by sample-efficient online RL and a checkpoint-based navigation strategy to recover autonomously from collisions and continue practicing. Although deep RL is often believed to be inefficient and difficult to use in the real world, we demonstrate that with appropriate pre-training and system design it is possible to learn effective driving strategies in less than 20 minutes of real-world training. This result may seem quite surprising when viewed in contrast to prior work that uses simulated data [30] or hundreds of hours of training [61], and it provides strong validation that deep RL, in conjunction with task-specific pre-training and approximate resets, can indeed be a viable tool for learning real-world policies from raw images.

A qualitative investigation of the policies learned by our system also reveals interesting emergent behavior. Although we bootstrap training with prior data (from other domains and other robots) and a single slow demonstration lap, the learned policies exhibit behaviors that deviate significantly from the dataset, including drifting, selecting for high-speed terrain, and maintaining a racing line. Thus, the online RL process not only robustifies existing behavior, as observed in prior work [21], but also acquires new emergent behaviors by building on the foundation established by the prior data.
Our ablations establish the importance of task-relevant pre-training, supporting the notion that representations learned from diverse robot navigation data serve as an effective foundation for downstream skill learning, much as pre-training enables efficient fine-tuning in vision and NLP [62, 63].

Limitations and future work: While our system enables highly effective image-based driving, it does have several limitations. First, the current implementation requires a coarse state estimator to provide a vector to the next checkpoint. This could be addressed in future work by specifying future goals in another format, such as images [64]. Second, our system does not explicitly account for safety during the training process: the agent will learn to avoid collisions because they lead to task failure, but high-speed collisions during training could cause damage. Future work could include a conservative or risk-aware formulation to counteract this effect. Nevertheless, we believe that our work represents a step towards RL-based systems that can autonomously learn highly performant navigation skills in a wide range of domains.

Acknowledgments

This research was partially supported by DARPA RACER, ARL DCIST CRA W911NF-17-2-0181, the National Science Foundation through IIS-2150826, and the Office of Naval Research. The authors would like to thank Alejandro Escontrela, Noriaki Hirose, and Philippe Hansen-Estruch for their help with running experiments and providing baseline implementations.

References

[1] M. Bojarski et al. End to end learning for self-driving cars, 2016.
[2] M. Bansal, A. Krizhevsky, and A. Ogale. ChauffeurNet: Learning to drive by imitating the best and synthesizing the worst. In Robotics: Science and Systems, 2019.
[3] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv, 2013.
[4] J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 2020.
[5] S. Gu, E. Holly, T. Lillicrap, and S. Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In IEEE International Conference on Robotics and Automation (ICRA), 2017.
[6] O. Kroemer, S. Niekum, and G. Konidaris. A review of robot learning for manipulation: Challenges, representations, and algorithms. JMLR, 2021.
[7] S. Lange, T. Gabel, and M. Riedmiller. Batch reinforcement learning. Reinforcement Learning: State-of-the-Art, 2012.
[8] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems, 2020.
[9] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[10] D. Yarats, A. Zhang, I. Kostrikov, B. Amos, J. Pineau, and R. Fergus. Improving sample efficiency in model-free reinforcement learning from images. In AAAI Conference on Artificial Intelligence, 2021.
[11] S. Thrun and T. M. Mitchell. Lifelong robot learning. Robotics and Autonomous Systems, 1995.
[12] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In 26th Annual International Conference on Machine Learning, 2009.
[13] C. Florensa, D. Held, X. Geng, and P. Abbeel. Automatic goal generation for reinforcement learning agents.
In International Conference on Machine Learning, 2018.
[14] T. Matiisen, A. Oliver, T. Cohen, and J. Schulman. Teacher-student curriculum learning. IEEE Transactions on Neural Networks and Learning Systems, 31(9):3732-3740, 2020. doi:10.1109/TNNLS.2019.2934906.
[15] S. Sukhbaatar, Z. Lin, I. Kostrikov, G. Synnaeve, A. Szlam, and R. Fergus. Intrinsic motivation and automatic curricula via asymmetric self-play. In Intl. Conf. on Learning Representations (ICLR), 2018.
[16] A. Nair, A. Gupta, M. Dalal, and S. Levine. AWAC: Accelerating online reinforcement learning with offline datasets. arXiv, 2020.
[17] A. Villaflor, J. Dolan, and J. Schneider. Fine-tuning offline reinforcement learning with model-based policy optimization, 2021.
[18] T. Xie, N. Jiang, H. Wang, C. Xiong, and Y. Bai. Policy finetuning: Bridging sample-efficient offline and online reinforcement learning. In Neural Information Processing Systems, 2021.
[19] S. Lee, Y. Seo, K. Lee, P. Abbeel, and J. Shin. Offline-to-online reinforcement learning via balanced replay and pessimistic Q-ensemble. In Conf. on Robot Learning, 2021.
[20] H. R. Walke, J. H. Yang, A. Yu, A. Kumar, J. Orbik, A. Singh, and S. Levine. Don't start from scratch: Leveraging prior data to automate robotic reinforcement learning. In OpenReview, 2022.
[21] A. Kumar, A. Singh, F. Ebert, Y. Yang, C. Finn, and S. Levine. Pre-training for robots: Offline RL enables learning new tasks from a handful of trials. arXiv, 2022.
[22] N. Gürtler, S. Blaes, P. Kolev, F. Widmaier, M. Wuthrich, S. Bauer, B. Schölkopf, and G. Martius. Benchmarking offline reinforcement learning on real-robot hardware. In Intl. Conf. on Learning Representations (ICLR), 2023.
[23] J. Funke, P. Theodosis, R. Hindiyeh, G. Stanek, K. Kritatakirana, C. Gerdes, D. Langer, M. Hernandez, B. Müller-Bessler, and B. Huhnke. Up to the limits: Autonomous Audi TTS. In IEEE Intelligent Vehicles Symposium, 2012. doi:10.1109/IVS.2012.6232212.
[24] N. Keivan and G. Sibley. Realtime simulation-in-the-loop control for agile ground vehicles. In Towards Autonomous Robotic Systems, 2013.
[25] G. Williams, P. Drews, B. Goldfain, J. M. Rehg, and E. A. Theodorou. Aggressive driving with model predictive path integral control. IEEE International Conference on Robotics and Automation (ICRA), pages 1433-1440, 2016.
[26] U. Rosolia and F. Borrelli. Learning how to autonomously race a car: A predictive control approach. IEEE Trans. on Control Systems Technology, 2020.
[27] P. Drews, G. Williams, B. Goldfain, E. A. Theodorou, and J. M. Rehg. Aggressive deep driving: Combining convolutional neural networks and model predictive control. In Conf. on Robot Learning, 2017.
[28] P. Drews, G. Williams, B. Goldfain, E. A. Theodorou, and J. M. Rehg. Vision-based high-speed driving with a deep dynamic observer. IEEE Robotics and Automation Letters, 2019. doi:10.1109/LRA.2019.2896449.
[29] Y. Pan, C.-A. Cheng, K. Saigol, K. Lee, X. Yan, E. A. Theodorou, and B. Boots. Imitation learning for agile autonomous driving. The International Journal of Robotics Research, 2020.
[30] A. Loquercio, E. Kaufmann, R. Ranftl, A. Dosovitskiy, V. Koltun, and D. Scaramuzza. Deep drone racing: From simulation to reality with domain randomization. IEEE Transactions on Robotics, 2020.
[31] A. Loquercio, E. Kaufmann, R. Ranftl, M. Müller, V. Koltun, and D. Scaramuzza. Learning high-speed flight in the wild. Science Robotics, 2021.
[32] F. Fuchs, Y. Song, E. Kaufmann, D. Scaramuzza, and P. Dürr.
Super-human performance in Gran Turismo Sport using deep reinforcement learning. IEEE Robotics and Automation Letters, 6(3):4257-4264, 2021. doi:10.1109/LRA.2021.3064284.
[33] T. Gervet, S. Chintala, D. Batra, J. Malik, and D. S. Chaplot. Navigating to objects in the real world. arXiv, abs/2212.00922, 2022.
[34] M. Chang, A. Gupta, and S. Gupta. Semantic visual navigation by watching YouTube videos. In Advances in Neural Information Processing Systems, 2020.
[35] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J. Allen, V. Lam, A. Bewley, and A. Shah. Learning to drive in a day. CoRR, 2018.
[36] G. Kahn, P. Abbeel, and S. Levine. LaND: Learning to navigate from disengagements. IEEE Robotics and Automation Letters, 2021. doi:10.1109/LRA.2021.3060404.
[37] D. Shah, A. Sridhar, A. Bhorkar, N. Hirose, and S. Levine. GNM: A general navigation model to drive any robot. arXiv, 2022.
[38] W. Han, S. Levine, and P. Abbeel. Learning compound multi-step controllers under unknown dynamics. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
[39] C. Richter and N. Roy. Safe visual navigation via deep learning and novelty detection. In Robotics: Science and Systems, 2017.
[40] B. Eysenbach, S. Gu, J. Ibarz, and S. Levine. Leave no trace: Learning to reset for safe and autonomous reinforcement learning. In Intl. Conf. on Learning Representations (ICLR), 2018.
[41] H. Zhu, J. Yu, A. Gupta, D. Shah, K. Hartikainen, A. Singh, V. Kumar, and S. Levine. The ingredients of real world robotic reinforcement learning. In Intl. Conf. on Learning Representations (ICLR), 2020.
[42] K. Lu, A. Grover, P. Abbeel, and I. Mordatch. Reset-free lifelong learning with skill-space planning. In Intl. Conf. on Learning Representations (ICLR), 2021.
[43] A. Sharma, K. Xu, N. Sardana, A. Gupta, K. Hausman, S. Levine, and C. Finn. Autonomous reinforcement learning: Formalism and benchmarking. In Intl. Conf. on Learning Representations (ICLR), 2022.
[44] A. Gupta, J. Yu, T. Z. Zhao, V. Kumar, A. Rovinsky, K. Xu, T. Devlin, and S. Levine. Reset-free reinforcement learning via multi-task learning: Learning dexterous manipulation behaviors without human intervention. In IEEE International Conference on Robotics and Automation (ICRA), 2021.
[45] S. Ha, P. Xu, Z. Tan, S. Levine, and J. Tan. Learning to walk in the real world with minimal human effort. In Conference on Robot Learning, 2020.
[46] C. Sun, J. Orbik, C. M. Devin, B. H. Yang, A. Gupta, G. Berseth, and S. Levine. Fully autonomous real-world reinforcement learning with applications to mobile manipulation. In Conf. on Robot Learning, 2022.
[47] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, 2019.
[48] P. D'Oro, M. Schwarzer, E. Nikishin, P.-L. Bacon, M. G. Bellemare, and A. Courville. Sample-efficient reinforcement learning by breaking the replay ratio barrier. In Deep Reinforcement Learning Workshop, NeurIPS 2022.
[49] P. J. Ball, L. Smith, I. Kostrikov, and S. Levine. Efficient online reinforcement learning with offline data, 2023.
[50] X. Chen, C. Wang, Z. Zhou, and K. Ross. Randomized ensembled double Q-learning: Learning fast without a model, Mar. 2021. arXiv:2101.05982 [cs].
[51] D. Shah, B. Eysenbach, N. Rhinehart, and S. Levine. Rapid exploration for open-world navigation with latent goal models. 2021.
[52] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit Q-learning. arXiv, 2021.
[53] MIT RACECAR, 2014. URL https://racecar.mit.edu.
[54] M. O'Kelly, H. Zheng, D. Karthik, and R. Mangharam. F1TENTH: An open-source evaluation environment for continuous control and reinforcement learning. In NeurIPS 2019 Competition and Demonstration Track, 2020.
[55] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
[56] I. Kostrikov, D. Yarats, and R. Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels, Mar. 2021.
[57] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[58] S. Parisi, A. Rajeswaran, S. Purushwalkam, and A. Gupta. The unsurprising effectiveness of pre-trained vision models for control. In International Conference on Machine Learning, 2022.
[59] M. Oquab et al. DINOv2: Learning robust visual features without supervision, 2023.
[60] D. Shah, A. Bhorkar, H. Leen, I. Kostrikov, N. Rhinehart, and S. Levine. Offline reinforcement learning for visual navigation. In Conf. on Robot Learning, 2022.
[61] E. Wijmans, A. Kadian, A. Morcos, S. Lee, I. Essa, D. Parikh, M. Savva, and D. Batra. DD-PPO: Learning near-perfect PointGoal navigators from 2.5 billion frames. In Intl. Conf. on Learning Representations (ICLR), 2020.
[62] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv, 2018.
[63] X. Chen, H. Fan, R. Girshick, and K. He. Improved baselines with momentum contrastive learning. arXiv, 2020.
[64] D. Shah and S. Levine. ViKiNG: Vision-based kilometer-scale navigation with geographic hints. In Robotics: Science and Systems XVIII, 2022.
Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning

Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
University of California, Berkeley

Abstract: The offline reinforcement learning (RL) paradigm provides a general recipe to convert static behavior datasets into policies that can perform better than the policy that collected the data. While policy constraints, conservatism, and other methods for mitigating distributional shift have made offline reinforcement learning more effective, the continuous action setting often necessitates various approximations for applying these techniques. Many of these challenges are greatly alleviated in discrete action settings, where offline RL constraints and regularizers can often be computed more precisely or even exactly. In this paper, we propose an adaptive scheme for action quantization. We use a VQ-VAE to learn state-conditioned action quantization, avoiding the exponential blowup that comes with naïve discretization of the action space. We show that several state-of-the-art offline RL methods, such as IQL, CQL, and BRAC, improve in performance on benchmarks when combined with our proposed discretization scheme. We further validate our approach on a set of challenging long-horizon, complex robotic manipulation tasks in the Robomimic environment, where our discretized offline RL algorithms are able to improve upon their continuous counterparts by 2-3x. Our project page is at saqrl.github.io.

Keywords: Offline Reinforcement Learning, Discretization

1 Introduction

Offline reinforcement learning (RL) aims to learn policies from static databases of previously collected experience. As opposed to pure imitation, offline RL holds the promise of transforming logged datasets into policies that perform better than the behavior policy that collected the dataset, by maximizing task rewards. The key challenge in offline RL is the overestimation of the value of actions that were not seen in the dataset, which destabilizes training and leads to policies that perform much worse than their value function estimates would suggest. To address this issue, a wide variety of methods have been proposed recently [1, 2, 3, 4, 5]. Typically, these methods employ some mechanism to stay "close" to the behavior policy, such as policy constraints or value conservatism.

While these methods in principle address overestimation and distributional shift, in practice implementing these approaches requires various approximations (depending on the method) that can lead to hyperparameter sensitivity or performance that is worse than their theoretical formulation might suggest. This issue is particularly pronounced on "narrow" datasets that consist of relatively deterministic behavior, such as the demonstration data that is often used in robotic learning. Our key observation is that these issues can be mitigated by properly discretizing continuous action spaces, and then employing discrete-action versions of these methods, where many of these approximations are unnecessary. For example, policy constraints or any conservatism regularizer that requires expectations over actions can be computed exactly with discrete actions.

Unfortunately, naïvely discretizing the action space can result in an exponential blowup in the number of actions, while coarse adaptive discretization methods can lead to imprecise actions.
In this paper, we propose an adaptive scheme, state-conditioned action quantization (SAQ), for discretizing continuous action spaces. We perform state-conditioned action discretization by utilizing a VQ-VAE model. SAQ is based on a simple insight: given a particular dataset, there are often only a few in-distribution options available in each state, corresponding roughly to the "primitives" that are supported under the data. This allows us to use comparatively very small discrete action spaces, without suffering from the curse of dimensionality, while still enjoying the benefits of discrete action spaces and simpler offline RL algorithm implementations.

The main contribution of this work is a practical approach, SAQ, for learning quantized action representations that improve continuous-action offline RL methods on a variety of robotic learning tasks. We present a general method for learning state-conditioned action discretizations and then apply this method with three offline RL methods: conservative Q-learning (CQL) [3], implicit Q-learning (IQL) [4], and behavior regularized actor-critic (BRAC) [1]. All of these methods require some sort of approximation with continuous actions, while the discrete version provides a convenient implementation that avoids such approximations. We find that the discrete version of each method implemented with SAQ generally yields improved performance over the method's continuous-action counterpart on commonly used benchmark tasks, particularly on "narrow" datasets (e.g., expert data). We also evaluate these methods on a set of challenging robotic manipulation tasks from the Robomimic environment [6], where continuous offline RL methods struggle to achieve good performance, and find that our approach outperforms prior offline RL methods in this setting by a large margin.

2 Related Work

Offline RL. Some offline RL methods use a policy constraint to mitigate overestimation from distributional shift, constraining the learned policy to stay close to the data via density modeling or some other divergence measure [1, 2, 7, 8, 9, 10, 11, 12, 13, 4, 5, 14]. Another line of work directly regularizes the Q-values of unseen actions, which we refer to as conservative value function methods [3, 15, 16, 17, 18, 19, 20]. While both types of methods can be formulated in ways that in principle address distributional shift, in practice they require some kind of approximation to be applied with continuous actions, either in estimating the behavior policy, or in computing integrals or expectations over the action space to formulate pessimism penalties. This introduces approximation errors and complexity. We show that our approach can avoid the need for such approximations for three representative methods from both categories, leading to improved performance.

Discretizing continuous action spaces for control. Recent work has shown that discretizing continuous actions can yield good performance, and several discretization strategies have been introduced. To address the exponential growth of actions in the naïve discretization scheme, one line of methods assumes independence of action dimensions [21, 22, 23, 24, 25, 26], or performs discretization in an autoregressive way [27, 28, 29]. Our work is different in that we perform state-dependent discretization, which adaptively controls the precision of the discretization scheme. Perhaps the closest to our work is Dadashi et al. [30], which also learns a state-dependent discretization.
It assumes access to a human demonstration dataset: given the current state, it uses a neural network to index a set of plausible continuous actions, and trains the network by comparing its output with the demonstration data. Our work differs in that we focus on the offline RL setting, where discretization can enforce conservatism and policy constraints exactly. To our knowledge, our work is the first to propose adaptive action discretization with VQ-VAEs for offline RL.

3 Preliminaries

The RL problem is formally defined by a Markov decision process (MDP) $\mathcal{M} = (\mathcal{S}, \mathcal{A}, T, r, \mu_0, \gamma)$, where $\mathcal{S}$ and $\mathcal{A}$ denote the state and action spaces, and $T(s'|s,a)$ and $r(s,a)$ represent the dynamics and reward function, respectively. $\mu_0(s)$ denotes the initial state distribution, and $\gamma \in (0,1)$ denotes the discount factor. The objective of RL is to learn a policy that maximizes the return (discounted sum of rewards): $\max_\pi J(\pi) := \mathbb{E}_{(s_t,a_t)\sim\pi}\left[\sum_t \gamma^t r(s_t,a_t)\right]$. In offline RL, we are provided with an offline dataset $\mathcal{D} = \{(s,a,r,s')\}$ of transitions collected using a behavior policy $\pi_\beta$, and our goal is to find the best possible policy using only the given dataset, without any additional online data collection. In this paper, we focus on three offline RL methods, conservative Q-learning (CQL), implicit Q-learning (IQL), and behavior regularized actor-critic (BRAC), though our approach could likely be extended to other methods as well.

Conservative Q-learning. Naïvely learning a Q-value function from the offline dataset (e.g., via Q-learning or FQI) suffers from OOD actions [31, 2, 32], and the CQL algorithm [3] applies a regularizer $\mathcal{R}(\theta)$ to prevent querying the target Q-function on unseen actions. $\mathcal{R}(\theta)$ minimizes the Q-values under the policy $\pi(a|s)$, and counterbalances this term by maximizing the values of the actions in $\mathcal{D}$. Formally, we have the following objective during the Bellman backup:

$$\min_\theta \; \frac{1}{2}\,\mathbb{E}_{s,a,s'\sim\mathcal{D},\; a'\sim\pi}\Big[\big(Q_\theta(s,a) - r - \gamma \bar{Q}(s',a')\big)^2\Big] + \alpha\Big(\mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi}\big[Q_\theta(s,a)\big] - \mathbb{E}_{s,a\sim\mathcal{D}}\big[Q_\theta(s,a)\big]\Big), \quad (1)$$

where $\bar{Q}$ denotes the target Q-function.

Behavior regularized actor-critic. The specific version BRAC-v explicitly subtracts the divergence $D(\pi(\cdot|s'), \pi_\beta(\cdot|s'))$ from the target value while performing the Bellman update. Additionally, since the divergence between the learned policy and the behavior policy at the current state is not a part of the Q-function, BRAC-v also explicitly adds the divergence value at the current state to the policy update. We instantiate the version of BRAC that uses the KL-divergence:

$$D_{\mathrm{KL}}(\pi(\cdot|s),\, \pi_\beta(\cdot|s)) = \mathbb{E}_{a\sim\pi(\cdot|s)}\big[\log\pi(a|s) - \log\pi_\beta(a|s)\big]. \quad (2)$$

To estimate the second term $\log\pi_\beta(a|s)$, BRAC trains an additional behavior policy, which we denote as $\hat{\pi}_\beta$. Denoting the policy and the Q-function as $\pi_\phi$ and $Q_\theta$, BRAC-v performs the following Bellman update:

$$\min_\theta \; \mathbb{E}_{s,a\sim\mathcal{D}}\Big[\big(r(s,a) + \gamma\,\mathbb{E}_{a'\sim\pi_\phi(\cdot|s')}\big[\bar{Q}_\theta(s',a') + \beta\log\hat{\pi}_\beta(a'|s')\big] - Q_\theta(s,a)\big)^2\Big], \quad (3)$$

then extracts a policy by:

$$\max_\phi \; \mathbb{E}_{s\sim\mathcal{D},\, a\sim\pi_\phi(\cdot|s)}\big[Q_\theta(s,a) + \beta\log\hat{\pi}_\beta(a|s) - \alpha\log\pi_\phi(a|s)\big]. \quad (4)$$

Implicit Q-learning. Instead of applying policy constraints or critic regularization, IQL uses expectile regression to learn a value function that approximates an expectile $\tau$ over the distribution of actions by optimizing the value objective

$$L_V(\psi) = \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[L_2^\tau\big(Q_{\hat{\theta}}(s,a) - V_\psi(s)\big)\Big],$$

where $Q_{\hat{\theta}}(s,a)$ is a parameterized target critic and $L_2^\tau(u) = |\tau - \mathbb{I}(u < 0)|\, u^2$.
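As a concrete reference for this asymmetric loss, here is a minimal sketch of the expectile regression objective; tensor shapes and the default value of tau are illustrative assumptions.

```python
import torch

def expectile_loss(q_values, v_values, tau=0.7):
    """IQL value loss: L2^tau(u) = |tau - 1(u < 0)| * u^2, with u = Q - V.

    tau > 0.5 weights positive residuals more heavily, pushing V toward
    an upper expectile of the Q distribution over dataset actions.
    """
    u = q_values - v_values
    weight = torch.abs(tau - (u < 0).float())  # |tau - 1(u < 0)|
    return (weight * u.pow(2)).mean()
```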
This value function is then used to update the Q-function with the TD error, following the objective

$$L_Q(\theta) = \mathbb{E}_{(s,a,s')\sim\mathcal{D}}\Big[\big(r(s,a) + \gamma V_\psi(s') - Q_\theta(s,a)\big)^2\Big].$$

With the value function $V_\psi$ and Q-function $Q_{\hat{\theta}}(s,a)$, the policy is learned using advantage-weighted regression with the loss below:

$$L_\pi(\phi) = \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[\exp\big(\alpha\big(Q_{\hat{\theta}}(s,a) - V_\psi(s)\big)\big)\log\pi_\phi(a|s)\Big].$$

Vector quantized variational autoencoders. Our method uses the VQ-VAE [33] as part of the discretization process. The VQ-VAE consists of an encoder and a decoder. The encoder network parameterizes a posterior distribution $q_\phi(z|x)$ over discrete latent variables $z$ given the input data $x$. The decoder, which parameterizes the distribution $p_\theta(x|z)$, then reconstructs the observations from these discrete variables. Specifically, the encoder first outputs a continuous embedding $e_\phi(x) \in \mathbb{R}^D$. Discretization is done with a codebook defined on $\mathbb{R}^{K\times D}$, composed of $K$ vectors $e_j \in \mathbb{R}^D$, where $j = 1, 2, \ldots, K$. The embedding $e_\phi(x)$ is compared with all the vectors in the codebook, and the discrete state $z$ is chosen to correspond to the nearest codebook entry. The discrete posterior is then defined as:

$$q(z = k \,|\, x) = \begin{cases} 1, & \text{if } k = \arg\min_j \lVert e_\phi(x) - e_j \rVert_2 \\ 0, & \text{otherwise} \end{cases} \quad (5)$$

Figure 1: Left: Training SAQ. Right: Discrete policy training/evaluation with SAQ. SAQ learns a scalar discrete representation of continuous actions using a state-conditioned VQ-VAE. During policy training and evaluation, we use the decoder of SAQ to reconstruct continuous actions from the discrete policy actions.

Let us denote $z_q^\phi(x)$ as the final latent code from the encoder, i.e., $z_q^\phi(x) = e_k$. The decoder reconstructs the original input $x$ based on $z_q^\phi(x)$. Additionally, a codebook loss and a commitment loss are added to move the codebook vectors and encoder embeddings close to each other. The overall training objective is as follows:

$$\mathcal{L}(\theta, \phi, x) = \mathbb{E}_{e\in\{e_j\}_{j=1}^{K}}\Big[-\log p_\theta\big(x \,|\, z_q^\phi(x)\big) + \lVert \mathrm{sg}[e_\phi(x)] - e \rVert_2^2 + \lVert e_\phi(x) - \mathrm{sg}[e] \rVert_2^2\Big], \quad (6)$$

where $\mathrm{sg}$ stands for the stop-gradient operator. We can then optimize the parameters of the encoder and decoder with this objective.
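To make the quantization step in Eqs. 5-6 concrete, below is a minimal PyTorch-style sketch of the nearest-neighbor codebook lookup and the stop-gradient losses; names, dimensions, and the commitment weight `beta` (not shown in Eq. 6, but standard in VQ-VAE training) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def quantize(z_e, codebook, beta=0.25):
    """Nearest-neighbor vector quantization with VQ-VAE losses.

    z_e:      encoder embeddings, shape (B, D)
    codebook: K learnable code vectors, shape (K, D)
    Returns quantized codes (with straight-through gradients),
    discrete indices, and the codebook + commitment losses.
    """
    dists = torch.cdist(z_e, codebook)   # (B, K) distances to every code
    idx = dists.argmin(dim=1)            # Eq. 5: index of nearest code
    z_q = codebook[idx]                  # (B, D)

    codebook_loss = F.mse_loss(z_q, z_e.detach())            # ||sg[e(x)] - e||^2
    commitment_loss = beta * F.mse_loss(z_e, z_q.detach())   # ||e(x) - sg[e]||^2

    # Straight-through estimator: forward pass uses z_q, backward pass
    # routes gradients to the encoder as if quantization were the identity.
    z_q_st = z_e + (z_q - z_e).detach()
    return z_q_st, idx, codebook_loss + commitment_loss
```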
4 State-Conditioned Action Quantization for Offline RL

The central premise of this paper is that carefully and efficiently discretizing the action space of a continuous-action RL problem can make it significantly simpler to implement offline RL methods based on critic or actor regularization, and that such discretized methods can attain better performance on especially difficult data distributions, such as narrow datasets. How should we discretize a continuous action space? Discretizing too coarsely can make it difficult to perform delicate tasks that require fine control, and can collapse in-distribution and out-of-distribution actions into the same bin, making offline RL more difficult. On the other hand, naïvely using a very fine discretization can be intractable due to the curse of dimensionality. Our key observation is that we can construct a discretization with relatively few discrete actions but minimal loss of resolution if we employ a learned state-conditioned discretization, which we can accomplish by means of a VQ-VAE model. In this section, we describe this approach and discuss how it can be combined with simple and effective discrete-action offline RL methods.

4.1 State-Conditioned Action Discretization via Scalar-Quantized Auto-Encoders

Abstractly, we aim to learn a state-conditioned action discretization (SAQ) scheme that can map a continuous action $a$ at a given state $s$ to a discrete variable $\hat{a}$ for training, and then, given a choice of the discrete variable $\hat{a}$, map it back to the original action space $\mathcal{A}$ for evaluation.

Given this requirement, a natural choice is to utilize auto-encoders that can produce state-conditioned discrete codes for a given action. Therefore, our approach adapts the vector-quantized variational auto-encoder (VQ-VAE) [33] framework to perform action discretization, with the difference that, unlike conventional VQ-VAEs that typically learn a multi-dimensional discrete latent code for an input, we only aim to learn a scalar discrete latent code for a given input action, as this is sufficient for performing downstream RL.

More formally, denoting the parameters of the VQ-VAE encoder with $\phi$ and the decoder with $\theta$, we wish to learn an encoder $q_\phi(\cdot|s,a)$ that produces a one-dimensional discrete latent action, and a decoder that maps the discrete action code back to the original action space, $p_\theta(a|s,\hat{a})$. Following the training procedure of VQ-VAEs, we first obtain a latent embedding $z_q^\phi(s,a)$ from the approximate posterior $q_\phi(\cdot|s,a)$, compare it against the codebook, and obtain the nearest vector $e_k$. Next, we run this latent action code through the decoder to obtain an approximate reconstructed action, $\tilde{a} \sim p(\cdot|s, e_k)$, which is trained to minimize the reconstruction error against the original action $a$ passed as input to the encoder. Following the training procedure proposed by Van den Oord et al. [33], the overall training objective for SAQ is shown in Equation 7, where $\mathrm{sg}$ is the stop-gradient operator and $\mathcal{D}$ is the offline dataset:

$$\min_{\theta,\phi}\; \mathbb{E}_{s,a\sim\mathcal{D},\; e\in\{e_j\}_{j=1}^{K}}\Big[-\log p_\theta\big(a \,|\, z_q^\phi(s,a)\big) + \lVert \mathrm{sg}[e_\phi(s,a)] - e \rVert_2^2 + \lVert e_\phi(s,a) - \mathrm{sg}[e] \rVert_2^2\Big] \quad (7)$$

4.2 Offline RL with Discretized Actions

After training the VQ-VAE to obtain a discrete action representation, we can transform the original problem with continuous actions into a discrete one by learning policies and value functions in the discrete action space.

SAQ-CQL. For CQL, we follow the discrete variant of CQL [3], where we learn the Q-function with a conservatism penalty and parameterize the policy as $\pi(a|s) \propto Q(s,a)$. Specifically, we employ the maximum-entropy variant of CQL and replace the sampled log-integral-exp with a log-sum-exp over all discrete actions. This allows us to compute the CQL conservatism loss exactly, without having to rely on samples from the policy to estimate the integral of the Q-function. Our overall objective for SAQ-CQL is therefore

$$\min_\theta\; \frac{1}{2}\,\mathbb{E}_{s,a,s'\sim\mathcal{D},\; a'\sim\pi}\Big[\big(Q_\theta(s,a) - r - \gamma\bar{Q}(s',a')\big)^2\Big] + \alpha\,\mathbb{E}_{s,a\sim\mathcal{D}}\Big[\log\sum_i \exp(Q_\theta(s,\hat{a}_i)) - Q_\theta(s,\hat{a})\Big].$$

We note that in this specific variant of discrete CQL, the conservative penalty term $\mathbb{E}_{s,a\sim\mathcal{D}}[\log\sum_i \exp(Q_\theta(s,\hat{a}_i)) - Q_\theta(s,\hat{a})]$ is exactly equivalent to the discrete negative log-likelihood behavioral cloning loss:

$$\mathbb{E}_{s,a\sim\mathcal{D}}\Big[\log\sum_i \exp(Q_\theta(s,\hat{a}_i)) - Q_\theta(s,\hat{a})\Big] = -\mathbb{E}_{s,a\sim\mathcal{D}}\left[\log\frac{\exp(Q_\theta(s,\hat{a}))}{\sum_i \exp(Q_\theta(s,\hat{a}_i))}\right] = -\mathbb{E}_{s,a\sim\mathcal{D}}\big[\log\pi_\theta(\hat{a}|s)\big]. \quad (8)$$
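Because the action space is discrete, the conservatism penalty above is an exact log-sum-exp rather than a sampled estimate. A minimal PyTorch-style sketch (names and shapes are illustrative):

```python
import torch

def saq_cql_penalty(q_all, a_idx):
    """Exact discrete CQL penalty: E[logsumexp_i Q(s, a_i) - Q(s, a_data)].

    q_all: Q-values for every discrete action, shape (B, K)
    a_idx: indices of the dataset (VQ-encoded) actions, shape (B,)
    """
    lse = torch.logsumexp(q_all, dim=1)                      # log sum_i exp(Q(s, a_i))
    q_data = q_all.gather(1, a_idx.unsqueeze(1)).squeeze(1)  # Q(s, a_hat)
    # By Eq. 8 this equals the softmax BC negative log-likelihood, i.e.
    # torch.nn.functional.cross_entropy(q_all, a_idx).
    return (lse - q_data).mean()
```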
SAQ-IQL. For IQL, we follow the original problem formulation in Kostrikov et al. [4], then derive its closed-form solution in the discrete case. We denote the advantage as $A^\pi(s,\hat{a}) = Q^\pi(s,\hat{a}) - V^\pi(s)$ for a given policy $\pi$, which can be obtained by the same procedure detailed in Kostrikov et al. [4]. The policy extraction step in the IQL objective is to find a policy $\pi^\star$ that maximizes $A^\pi(s,\hat{a})$ while staying close to the behavior policy $\pi_\beta$:

$$\pi^\star = \arg\max\; \mathbb{E}_{\hat{a}\sim\pi(\cdot|s)}\big[A^\pi(s,\hat{a})\big] \quad \text{s.t.}\quad D_{\mathrm{KL}}(\pi(\cdot|s),\, \pi_\beta(\cdot|s)) \le \varepsilon. \quad (9)$$

We can solve the constrained optimization in Eq. 9 using the Lagrangian method, and the solution is given by:

$$\pi^\star(\hat{a}|s) \propto \exp\left(\frac{1}{\lambda}A^\pi(s,\hat{a}) + \log\pi_\beta(\hat{a}|s)\right), \quad (10)$$

where $\lambda$ is the Lagrange multiplier, which controls the amount of constraint deviation. We refer readers to Appendix A for the detailed derivation. In practice, we can obtain $\pi_\beta$ by training an additional BC policy on the behavior dataset.
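In the discrete case, the closed-form policy of Eq. 10 is simply a softmax over the K codes; a minimal sketch, where the behavior log-probabilities come from the separately trained BC policy mentioned above:

```python
import torch

def saq_iql_policy(advantages, behavior_log_probs, lam):
    """Closed-form SAQ-IQL policy: pi*(a|s) proportional to
    exp(A(s,a)/lam + log pi_beta(a|s)).

    advantages:         A(s, a_i) for all K discrete actions, shape (B, K)
    behavior_log_probs: log pi_beta(a_i|s) from a BC model, shape (B, K)
    lam:                Lagrange multiplier controlling constraint deviation
    """
    logits = advantages / lam + behavior_log_probs
    return torch.softmax(logits, dim=-1)  # normalization plays the role of 1/z(s)
```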
As a result, the training process (middle)is significantly more stable with SAQ, which we expect should make the algorithm easier to use,simplifying checkpoint selection and tuning.5 ExperimentsOur experiments compare three continuous-action offline RL methods to their discretized variantsinstantiated with SAQ, both on standard offline RL benchmarks and the Robomimic robotic manipu-lation environment [ 6]. We combine SAQ with: CQL [ 3], IQL [ 5], and BRAC [ 1]. We first evaluateon the D4RL [ 35] suite of tasks, and then use the Robomimic tasks [ 6], which prior work has foundto present a particular challenge for offline RL methods. Appendix B presents a detailed analysis andablation study of SAQ to understand how it affects offline RL training, which we encourage readers toexamine for a more detailed study of the method. We also present a comparison with Aquadem [ 30]in Appendix C.5.1 D4RL Benchmark EvaluationsWe present the results on the D4RL benchmark suite [ 34] in Table 5, with illustrations of the evaluationdomains shown in Figure 3. Some of the benchmark tasks consist of narrow data distributions, while6Figure 3: D4RL benchmark tasks: locomotion, antmaze, adroit and kitchen.Figure 4: Robomimic tasks: lift ,can,square ,tool-hang ,transport . Among thesetasks, square features multi-modal behavior, tool-hang requires high-precision manipulation,transport is particularly challenging because its long-horizon complex multi-modal nature.others contain high-coverage data. For example, the “kitchen-” domains consist of data from experthuman teleoperators controlling a robotic arm to perform a variety of manipulation skills, whichmust then be sequenced and recombined by the algorithm to solve specific tasks. The “-expert”and“-medium-expert” datasets for the locomotion tasks also contain narrow expert data, possiblyin combination with broad but suboptimal data, with the “-medium-expert” version presenting aparticular challenge in terms of picking out the narrow but high-performing expert mode from thenoisier overall distribution. The “adroit” dexterous manipulation tasks also contain narrow expertdemonstration data from humans controlling the simulated robotic hand with a data glove. Thenarrow datasets are particularly challenging for prior continuous-action methods, as the narrowbehavior policies exacerbate challenges due to imperfect approximations for policy constraints andconservative regularizers. This is particularly pronounced for high-dimensional domains, such as the24-DoF “adroit” robotic hand, where methods that require integrals or expectations over the actionspace (such as the regularizer in CQL) must sample and average over a high-dimensional action space.We see that for each algorithm (BRAC, IQL, and CQL), the version of each algorithm discretized withSAQ improves with respect to the average score in each of the domain types (locomotion, antmaze,adroit, and kitchen). Although in some cases the improvement is small, in other cases it is verysignificant, particularly for narrow-data adroit and kitchen tasks, and the “-medium-expert” versionsof the halfcheetah and hopper task. 
We hypothesize that SAQ performs well in these domains becausethe discretization can capture the individual (narrow) modes, while the discrete-action RL algorithmcan then select from among these modes to attain the best performance.While SAQ can lead to significant improvements over the continuous-action counterpart of eachmethod, the main benefit of SAQ is not in raw performance, but in terms of the simplification of thedownstream RL problem. Our experiments provide an “apples to apples” comparison for each RLmethod, comparing its discrete (SAQ) version to its original continuous formulation, but of course,other offline RL methods might perform better on some tasks, and we do not aim to show that SAQachieves state-of-the-art performance over all possible offline RL techniques. However, we expectthat the simple discrete-action RL problem that is presented via our proposed discretization will inthe long run make it easier to scale offline RL to harder and more complex problem domains, andprovide a valuable tool to the RL practitioner. RL methods tend to be complex and difficult to tune,so any improvement that simplifies the RL part of the problem is likely to improve practical utility.5.2 Robomimic EvaluationRobomimic [ 6] is a set of environments and demonstration datasets that require controlling a 7-DoFrobot arm to perform a variety of manipulation tasks. Prior work [ 6] reported that these environmentsare particularly challenging for offline RL algorithms, with the best-reported results obtained by7Task BRAC SAQ-BRAC IQL SAQ-IQL CQL SAQ-CQL BC SAQ-BClocomotion avg 64.16 66.98 75.14 76.67 75.6 77.35 38.86 60.77antmaze avg 18.84 29.5 55.34 55.92 55.15 56.87 0 0adroit avg 23.86 40.09 20.32 22.82 10.58 21.37 12.35 21kitchen avg 15.78 36.11 50.83 58.61 48.67 65.89 17.22 57Table 1: Averaged normalized scores across locomotion, Adroit, AntMaze, and kitchen domains fromD4RL. The version of each algorithm discretized with SAQ generally improves over the averagescore of the original algorithm in each class of domains, with particularly pronounced improvementson narrow dataset domains such as adroit and kitchen. The full results are deferred to Appendix CTask IQL SAQ-IQLCQL RobomimicCQLSAQ-CQLBC RobomimicBCSAQ-BClift 58 90 64.2 92.7 90.8 59.47 100 90.13can 33.73 68 19.6 38 71.2 31.73 95.3 66.4square 26.93 46.67 0 5.3 44.27 19.33 78.7 45.33tool-hang 2.67 28 0 0 3.87 1.87 17.3 3.47transport 0 2 0 0 3.47 0.27 29.3 3.2average 24.27 46.93 16.76 27.2 42.72 22.53 64.12 41.71Table 2: Average success rates on Robomimic tasks using the Proficient Human dataset for each task.simpler imitation learning methods. Using the “proficient human” (PH) datasets, which each consistof 200 successful trajectories with a binary reward, we trained policies with the continuous version ofIQL, CQL, and BC, as well as discrete-action policies using the discretization from SAQ. The results,presented in Table 2, show that SAQ indeed improves continuous offline RL by large margins on allthe tasks considered. 
It’s worthwhile to mention that our continuous BC results don’t exactly matchthe original paper [ 6], as the authors point out, they adopt a Gaussian Mixture Model (GMM) forthe policy class and extensively optimize parameters then perform checkpoint selection, whereas wedirectly train unimodal BC policies.While the D4RL results suggest that discretization with SAQ can consistently improve the perfor-mance of each offline RL algorithm, these results further suggest that domains that are especiallychallenging for offline RL, such as narrow demonstration datasets, are particularly amenable forSAQ, where it can enable offline RL methods that were previously outperformed by simple imitationlearning to attain significantly better results. This suggests that SAQ can be a particularly effectivetool in robotic learning, where narrow demonstration datasets might be commonplace.6 Discussion, Limitations, and Future WorkWe presented a method for state-conditioned action quantization to improve continuous offlineRL algorithms. Our approach allows offline RL methods to enforce policy constraints or valueconservatism more exactly as compared to their continuous counterparts. This is particularly relevantand important in the robotic learning setting where we usually assume narrow datasets of expertdemonstrations where function approximation errors get even more exaggerated. However ourapproach does have a number of limitations. First, we require sufficient state-action coverage tobe able to perform state-conditioned action quantization. This is a common assumption in offlineRL and is true in many curated datasets; however, it might be challenging to obtain such datasetsin real-world robotic settings. That said, our approach is able to achieve impressive performance inthe Robomimic environment which largely composed of real human teleoperation data. Second, it isnot clear the best way to adopt our method in the online finetuning setting, where new data mightinvalidate the learned discretization. Adaptively adjusting the discretization during online trainingcould be a valuable topic to explore in future work.8AcknowledgmentsThis research was partly supported through the Office of Naval Research through N00014-21-1-2838 and N00014-20-1-2383. We acknowledge computing support from the Berkeley ResearchComputing(BRC) program and the NSF Cloudbank program.References[1]Y . Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. arXivpreprint arXiv:1911.11361 , 2019.[2]A. Kumar, J. Fu, M. Soh, G. Tucker, and S. Levine. Stabilizing off-policy q-learning viabootstrapping error reduction. In Advances in Neural Information Processing Systems , pages11761–11771, 2019.[3]A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforcementlearning. arXiv preprint arXiv:2006.04779 , 2020.[4]I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning.arXiv preprint arXiv:2110.06169 , 2021.[5]I. Kostrikov, J. Tompson, R. Fergus, and O. Nachum. Offline reinforcement learning with fisherdivergence critic regularization. arXiv preprint arXiv:2103.08050 , 2021.[6]A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, L. Fei-Fei, S. Savarese,Y . Zhu, and R. Martín-Martín. What matters in learning from offline human demonstrationsfor robot manipulation. In 5th Annual Conference on Robot Learning , 2021. URL https://openreview.net/forum?id=JrsfBJtDFdI .[7]S. Fujimoto, D. Meger, and D. Precup. 
[7] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. In Proceedings of the 36th International Conference on Machine Learning, 2019.
[8] N. Jaques, A. Ghandeharioun, J. H. Shen, C. Ferguson, A. Lapedriza, N. Jones, S. Gu, and R. Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
[9] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.
[10] A. Nair, M. Dalal, A. Gupta, and S. Levine. Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020.
[11] Z. Wang, A. Novikov, K. Żołna, J. T. Springenberg, S. Reed, B. Shahriari, N. Siegel, J. Merel, C. Gulcehre, N. Heess, et al. Critic regularized regression. arXiv preprint arXiv:2006.15134, 2020.
[12] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. arXiv preprint arXiv:2106.06860, 2021.
[13] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational space control. In International Conference on Machine Learning, 2007.
[14] N. Y. Siegel, J. T. Springenberg, F. Berkenkamp, A. Abdolmaleki, M. Neunert, T. Lampe, R. Hafner, and M. Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
[15] T. Xie, C.-A. Cheng, N. Jiang, P. Mineiro, and A. Agarwal. Bellman-consistent pessimism for offline reinforcement learning. Advances in Neural Information Processing Systems, 34, 2021.
[16] O. Nachum, B. Dai, I. Kostrikov, Y. Chow, L. Li, and D. Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
[17] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based offline policy optimization. arXiv preprint arXiv:2005.13239, 2020.
[18] T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, and C. Finn. COMBO: Conservative offline model-based policy optimization. arXiv preprint arXiv:2102.08363, 2021.
[19] Y. Jin, Z. Yang, and Z. Wang. Is pessimism provably efficient for offline RL? arXiv preprint arXiv:2012.15085, 2020.
[20] S. Rezaeifar, R. Dadashi, N. Vieillard, L. Hussenot, O. Bachem, O. Pietquin, and M. Geist. Offline reinforcement learning as anti-exploration. arXiv preprint arXiv:2106.06431, 2021.
[21] A. Tavakoli, F. Pardo, and P. Kormushev. Action branching architectures for deep reinforcement learning. 2018.
[22] N. Vieillard, M. Andrychowicz, A. Raichuk, O. Pietquin, and M. Geist. Implicitly regularized RL with implicit Q-values. In Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, 2022.
[23] Y. Tang and S. Agrawal. Discretizing continuous action space for on-policy optimization. 2019.
[24] Y. Chebotar, Q. Vuong, K. Hausman, F. Xia, Y. Lu, A. Irpan, A. Kumar, T. Yu, A. Herzog, K. Pertsch, K. Gopalakrishnan, J. Ibarz, O. Nachum, S. A. Sontakke, G. Salazar, H. T. Tran, J. Peralta, C. Tan, D. Manjunath, J. Singh, B. Zitkovich, T. Jackson, K. Rao, C. Finn, and S. Levine. Q-Transformer: Scalable offline reinforcement learning via autoregressive Q-functions. In 7th Annual Conference on Robot Learning, 2023. URL https://openreview.net/forum?id=0I3su3mkuL.
[25] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning k modes with one stone. In Thirty-Sixth Conference on Neural Information Processing Systems, 2022.
[26] T. Seyde, P. Werner, W. Schwarting, I. Gilitschenski, M. Riedmiller, D. Rus, and M. Wulfmeier. Solving continuous control via Q-learning. arXiv, 2022.
[27] L. Metz, J. Ibarz, N. Jaitly, and J. Davidson. Discrete sequential prediction of continuous actions for deep RL. arXiv, 2017.
[28] C. Tessler, G. Tennenholtz, and S. Mannor. Distributional policy optimization: An alternative approach for continuous control. In Advances in Neural Information Processing Systems, 2019.
[29] A. Tavakoli, M. Fatemi, and P. Kormushev. Learning to represent action values as a hypergraph on the action vertices. In International Conference on Learning Representations, 2021.
[30] R. Dadashi, L. Hussenot, D. Vincent, S. Girgin, A. Raichuk, M. Geist, and O. Pietquin. Continuous control with action quantization from demonstrations. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 4537-4557, 17-23 Jul 2022.
[31] S. Fujimoto, D. Meger, and D. Precup. Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900, 2018.
[32] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
[33] A. Van den Oord, O. Vinyals, and K. Kavukcuoglu. Neural discrete representation learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[34] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
[35] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-driven reinforcement learning. In arXiv, 2020. URL https://arxiv.org/pdf/2004.07219.

Appendix

A Detailed SAQ-IQL derivation
In this section, we describe in detail the derivation of the SAQ-IQL algorithm. We start with the original optimization objective from IQL, which employs an explicit policy constraint with respect to a behavior policy:

$$\pi^* = \arg\max_{\pi \in \Pi} \mathbb{E}_{a \sim \pi(\cdot|s)}\left[A^{\pi}(s, a)\right] \quad \text{s.t.} \quad D_{\mathrm{KL}}\big(\pi(\cdot|s) \,\|\, \pi_{\beta}(\cdot|s)\big) \le \varepsilon, \qquad \sum_{a} \pi(a|s) = 1 \tag{11}$$

We can write down the Lagrangian of (11) and expand it:

$$\mathcal{L}(\pi, \lambda, \alpha) = \mathbb{E}_{a \sim \pi(\cdot|s)}\left[A^{\pi}(s, a)\right] + \lambda\Big(\varepsilon - D_{\mathrm{KL}}\big(\pi(\cdot|s) \| \pi_{\beta}(\cdot|s)\big)\Big) + \alpha\Big(1 - \sum_{a} \pi(a|s)\Big)$$

$$\mathcal{L}(\pi, \lambda, \alpha) = \sum_{a} \pi(a|s)\, A^{\pi}(s, a) + \lambda\Big(\varepsilon - \sum_{a} \pi(a|s) \log\frac{\pi(a|s)}{\pi_{\beta}(a|s)}\Big) + \alpha\Big(1 - \sum_{a} \pi(a|s)\Big)$$

Differentiating $\mathcal{L}(\pi, \lambda, \alpha)$ with respect to $\pi(a|s)$ results in

$$\frac{\partial \mathcal{L}(\pi, \lambda, \alpha)}{\partial \pi} = A^{\pi}(s, a) - \lambda\big(\log \pi(a|s) - \log \pi_{\beta}(a|s) + 1\big) - \alpha$$

We divide the differentiated Lagrangian by the constant $\lambda$:

$$\frac{\partial \mathcal{L}(\pi, \lambda, \alpha)}{\partial \pi} \cdot \frac{1}{\lambda} = \frac{A^{\pi}(s, a)}{\lambda} + \log \pi_{\beta}(a|s) - \log \pi(a|s) - 1 - \frac{\alpha}{\lambda}$$

and set the resulting expression to 0 to arrive at $\pi^*(a|s)$:

$$\pi^*(a|s) = \exp\left[\frac{A^{\pi}(s, a)}{\lambda} + \log \pi_{\beta}(a|s) - 1 - \frac{\alpha}{\lambda}\right] = \frac{1}{z(s)} \exp\left[\frac{A^{\pi}(s, a)}{\lambda} + \log \pi_{\beta}(a|s)\right]$$

where $z(s)$ is a normalizing term.

B Ablation Studies for SAQ

| Task | SAQ-CQL (No State) | SAQ-CQL (State) | SAQ-IQL (No State) | SAQ-IQL (State) | SAQ-BRAC (No State) | SAQ-BRAC (State) |
|---|---|---|---|---|---|---|
| halfcheetah-medium-replay-v2 | 1.56 | 47.07 | 1.56 | 36.2 | -1.6 | 40.25 |
| hopper-medium-replay-v2 | 15.24 | 94.73 | 11.74 | 59.43 | 21.56 | 68.87 |
| walker2d-medium-replay-v2 | 4.67 | 81.72 | 6.89 | 45.64 | -0.25 | 53.52 |
| average | 7.16 | 74.51 | 6.73 | 47.09 | 6.57 | 54.21 |

Table 3: Comparing the performance of state-conditioned action discretization against unconditioned action discretization with CQL. The state-conditioned discretization scheme significantly outperforms the unconditioned one, since unconditioned action discretization cannot compress the action space into a small number of bins.
Comparing discretization methods. To understand the importance of the state-conditioned discretization method, we compare it against a naive discretization method in which the VQ-VAE discretizes the actions without conditioning on the states, and present the results in Table 3. We see that state conditioning is indeed highly important for compressing the action space into a small number of bins, resulting in much higher performance than a state-agnostic discretization scheme (a sketch of the conditioning appears at the end of this section).

Codebook size robustness. One key design choice we make in this paper is the use of a VQ-VAE; a natural question is then our method's robustness to the codebook size of the VQ-VAE, since this can be crucial in determining the quality of the resulting discretization. To this end, we empirically experiment with varying codebook sizes across all three algorithms. We present the results in Table 4 and find that our method's performance is consistent across codebook sizes, which further supports the practical utility of adopting our method.

| Codebook Size | 16 | 32 | 64 | 128 |
|---|---|---|---|---|
| SAQ-CQL | 108.7 | 111.6 | 110.8 | 103.2 |
| SAQ-IQL | 106.9 | 104.8 | 104.2 | 94.42 |
| SAQ-BRAC | 106.5 | 108.3 | 105.7 | 107 |

Table 4: Comparing the performance of SAQ-IQL, SAQ-CQL, and SAQ-BRAC on hopper-expert-v2 while varying the codebook size. The discretized algorithms are largely invariant to changes in codebook size.

Figure 5: Increasing policy constraint levels on the hopper-expert-v2 environment from Gym locomotion.

Controlling policy constraint levels. As stated in Sec. 4.1, one key premise of SAQ is that we can enforce the policy constraint or value conservatism exactly, which bears on the practical performance of offline RL methods. To further verify this hypothesis empirically, we pick one task from the Gym locomotion suite and vary the weight coefficients for the policy constraint or value conservatism, observing the resulting performance. As shown in Fig. 5, weak constraint enforcement initially leads to poor performance; performance then ramps up as we increase the coefficients and finally converges for sufficiently large coefficients. This observation confirms our conjecture in the paper.
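The following is a minimal sketch of the state-conditioned discretization compared in Table 3, assuming a standard VQ-VAE-style encoder/decoder [33]; the unconditioned variant would simply omit the state from the encoder and decoder inputs. All module names and sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StateConditionedVQ(nn.Module):
    """Sketch: quantize actions into codebook indices, conditioned on the state."""

    def __init__(self, state_dim, action_dim, num_codes=64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim))
        self.codebook = nn.Embedding(num_codes, latent_dim)

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        # Nearest-neighbor lookup in the codebook (the quantization step).
        dist = (z.unsqueeze(1) - self.codebook.weight.unsqueeze(0)).pow(2).sum(-1)
        code = dist.argmin(dim=-1)                 # (batch,) discrete action index
        z_q = self.codebook(code)
        # Straight-through estimator so encoder gradients flow through z_q.
        z_q = z + (z_q - z).detach()
        recon = self.decoder(torch.cat([state, z_q], dim=-1))
        return recon, code, z, z_q
```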
C Additional Experiment Results

| Task | BRAC | SAQ-BRAC | IQL | Aquadem-IQL | SAQ-IQL | CQL | Aquadem-CQL | SAQ-CQL | BC | Aquadem-BC | SAQ-BC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| halfcheetah-expert-v2 | 67.99 | 64 | 94.78 | 92.73 | 90.3 | 43.9 | 92.66 | 92.65 | 91.53 | 92.69 | 108.4 |
| halfcheetah-medium-expert-v2 | 73.07 | 90.31 | 88.06 | 80.75 | 89.05 | 50.32 | 88.3 | 91.69 | 48.81 | 74.17 | 57.65 |
| halfcheetah-medium-replay-v2 | -3.3 | 35.28 | 44.24 | 35.12 | 36.2 | 47.07 | 40.5 | 40.25 | 35.16 | 35.62 | 3.76 |
| halfcheetah-medium-v2 | 47.08 | 43 | 47.3 | 42.39 | 42.52 | 48.56 | 44.5 | 43.89 | 42.32 | 41.93 | 47.02 |
| hopper-expert-v2 | 67.34 | 110 | 108.8 | 109 | 100.3 | 100.5 | 110.3 | 109.9 | 98.8 | 110.1 | 92.63 |
| hopper-medium-expert-v2 | 50.74 | 98.75 | 32.85 | 54.57 | 81.56 | 67.08 | 86.7 | 98.47 | 44.48 | 57.9 | 58.94 |
| hopper-medium-replay-v2 | 36.81 | 32.54 | 61.75 | 38.05 | 59.43 | 94.73 | 90.3 | 68.87 | 17.96 | 20.94 | 32.93 |
| hopper-medium-v2 | 57.06 | 54.16 | 54.3 | 49.2 | 50.53 | 70.32 | 58.5 | 38.25 | 50.31 | 52.98 | 42.47 |
| walker2d-expert-v2 | 108.4 | 107.5 | 110.12 | 108.5 | 107.6 | 109.2 | 108.2 | 107.3 | 108.3 | 108.3 | 104.5 |
| walker2d-medium-expert-v2 | 109 | 102.9 | 109.56 | 108 | 100.3 | 110.7 | 108.1 | 108.6 | 91.79 | 83.55 | 103.2 |
| walker2d-medium-replay-v2 | 73.72 | 0.4 | 68.71 | 32.66 | 45.64 | 81.72 | 80.8 | 53.52 | 12.98 | 27.86 | 15.27 |
| walker2d-medium-v2 | 81.94 | 64.8 | 81.25 | 68.06 | 68 | 83.11 | 82.1 | 74.77 | 70.18 | 27.86 | 62.52 |
| locomotion average | 64.16 | 66.98 | 75.14 | 71.01 | 76.67 | 75.6 | 82.58 | 77.35 | 59.39 | 61.16 | 60.77 |
| antmaze-medium-diverse-v2 | 26 | 47.6 | 76.67 | 40 | 68.33 | 72.75 | 22.67 | 75.47 | 0 | 0 | 0 |
| antmaze-medium-play-v2 | 48.67 | 56.93 | 78.67 | 46 | 74.33 | 67.04 | 30.67 | 68.67 | 0 | 1.3 | 0 |
| antmaze-large-diverse-v2 | 0.66 | 9.73 | 31.67 | 33.67 | 41 | 35.62 | 22.67 | 36 | 0 | 0 | 0 |
| antmaze-large-play-v2 | 0 | 3.73 | 34.33 | 14.33 | 40 | 45.18 | 25.33 | 47.33 | 0 | 0 | 0 |
| antmaze average | 18.84 | 29.5 | 55.34 | 33.5 | 55.92 | 55.15 | 25.34 | 56.87 | 0 | 0.33 | 0 |
| door-human-v0 | -1.01 | 35.42 | 1.79 | 2.15 | 9.26 | 0.84 | 6.09 | 2.12 | 3.29 | 3.24 | 9.28 |
| hammer-human-v0 | -1.42 | 20.52 | 1.41 | 1.56 | 1.57 | 0.27 | 2.64 | 0.6 | 0.8 | 2.09 | 1.38 |
| pen-human-v0 | 98.15 | 98.41 | 69.69 | 73.91 | 80.25 | 41.24 | 85.66 | 82.73 | 45.28 | 72.12 | 73.3 |
| relocate-human-v0 | -0.28 | 6 | 8.38 | 0.55 | 0.2 | -0.05 | 0.28 | 0.02 | 0.04 | 0.48 | 0.02 |
| adroit average | 23.86 | 40.09 | 20.32 | 19.54 | 22.82 | 10.58 | 23.67 | 21.37 | 12.35 | 19.48 | 21 |
| kitchen-mixed-v0 | 10.33 | 53.33 | 48.92 | 56.94 | 52.92 | 62 | 0 | 57.67 | 37.9 | 58.89 | 34 |
| kitchen-complete-v0 | 31.67 | 10 | 66 | 70.92 | 76.76 | 14 | 21.5 | 47.67 | 39.13 | 68 | 90.33 |
| kitchen-partial-v0 | 5.33 | 45 | 37.58 | 44.83 | 46.25 | 70 | 50 | 92.33 | 37.33 | 43.56 | 46.67 |
| kitchen average | 15.78 | 36.11 | 50.83 | 57.56 | 58.61 | 48.67 | 23.83 | 65.89 | 38.12 | 56.82 | 57 |

Table 5: Comparison of our method and Aquadem on various D4RL tasks. SAQ generally improves over its continuous counterpart, especially in the narrow dataset setting, and also outperforms Aquadem on most tasks.
 |
zUiH8UUYDo | Scalable Deep Kernel Gaussian Process for Vehicle Dynamics in Autonomous Racing
Jingyun Ning, Department of Electrical and Computer Engineering, University of Virginia, United States, jn2ne@virginia.edu
Madhur Behl, Department of Computer Science, University of Virginia, United States, madhur.behl@virginia.edu

Abstract: Autonomous racing presents a challenging environment for testing the limits of autonomous vehicle technology. Accurately modeling the vehicle dynamics (with all forces and tires) is critical for high-speed racing, but it remains a difficult task and requires an intricate balance between run-time computational demands and modeling complexity. Researchers have proposed utilizing learning-based methods such as Gaussian Processes (GP) for learning vehicle dynamics. However, current approaches often oversimplify the modeling process or apply strong assumptions, leading to unrealistic results that cannot translate to real-world settings. In this paper, we propose DKL-SKIP, a method combining deep kernel learning (DKL) with SKIP-GP, for vehicle dynamics modeling. Our approach outperforms standard GP methods and the Numerical algorithms for Subspace State Space System Identification technique (N4SID) in terms of prediction accuracy. In addition to evaluating DKL-SKIP on real-world data, we also evaluate its performance using the high-fidelity AutoVerse autonomous racing simulator. The results highlight the potential of DKL-SKIP as a promising tool for modeling complex vehicle dynamics in both real-world and simulated environments.

Keywords: Gaussian Processes, Vehicle Dynamics, Autonomous Vehicle, Deep Kernel Learning

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction
The rising popularity of self-driving cars has inspired the growth of autonomous racing research (Betz et al., 2022). Researchers are developing algorithms for high-performance race vehicles that aim to operate autonomously at the edge of the vehicle's limits. Competitions in autonomous racing have been held not only in simulation (Hartmann et al., 2021; Babu and Behl, 2020), but also on prototypes ranging from 1:43 scale RC cars (Carrau et al., 2016), to 1/10 scale (O'Kelly et al., 2019), to full-size Indy racecars (Wischnewski et al., 2022; Jung et al., 2023). To optimize racecars' performance and control for autonomous racing, it is essential to have a model that can precisely predict the vehicle's dynamics. Existing models (Althoff et al., 2017) range from simple point-mass and single-track models to complex multi-body vehicle models. However, constructing high-fidelity physics-based vehicle dynamics models is difficult due to the need to capture the nonlinear behavior of components such as tires, suspension, and steering systems. Moreover, obtaining precise parameter values for these components can be expensive and time-consuming. For example, obtaining tire parameters such as tire stiffness, slip angle, and slip ratio involves a combination of tire testing experiments on test rigs and mathematical models such as the Pacejka tire model (Pacejka, 2005). Uncertain factors such as road conditions and driver inputs, along with complex subsystems like suspension and steering systems, further complicate the modeling process. Consequently, researchers have shown interest in using learning-based approaches to address model-output discrepancies (Xing et al., 2020; Hermansdorfer et al., 2020). Specifically, Van Niekerk et al.
(2017) proposed Gaussian Process (GP) models for vehicle dynamics learning, which has been further explored by other researchers (Jain et al., 2020; Kabzan et al., 2019; Hewing et al., 2018). However, existing work faces limitations, including scalability issues with GP models, reliance on simplified simulations, arbitrary GP kernel choices with no exploration of kernel learning, and the use of unrealistic race tracks.

This paper proposes DKL-SKIP, a solution to address the mismatch between observations and state predictions obtained with an extended-kinematic single-track model for vehicle dynamics modeling. The main contributions of our paper are as follows:

1. We present DKL-SKIP, a new method that integrates deep kernel learning with the scalable SKIP-GP approach for vehicle dynamics modeling. This combination not only overcomes the scalability challenges associated with traditional GPs but also enhances robustness to noise and irrelevant features.
2. We evaluate DKL-SKIP on datasets collected from a full-scale autonomous Indy racecar in both real-world and simulated racing environments. We demonstrate our method's capability to accurately capture the nonlinear dynamics of a racecar, especially at the limits of its performance envelope.

2 Related Work

2.1 Gaussian processes for vehicle dynamics modeling
Hewing et al. (2018) pioneered the use of GP models to learn vehicle dynamics in a miniature autonomous racecar, demonstrating their effectiveness in capturing lateral dynamics and developing a model predictive controller. Building on this, Kabzan et al. (2019) applied GP models to learn vehicle dynamics in a driverless electric racecar and developed a contouring model predictive control approach. Additionally, Jain et al. (2020) utilized exact GP models for vehicle dynamics learning in autonomous racing simulations at both 1:43 and 1:10 scales.

However, these methods have the following limitations: (i) the conducted experiments were limited to a small-scale or Formula Student racecar (Kabzan et al., 2019), unlike our use of a full-sized, fully autonomous racecar; (ii) they lack real-world track layouts, limiting model generalizability; Ning and Behl (2023) have demonstrated that racetrack configurations and choices highly influence the accuracy of GP-based models, and in this paper we demonstrate that our method works on data obtained from a real racecar running on real racetracks; (iii) the exact GP used in Bayesrace is not scalable to the volume of real-world data samples; in this paper, the training data is 50 times larger, rendering the Bayesrace method intractable; (iv) previous work lacks exploration of kernel learning, with authors arbitrarily using kernel functions such as the RBF or Matérn kernel, while research indicates (Ning and Behl, 2023) that the choice of kernel can substantially impact GP model performance. Therefore, in this work, we implement a deep neural network (DNN) for kernel learning in the GP models.

2.2 Gaussian processes with deep learning
The idea of deep kernel learning, which combines deep learning with Gaussian processes, was introduced by Wilson et al. (2016), who demonstrated the potential of DKL-GP to achieve state-of-the-art results on several benchmarks. Bradshaw et al. (2017) investigated the robustness of DKL-GP models against adversarial examples and transfer learning scenarios.
Although the DKL-GP method has been employed for robotics and control tasks (Lee et al., 2022), to the best of our knowledge, this paper is the first work integrating GPs with deep learning approaches for vehicle dynamics modeling. Specifically, we propose DKL-SKIP to learn the dynamics of a full-sized, real-world racecar by combining a deep kernel with sparse GPs.

Notation: x, y: vehicle position; δ: steering angle [rad]; ψ: heading angle [rad]; ω: yaw rate [rad/s]; lf, lr: distance from C.O.G. to front/rear axle [m]; vx: longitudinal velocity [m/s]; vy: lateral velocity [m/s]; ax: longitudinal acceleration [m/s²]; Ffy, Fry: tire lateral forces; Fx: longitudinal force.

Kinematic single-track model:
ẋ = v cos ψ, ẏ = v sin ψ, δ̇ = Δδ, ψ̇ = (v / (lr + lf)) tan δ, v̇ = ax

Dynamic single-track model:
ẋ = vx cos ψ − vy sin ψ, ẏ = vx sin ψ + vy cos ψ, δ̇ = Δδ, ψ̇ = ω,
v̇x = (1/m)(Fx − Ffy sin δ + m vy ω), v̇y = (1/m)(Fry + Ffy cos δ − m vx ω),
ω̇ = (1/Iz)(lf Ffy cos δ − lr Fry)

Extended kinematic (E-Kin) model:
ẋ = vx cos ψ − vy sin ψ, ẏ = vx sin ψ + vy cos ψ, δ̇ = Δδ, ψ̇ = ω,
v̇x = ax, v̇y = (lr / (lr + lf))(ax ψ + vx ω), ω̇ = (ax / (lr + lf)) sin δ

Pacejka tire model:
Ffy = D sin(C arctan(B αf − E(B αf − arctan(B αf))))
Fry = D sin(C arctan(B αr − E(B αr − arctan(B αr))))

Table 1: Mathematical descriptions of different vehicle dynamics models: (1) the kinematic single-track model, (2) the dynamic single-track model, and (3) the extended-kinematic model used in this paper.

3 Vehicle Dynamics Background
Kinematic and dynamic single-track models are widely used based on a good trade-off between simplicity and accuracy. Single-track models simplify the vehicle by lumping the front and rear wheel pairs into one wheel each, similar to a bicycle configuration (Althoff et al., 2017). The differential equations for the kinematic model are shown in Table 1. x and y are the coordinates of the vehicle's center of gravity (C.O.G.) with respect to the inertial frame. Velocity at the C.O.G. is denoted by v, ψ is the vehicle's inertial heading, and ax is the longitudinal acceleration. δ is the steering angle, i.e., the angle between the vehicle heading and the front wheel heading. The model states are {x, y, v, δ, ψ}, the inputs are acceleration and steering rate {ax, Δδ}, and the parameters are the distance between the front axle and the C.O.G., lf, and the distance between the rear axle and the C.O.G., lr.

Figure 1: Dynamic single-track vehicle model. Reference point: C.O.G.

Although easier to construct, kinematic models lack accuracy at higher speeds as they neglect tire slip, thereby overlooking crucial effects like understeer and oversteer (Rajamani, 2011). On the other hand, a dynamic single-track model (Fig. 1) takes into account the complex dynamics of a vehicle, including the effects of tire slip and the interaction between tire forces and vehicle motion (Hwan Jeon et al., 2011). It considers additional factors such as lateral acceleration, yaw rate, and tire forces, making it more accurate and suitable for high-speed maneuvers and scenarios where the vehicle operates close to its physical limits. The state equations are given in Table 1, where Fx is the longitudinal force and Ffy, Fry are the lateral forces at the front and rear axle, respectively. In addition to the states of the kinematic model, the dynamic model considers the lateral speed vy as well as the tire slip angle αr or αf, defined as the angle between the heading of the wheel and the velocity vector of the wheel, which is used to derive the tire lateral forces Ffy, Fry via the Pacejka equations (bottom rows of Table 1). However, the model coefficients B, C, D, E are expensive to obtain and need to be recalibrated for every new racetrack, which increases the cost of implementing such a dynamic model.
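For concreteness, the Pacejka lateral-force model quoted in Table 1 can be written as a small function. A minimal sketch in Python, assuming the B, C, D, E coefficients are already known (the example coefficient values are purely illustrative):

```python
import numpy as np

def pacejka_lateral_force(alpha, B, C, D, E):
    """Pacejka 'magic formula' lateral tire force for slip angle alpha [rad].

    B: stiffness factor, C: shape factor, D: peak value, E: curvature factor.
    """
    return D * np.sin(C * np.arctan(B * alpha - E * (B * alpha - np.arctan(B * alpha))))

# Example: front-axle lateral force over a sweep of slip angles.
alphas = np.linspace(-0.2, 0.2, 9)
F_fy = pacejka_lateral_force(alphas, B=10.0, C=1.9, D=1.0, E=0.97)
```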
Extended kinematic model. In this work, we use an extended kinematic single-track (E-kin) model, shown in Table 1, which preserves the simplicity of a kinematic model but has the same states as a dynamic model. The states of the E-kin and dynamic models differ, however, in the velocities and yaw rate, {v̇x, v̇y, ω̇}. Specifically, unlike the dynamic model, the E-kin model uses measurable variables, such as ax, ψ, δ, and two parameters, lf, lr, to approximate the longitudinal and lateral forces. The E-kin model therefore requires less calibration effort than a dynamic single-track model. On the other hand, these approximations cause the E-kin model to diverge from the real dynamics. Therefore, when using the E-kin model to estimate the vehicle dynamics, we employ GP models to capture the discrepancies between the E-kin model and the racecar observations.

4 Problem Statement
In this paper, our goal is to construct an extended kinematic single-track model for an autonomous racecar and use GPs to learn the mismatch between the E-kin model and the observed racecar dynamics. In particular, we use racecar state measurements, denoted by the set D := {d1, ..., dn} for n data samples, to capture the differences between the E-kin model and the observed racecar dynamics. Every data sample di consists of states si := {xi, yi, vxi, vyi, ωi, ψi, δi} and inputs ui := {axi, Δδi}. The E-kin model is represented as fEkin, and the model output at time step t is computed using the states and inputs from the racecar measurements, denoted fEkin(st, ut). Therefore, at time t, the model residual rt can be expressed as:

rt = dt+1 − fEkin(st, ut)    (1)

We then use GPs for model residual learning:

e(dt) = GP(dt) = rt + νt    (2)

where e(dt) is the GP function given data sample dt, and νt ~ N(0, 1) is additive Gaussian noise. By doing so, we can construct a corrected dynamics model fcorr, the combination of fEkin and the GP, to approximate the racecar's observed dynamics. For instance, at time t, the corrected model fcorr(st, ut) approximates the observed dynamics dt+1 as:

fcorr(st, ut) = fEkin(st, ut) + e(dt) ≈ dt+1    (3)

As shown in Table 1, the differences between the observations and the E-kin model outputs occur only in the velocity states vx, vy and the yaw rate ω. This implies that R = [0, 0, εvx, εvy, εω, 0, 0, 0, 0], where R := {r1, ..., rn−1} is the set of model residuals. We therefore learn three GP models, evx, evy, eω, to approximate the model residuals εvx, εvy, εω. In the next section, we describe the implementation of the DKL-SKIP method used to learn these GPs.
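A minimal sketch of how the residual targets of Eq. (1) could be assembled into a GP training set, assuming `f_ekin` implements one discrete step of the E-kin model and `data`, `states`, `inputs` stack the logged samples row-wise (all names are illustrative):

```python
import numpy as np

def build_residual_dataset(data, states, inputs, f_ekin):
    """Targets r_t = d_{t+1} - f_Ekin(s_t, u_t), per Eq. (1)."""
    X, R = [], []
    for t in range(len(data) - 1):
        pred = f_ekin(states[t], inputs[t])   # one-step E-kin prediction
        R.append(data[t + 1] - pred)          # residual between observation and model
        X.append(data[t])                     # GP input: full measurement vector d_t
    return np.asarray(X), np.asarray(R)
```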
5 DKL-SKIP
In this section, we describe the DKL-SKIP method, a combination of deep kernel learning and a scalable GP, SKIP-GP. This pairing retains the advantages of both DKL and SKIP-GP and allows us to capture the nonlinear relationships in the vehicle dynamics modeling mismatch.

5.1 Deep kernel learning
The goal of deep kernel learning is to learn a kernel function for GPs. This is achieved by applying a deep neural network for feature extraction before the data is fed into the GP, transforming the inputs into a higher-level representation. In a GP, a kernel function measures the similarity between inputs, which influences the predictions of the GP model. One of the most commonly used kernel functions in GPs is the RBF kernel:

$$k(d_i, d_j) = \sigma^2 \exp\left(-\frac{\|d_i - d_j\|^2}{2l^2}\right), \quad i, j \in \{1, \dots, n\} \tag{4}$$

where $d_i, d_j$ are input data samples (i.e., racecar state measurements in this paper), $\sigma^2$ is the variance, $\|d_i - d_j\|$ is the Euclidean distance between inputs, and $l$ is the length-scale. Further details on GPs can be found in Rasmussen (2003). In this paper, we transform a kernel $k(d_i, d_j; \theta)$ into a deep kernel function:

$$k(d_i, d_j; \theta) \rightarrow k\big(g(d_i, w), g(d_j, w) \mid \theta, w\big) \tag{5}$$

where $g(d, w)$ is a deep learning model, such as a deep neural network, $\theta$ denotes the kernel hyperparameters (e.g., the length-scale $l$ of the RBF kernel), and $w$ the DNN weights. The deep kernel hyperparameters, denoted $\gamma = \{\theta, w\}$, can be learned jointly by maximizing the log marginal likelihood. Moreover, the DKL acts as a feature extractor that maps each input sample $d$ to a lower-dimensional feature representation $\tilde{d}$. Hence, the transformation of a kernel into a deep kernel can be expressed as:

$$k(d_i, d_j; \theta) \rightarrow k(\tilde{d}_i, \tilde{d}_j; \gamma) \tag{6}$$

This enables the DKL to capture the most representative information in the input data while also reducing its dimensionality.

5.2 SKIP-GP
The kernel learning process of GPs involves the Cholesky decomposition of the kernel matrix $K_{DD}$, composed of kernel function evaluations over the $n$ data points $\{d_1, \dots, d_n\}$. In this paper, we use DKL to transform the input data into lower-dimensional features $\tilde{d}$, leading to a deep kernel matrix $K^{deep}_{\tilde{D}\tilde{D}}$, which improves the scalability of GPs. However, computing this decomposition requires matrix-vector multiplies (MVMs), which still leads to $O(n^3)$ time and $O(n^2)$ space complexity (Rasmussen, 2003). Therefore, we use SKIP-GP, proposed by Gardner et al. (2018), to reduce the computational complexity. SKIP-GP has two key components: (i) structured kernel interpolation (SKI) (Wilson and Nickisch, 2015), and (ii) a product kernel structure. First, given a set of $m$ inducing points $U = \{\tilde{d}_1, \dots, \tilde{d}_m\}$, where $m \ll n$, SKI computes kernel functions between inducing points and then approximates the true kernel functions:

$$k(\tilde{d}_i, \tilde{d}_j) \approx w_{\tilde{d}_i} K_{UU} w_{\tilde{d}_j}^{T}, \quad i, j \in \{1, \dots, m\} \tag{7a}$$

$$K^{deep}_{\tilde{D}\tilde{D}} \approx W_{\tilde{D}} K_{UU} W_{\tilde{D}}^{T} \tag{7b}$$

where $W_{\tilde{D}}$ is the sparse matrix whose rows $w_{\tilde{d}}$ contain the approximation weights; Eq. (7b) applies the approximation to all data. This allows SKI to achieve linear time and storage complexity. However, SKI has exponential time complexity in the dimensionality of the inputs. SKIP-GP uses a product kernel structure to address this curse of dimensionality: given data with $p$ dimensions, the kernel matrix of the product kernel structure can be expressed as $K^{deep}_{\tilde{D}\tilde{D}} = K^{(1)}_{\tilde{D}\tilde{D}} \times \dots \times K^{(p)}_{\tilde{D}\tilde{D}}$, where $\times$ is element-wise multiplication, and each component $K^{(i)}_{\tilde{D}\tilde{D}}$ is approximated using SKI: $K^{(i)}_{\tilde{D}\tilde{D}} = W^{(i)}_{\tilde{D}} K_{UU} (W^{(i)}_{\tilde{D}})^{T}$. This ensures that SKIP-GP achieves linear time complexity even with high-dimensional inputs.

Combining DKL with SKIP-GP by using inducing points to approximate the deep kernel matrix $K^{deep}_{\tilde{D}\tilde{D}}$, the resulting inducing kernel matrix is denoted $K^{deep}_{UU}$. We can then use this inducing kernel matrix to approximate the deep kernel matrix: $K^{deep}_{\tilde{D}\tilde{D}} \approx W_{\tilde{D}} K^{deep}_{UU} W_{\tilde{D}}^{T}$. As a result, DKL-SKIP achieves linear time complexity while maintaining robustness and expressiveness.
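The components above map directly onto off-the-shelf GPyTorch building blocks (SKIP-GP is implemented there via grid interpolation inside a product-structure kernel). A minimal sketch of a DKL-SKIP-style regression model; the output feature width of 4 matches the extractor described later in Sec. 6.3, while the grid size and everything else are illustrative assumptions, not the authors' exact configuration:

```python
import gpytorch

class DKLSKIPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, feature_extractor, num_features=4):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = feature_extractor  # the DNN g(d, w) from Eq. (5)
        self.mean_module = gpytorch.means.ConstantMean()
        # SKI on each feature dimension, combined via a product kernel (SKIP).
        self.covar_module = gpytorch.kernels.ProductStructureKernel(
            gpytorch.kernels.ScaleKernel(
                gpytorch.kernels.GridInterpolationKernel(
                    gpytorch.kernels.RBFKernel(), grid_size=100, num_dims=1)),
            num_dims=num_features)

    def forward(self, x):
        z = self.feature_extractor(x)     # d -> d_tilde, the deep kernel of Eq. (6)
        mean = self.mean_module(z)
        covar = self.covar_module(z)
        return gpytorch.distributions.MultivariateNormal(mean, covar)
```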
6 Experiment Setup
We validate our method using both real-world data and data obtained from AutoVerse, a high-fidelity racing simulator. All experiments were conducted on a Linux-based system equipped with sixteen 3.8 GHz CPU cores, 16 GB of RAM, and a single NVIDIA GeForce RTX 3080 GPU.

6.1 Real-world autonomous racing setup
The real data is obtained from a full-scale, fully autonomous racecar, shown in Figure 2A. These racecars are engineered for the high-speed autonomous racing competition in the Indy Autonomous Challenge (IAC) and have been customized with sensors such as LiDAR, GNSS, cameras, and radar, along with computing hardware and other autonomy-enabling components. The data was collected during solo runs at speeds of 80 mph at the Las Vegas Motor Speedway (Figure 2C). During these runs, state measurements were obtained by an Extended Kalman Filter and logged as ROS2 bag files. These bag files were processed to construct the dataset Dreal, consisting of {vx, vy, ψ, δ, ω, ax, Δδ, uT, uB}: the longitudinal and lateral velocities, vehicle heading, steering angle, yaw rate, longitudinal acceleration, steering velocity, throttle command, and braking pressure. This dataset is recorded at 25 Hz and divided into a training set of 15,789 samples (631 seconds) and a testing set, unseen by the model, of 6,432 samples (257 seconds).

Figure 2: Experiment setup: (A) IAC full-sized, fully autonomous racecar, on (C) the Las Vegas Motor Speedway race track; (B) IAC racecar in the AutoVerse simulator, with (D) the race track at Texas Motor Speedway.

6.2 Simulated autonomous racing setup
For more repeatable analysis, we also conduct experiments using AutoVerse, designed to replicate the racing environment for data collection (Autonoma, 2023). As shown in Figure 2B, AutoVerse can simulate the IAC racecars. We set up a single-racecar scenario on the Texas Motor Speedway racetrack (Figure 2D), where the car uses a pure pursuit algorithm and runs at higher speeds (130 mph) than those recorded during real-world data collection. This setup is intended to validate our method's performance in learning vehicle dynamics at the limits of the vehicle's performance. The dataset Dsim, recorded at 25 Hz, is composed of a training set of 11,737 samples (469 seconds) and a testing set of 7,153 samples (286 seconds).

6.3 DKL-SKIP setup
Here, we describe the setup of the deep neural network feature extractor used to define the deep kernel of the GP model. We used a fully connected DNN with a [9-800-300-50-4] architecture, where the numbers indicate the number of neurons in each layer. The first layer has 9 inputs corresponding to the 9 features in the dataset, {vx, vy, ψ, δ, ω, ax, Δδ, uT, uB}. The final layer has 4 outputs, determined experimentally, meaning the DNN maps to 4 final features, which are then passed to the SKIP-GP. ReLU activation layers and dropout layers with a 20% rate are integrated into the DNN. Moreover, we use scalable deep kernel learning with the RBF base kernel of Eq. (4), a popular choice of base kernel. During training, we run 60 epochs with the Adam optimizer and a learning rate of 0.02. Both the hyperparameters of the deep kernel, θ, and the parameters of the DNN, w, are learned using Type-II maximum likelihood estimation.
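Under the setup above, the Type-II maximum-likelihood training loop could look as follows. This is a sketch assuming the `DKLSKIPModel` from the earlier snippet and training tensors `train_x`, `train_y`; the extractor widths and optimizer settings follow Sec. 6.3, while the module names remain illustrative:

```python
import torch
import gpytorch

# Feature extractor with the [9-800-300-50-4] layout described in Sec. 6.3.
feature_extractor = torch.nn.Sequential(
    torch.nn.Linear(9, 800), torch.nn.ReLU(), torch.nn.Dropout(0.2),
    torch.nn.Linear(800, 300), torch.nn.ReLU(), torch.nn.Dropout(0.2),
    torch.nn.Linear(300, 50), torch.nn.ReLU(), torch.nn.Dropout(0.2),
    torch.nn.Linear(50, 4))

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DKLSKIPModel(train_x, train_y, likelihood, feature_extractor)

model.train(); likelihood.train()
optimizer = torch.optim.Adam(model.parameters(), lr=0.02)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)  # Type-II MLE objective

for epoch in range(60):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)  # negative log marginal likelihood
    loss.backward()
    optimizer.step()
```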
Figure 3: Comparison of prediction performance among various models using mean prediction values: (1) the SKIP-GP model is represented by blue dotted lines; (2) red dashed lines indicate the DKL-SKIP model; and (3) the N4SID model is shown by purple dashed lines. (a) Real-world dataset; (b) simulation dataset.

6.4 Data preparation
To address the mismatch between the real racecar observations and the extended kinematic model data, we use the collected data Dexp, comprising both the real-world and simulator data, Dexp ≡ Dreal ∪ Dsim. Given that the parameters (lf, lr) of the E-kin model fEkin are known, we can construct a dataset DEkin capturing the model outputs when excited with the same inputs and initial conditions: DEkin = {fEkin(si, ui)}, for all i ∈ {1, ..., n}, where si, ui represent the state measurements and control inputs of Dexp. For GP training and testing, the input states consist of {vx, vy, ψ, δ, ω, ax, Δδ, uT, uB}, and the outputs are εvx, εvy, εω, respectively.

7 Results
In this section, we evaluate DKL-SKIP, SKIP-GP, and the system identification technique N4SID, proposed by Van Overschee and De Moor (1994). Our comparison focuses on their capabilities in predicting the mismatch between the extended kinematic single-track model and observations.

7.1 Real-world results
Figure 3a shows the comparison between DKL-SKIP, SKIP-GP, and N4SID for each error term (εvx, εvy, εω); DKL-SKIP is able to accurately predict the mismatch between observed states and E-kin model output. In addition, we compare these models on mean absolute error, root mean square error, normalized root mean square error, and the coefficient of determination (R²) in the real-world half of Table 2. Examining R², DKL-SKIP outperforms the N4SID and SKIP-GP methods by 99% and 62% for the error εvx, and by 24% and 32% for the error εvy, respectively. N4SID slightly outperforms DKL-SKIP on the εω error term, by a margin of 0.16% in terms of the coefficient of determination, but it does poorly at predicting the other two error terms. Overall, the results on real data show DKL-SKIP's superior predictive capability across the three error terms.

7.2 Simulation results
The comparison between the models on simulated data is shown in Figure 3b and in the simulation half of Table 2. Once again, the DKL-SKIP model consistently performs well across all error terms, indicating its effectiveness and its ability to predict the nonlinear error terms in high-speed situations. In terms of R², DKL-SKIP outperforms the N4SID and SKIP-GP methods by 99% and 2% for the error εvx, and by 63% and 34% for the error εvy, respectively. SKIP-GP is able to make accurate predictions on the error terms εvx and εω, but it is less accurate at predicting εvy. Although N4SID marginally performs better on εω, by 1%, it again struggles to accurately predict the other error terms, particularly εvx. DKL-SKIP's ability to provide accurate predictions when the vehicle operates at high speeds further demonstrates its robustness and effectiveness in learning the mismatch between the E-kin model and observed dynamics for autonomous racing.

Real-world setup:
| Error term | Method | MAE | RMSE | NRMSE | R² |
|---|---|---|---|---|---|
| εvx | N4SID | 0.00527 | 0.00680 | 0.15956 | 0.0020 |
| εvx | SKIP-GP | 0.00405 | 0.00538 | 0.12662 | 0.3720 |
| εvx | DKL-SKIP | 0.00103 | 0.00111 | 0.02607 | 0.9733 |
| εvy | N4SID | 0.00609 | 0.00787 | 0.11236 | 0.7347 |
| εvy | SKIP-GP | 0.00730 | 0.00891 | 0.12733 | 0.6590 |
| εvy | DKL-SKIP | 0.00172 | 0.00246 | 0.03516 | 0.9739 |
| εω | N4SID | 4.96e-6 | 6.78e-6 | 0.010001 | 0.9985 |
| εω | SKIP-GP | 3.83e-5 | 4.51e-5 | 0.06757 | 0.8837 |
| εω | DKL-SKIP | 5.77e-6 | 7.26e-6 | 0.01088 | 0.9969 |

Simulation setup:
| Error term | Method | MAE | RMSE | NRMSE | R² |
|---|---|---|---|---|---|
| εvx | N4SID | 0.00754 | 0.00882 | 0.26283 | -2.463 |
| εvx | SKIP-GP | 0.00094 | 0.00094 | 0.02801 | 0.9606 |
| εvx | DKL-SKIP | 0.00057 | 0.00064 | 0.01874 | 0.9823 |
| εvy | N4SID | 0.00985 | 0.01215 | 0.20140 | 0.3230 |
| εvy | SKIP-GP | 0.00816 | 0.00926 | 0.15363 | 0.6141 |
| εvy | DKL-SKIP | 0.00216 | 0.00301 | 0.04998 | 0.9592 |
| εω | N4SID | 2.71e-06 | 9.47e-06 | 0.00448 | 0.9971 |
| εω | SKIP-GP | 3.91e-05 | 5.18e-05 | 0.02456 | 0.9133 |
| εω | DKL-SKIP | 1.95e-05 | 2.9e-05 | 0.01374 | 0.9829 |

Table 2: Evaluation of N4SID, SKIP-GP, and DKL-SKIP (our method) in the real-world setup (top) and simulation setup (bottom). The tables show these models' performance evaluated on four metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Normalized Root Mean Square Error (NRMSE), and Coefficient of Determination (R²).
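For reference, the four metrics reported in Table 2 can be computed as follows. A minimal numpy sketch, where `y_true` are the observed residuals and `y_pred` the model's mean predictions; the range-based NRMSE normalization shown here is one common convention and is an assumption, since the paper does not state which normalizer it uses:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (y_true.max() - y_true.min())   # normalized by target range
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    return mae, rmse, nrmse, r2
```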
8 Limitations
The DKL-SKIP model has shown promise in learning vehicle dynamics in the context of autonomous racing, but a few limitations are noteworthy. First, DKL-SKIP still faces computational challenges, mainly due to the deep learning component, which is computationally expensive, especially for large datasets. In addition, SKIP-GP has O(dkn) space complexity, which limits GPU performance as the number of copies of the dataset, k, increases. These factors limit the model's real-time performance. In this paper, without any code optimization, DKL-SKIP has an inference time ranging from 20 to 40 ms. Since it has not been verified in closed-loop evaluations, this computation time may limit its suitability for real-time prediction; this will be addressed in future work as we implement DKL-SKIP-based model predictive control for autonomous racing. Second, as a supervised learning method, the predictive accuracy of DKL-SKIP depends on the distribution of the training data; the model's performance will be impaired by unseen input data samples, such as abrupt changes in vehicle speed or direction. Finally, the states vx, vy, ω in the E-kin model are simply approximations; we have not explored different variable settings in their differential equations, which could impact DKL-SKIP's predictive ability.

9 Conclusion and Discussion
This paper introduces DKL-SKIP, a scalable GP model combined with deep kernel learning for learning the mismatch between an E-kin model and observed dynamics in autonomous racing. We conduct both real-world and simulation experiments comparing DKL-SKIP against SKIP-GP and N4SID. In the real-world evaluation, in terms of R², DKL-SKIP outperforms competitors by 99% and 62% for εvx, and by 24% and 32% for εvy, respectively. DKL-SKIP also outperforms the other models on simulated data, by 99% and 2% for εvx, and by 63% and 34% for εvy, respectively.
Moreover, DKL-SKIP performs well at predicting εω: in terms of R², it achieves 0.9969 on real-world data and 0.9829 on simulated data. Our future work involves implementing model predictive control and evaluation in closed-loop experiments, as well as extending this methodology to multi-agent autonomous racing.

Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 2046582.

References
J. Betz, H. Zheng, A. Liniger, U. Rosolia, P. Karle, M. Behl, V. Krovi, and R. Mangharam. Autonomous vehicles on the edge: A survey on autonomous vehicle racing. IEEE Open Journal of Intelligent Transportation Systems, 3:458-488, 2022.
G. Hartmann, Z. Shiller, and A. Azaria. Autonomous head-to-head racing in the Indy Autonomous Challenge simulation race. arXiv preprint arXiv:2109.05455, 2021.
V. S. Babu and M. Behl. f1tenth.dev - an open-source ROS based F1/10 autonomous racing simulator. In 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), pages 1614-1620. IEEE, 2020.
J. V. Carrau, A. Liniger, X. Zhang, and J. Lygeros. Efficient implementation of randomized MPC for miniature race cars. In 2016 European Control Conference (ECC), pages 957-962. IEEE, 2016.
M. O'Kelly, V. Sukhil, H. Abbas, J. Harkins, C. Kao, Y. V. Pant, R. Mangharam, D. Agarwal, M. Behl, P. Burgio, et al. F1/10: An open-source autonomous cyber-physical platform. arXiv preprint arXiv:1901.08567, 2019.
A. Wischnewski, M. Geisslinger, J. Betz, T. Betz, F. Fent, A. Heilmeier, L. Hermansdorfer, T. Herrmann, S. Huch, P. Karle, et al. Indy Autonomous Challenge - autonomous race cars at the handling limits. In 12th International Munich Chassis Symposium 2021, pages 163-182. Springer, 2022.
C. Jung, A. Finazzi, H. Seong, D. Lee, S. Lee, B. Kim, G. Gang, S. Han, and D. H. Shim. An autonomous system for head-to-head race: Design, implementation and analysis; Team KAIST at the Indy Autonomous Challenge. arXiv preprint arXiv:2303.09463, 2023.
M. Althoff, M. Koschi, and S. Manzinger. CommonRoad: Composable benchmarks for motion planning on roads. In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 719-726. IEEE, 2017.
H. Pacejka. Tire and Vehicle Dynamics. Elsevier, 2005.
Y. Xing, C. Lv, H. Wang, D. Cao, and E. Velenis. An ensemble deep learning approach for driver lane change intention inference. Transportation Research Part C: Emerging Technologies, 115:102615, 2020.
L. Hermansdorfer, R. Trauth, J. Betz, and M. Lienkamp. End-to-end neural network for vehicle dynamics modeling. In 2020 6th IEEE Congress on Information Science and Technology (CiSt), pages 407-412, 2020. doi:10.1109/CiSt49399.2021.9357196.
B. Van Niekerk, A. Damianou, and B. Rosman. Online constrained model-based reinforcement learning. Conf. Uncertainty in Artificial Intelligence, 2017.
A. Jain, M. O'Kelly, P. Chaudhari, and M. Morari. BayesRace: Learning to race autonomously using prior experience. arXiv preprint arXiv:2005.04755, 2020.
J. Kabzan, L. Hewing, A. Liniger, and M. N. Zeilinger. Learning-based model predictive control for autonomous racing. IEEE Robotics and Automation Letters, 4(4):3363-3370, 2019.
L. Hewing, A. Liniger, and M. N. Zeilinger. Cautious NMPC with Gaussian process dynamics for autonomous miniature race cars. In 2018 European Control Conference (ECC), pages 1341-1348. IEEE, 2018.
J. Ning and M. Behl. Vehicle dynamics modeling for autonomous racing using Gaussian processes. arXiv preprint arXiv:2306.03405, 2023.
A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing.
Deep kernel learning. In Artificial Intelligence and Statistics, pages 370-378. PMLR, 2016.
J. Bradshaw, A. G. d. G. Matthews, and Z. Ghahramani. Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476, 2017.
J. Lee, J. Feng, M. Humt, M. G. Müller, and R. Triebel. Trust your robots! Predictive uncertainty estimation of neural networks with sparse Gaussian processes. In Conference on Robot Learning, pages 1168-1179. PMLR, 2022.
R. Rajamani. Vehicle Dynamics and Control. Springer Science & Business Media, 2011.
J. Hwan Jeon, S. Karaman, and E. Frazzoli. Anytime computation of time-optimal off-road vehicle maneuvers using the RRT*. In 2011 50th IEEE Conference on Decision and Control and European Control Conference, pages 3276-3282. IEEE, 2011.
C. E. Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63-71. Springer, 2003.
J. Gardner, G. Pleiss, R. Wu, K. Weinberger, and A. Wilson. Product kernel interpolation for scalable Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 1407-1416. PMLR, 2018.
A. Wilson and H. Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In International Conference on Machine Learning, pages 1775-1784. PMLR, 2015.
Autonoma (2023). AutoVerse simulator of Autonoma Lab, 2023. URL https://autonomalabs.com/. Accessed: May 5, 2023.
P. Van Overschee and B. De Moor. N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, 30(1):75-93, 1994.

In this supplementary material, we present a generalized overview of the N4SID system identification technique. In addition, we include the details of the ablation studies undertaken in the development of DKL-SKIP.

A N4SID System Identification Algorithm
In this paper, we implement an alternative technique known as N4SID, a data-driven approach designed for system identification (Van Overschee and De Moor, 1994). This method utilizes a subspace-based strategy, which separates the data into deterministic and stochastic elements by projecting them onto distinct orthogonal subspaces. The algorithm calculates the system's state sequence, state-transition matrix, input matrix, and output matrix from these subspaces, offering a concise and effective depiction of the underlying dynamical system. Essentially, N4SID is capable of directly approximating system behavior using input-output data in a state-space format.
A description of the algorithm is shown below.

Algorithm 1 N4SID Algorithm
1: procedure N4SID
2:   Normalize input-output data.
3:   Estimate a covariance matrix by applying QR decomposition.
4:   Compute a singular value decomposition (SVD) of the covariance matrix.
5:   Divide the SVD output into observable and unobservable subspaces.
6:   From the observable subspace, compute the system matrices A, B, C, and D.
7:   Perform the balancing transformation and reduction of the state-space model.
8:   Return the state-space model.
9: end procedure

In this paper, considering the collected input-output dataset {u(k), y(k)}, k = 1, ..., N, with u(k) ∈ R^m representing the input data sample {vxk, vyk, ψk, δk, ωk, axk, Δδk, uTk, uBk}, and y(k) ∈ R^p denoting the output vector {εvxk, εvyk, εωk} at time k, N4SID can effectively approximate the system's dynamics using the state-space representation shown in Eq. (8):

x(k+1) = A x(k) + B u(k)
y(k) = C x(k) + D u(k)    (8)
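Once N4SID has produced the A, B, C, D matrices, the identified model of Eq. (8) can be rolled out on the logged inputs to obtain predicted residuals. A minimal numpy sketch; the matrix values are assumed to come from the identification step, and the function name is illustrative:

```python
import numpy as np

def simulate_state_space(A, B, C, D, u_seq, x0=None):
    """Roll out x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k) from Eq. (8)."""
    x = np.zeros(A.shape[0]) if x0 is None else x0
    outputs = []
    for u in u_seq:                       # u_seq: (N, m) array of inputs
        outputs.append(C @ x + D @ u)     # predicted residual y(k)
        x = A @ x + B @ u                 # state update
    return np.asarray(outputs)
```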
B DKL-SKIP setup

B.1 Architecture of neural network
The architecture of our DNN was inspired by the work on deep kernel learning by Wilson et al. (2016). While we adopted their layered approach, we made modifications to suit our tasks. One key adaptation is the configuration of the first layer, which comprises nine neurons, mirroring the nine input features in our dataset. The output layer dimensionality was decided through experimentation: we identified an optimal configuration that balances prediction accuracy against computational cost, reducing the output dimensionality to four neurons. This design choice helps the SKIP-GP model produce more accurate predictions by reducing the complexity it needs to handle, and it is also the reason our DKL-SKIP outperforms SKIP-GP in prediction accuracy.

The number of hidden layers (network capacity) was also decided experimentally. We compare the R² score across different architectures; the results are shown in Table 3. As the capacity of the DNN increases, the model's performance typically improves, which we attribute to the increased ability of the DKL to capture complex patterns and relationships in the dynamics data. However, it is essential to strike a balance: if the DNN's capacity becomes overly large, it may start to overfit the training data, which eventually degrades the overall performance of our model.

| Architecture | R² (vx) | R² (vy) | R² (ω) |
|---|---|---|---|
| 9-50-4 | 0.96095 | 0.95242 | 0.89471 |
| 9-300-50-4 | 0.97435 | 0.96716 | 0.99369 |
| 9-800-300-50-4 | 0.97489 | 0.96347 | 0.99522 |
| 9-1200-800-300-50-4 | 0.95730 | 0.93647 | 0.91790 |

Table 3: R² scores for different architectures. As DNN capacity increases, performance generally improves, but it can decline if capacity becomes excessive.

In addition, we carried out an ablation study to determine the optimal learning rate for tuning the DNN and the GP kernel function; the results are shown in Figure 4. We executed a range of experiments, testing learning rates between 0.001 and 0.035, using the R² score as the benchmark for model performance. Our findings revealed that a learning rate of 0.02 yielded the highest performance, achieving an R² score of 0.9806. Moreover, this rate ensured consistent stability during training.

Figure 4: Performance comparison across different learning rates (0.001 to 0.035), with the optimal R² score of 0.9806 achieved at a rate of 0.02.

B.2 Input data features
In this paper, the DKL-SKIP model was not exposed to the testing data during training. The training and testing data were collected during distinct runs on the racetrack; the variations in throttle/braking and steering between these datasets result in differing vehicle dynamics. To offer a clearer picture, each input dataset is depicted in Figure 5.

The input features of our datasets consist of {vx, vy, ψ, δ, ω, ax, Δδ, uT, uB}, representing the longitudinal and lateral velocities, vehicle heading, steering angle, yaw rate, longitudinal acceleration, steering velocity, throttle command, and braking pressure, respectively.

These features serve as the initial input for the DKL feature extractor. Within the DKL framework, a series of steps occurs: first, the dimensionality of the input data is significantly reduced to simplify the data structure; this condensed data is then transformed, mapping it to a more abstract feature space. From this transformed data, the DKL identifies the four most significant and representative features, which are then forwarded as input to the GP model responsible for making the final target predictions.

Figure 5: Distinct racetrack runs for training and testing data collection, highlighting differences in vehicle dynamics due to variations in throttle/braking and steering.
 |
oyWkrG-LD5 | Geometry Matching for Multi-Embodiment Grasping
Maria Attarian 1,2, Muhammad Adil Asif 2, Jingzhou Liu 2, Ruthrash Hari 2, Animesh Garg 2,3, Igor Gilitschenski 2, Jonathan Tompson 1
1 Google DeepMind, 2 University of Toronto, 3 Georgia Institute of Technology

Figure 1: GeoMatch: Our method enables multi-embodiment grasping by conditioning the grasp selection on end-effector and object geometry.

Abstract: Many existing learning-based grasping approaches concentrate on a single embodiment, provide limited generalization to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. We tackle the problem of grasping using multiple embodiments by learning rich geometric representations for both objects and end-effectors using Graph Neural Networks. Our novel method, GeoMatch, applies supervised learning on grasping data from multiple embodiments, learning end-to-end contact point likelihood maps as well as conditional autoregressive predictions of grasps keypoint-by-keypoint. We compare our method against baselines that support multiple embodiments. Our approach performs better across three end-effectors, while also producing diverse grasps. Examples, including real robot demos, can be found at geo-match.github.io.

Keywords: Multi-Embodiment, Dexterous Grasping, Graph Neural Networks

Correspondence emails: jmattarian@google.com, adil.asif@mail.utoronto.ca
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction
Dexterous grasping remains an open and important problem for robotic manipulation. Many tasks involving robots boil down to some form of interaction with objects in their environment. This requires grasping objects with all kinds of different geometries. In addition, the large variety of robot and end-effector types necessitates that grasping should also be achievable with new and arbitrary end-effector geometries. However, the cross-embodiment gap between grippers does not permit simply applying grasping policies from one end-effector to another, and domain adaptation, i.e., "translating" actions from one embodiment to another, is also not straightforward. In comparison, humans are extremely versatile: they can adapt the way they grasp objects based on what they know about object geometry even if the object is new to them, and they can do this in multiple ways.

There has been much research on grasping thus far, with many works focusing on one embodiment at a time [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] and fewer looking at the multi-embodiment problem [12, 13, 14]. Methods are divided between hand-agnostic and hand-aware, and experiment with different representations for grasping, such as contact maps [12], contact points [13], or even root pose and z-offset [14]. Existing multi-embodiment approaches either require explicit representation of joint limits, which becomes exponentially harder for higher-DoF end-effectors, expect heavy manual work to adapt to new end-effectors, or show mixed success rates across different embodiments, with some gripper/hand morphologies performing significantly better than others.

Drawing inspiration from how humans adapt their grasps easily and successfully based on priors they have learned about the 3D geometry of both objects in space and their own hands, we propose endowing robotic agents with a similar sense of geometry via a generalized geometry embedding that can represent both objects and end-effectors. This embedding can be used to predict grasps that demonstrate stability, diversity, generalizability, and robustness.
More specifically, we propose relying on Graph Neural Networks (GNNs) to encode meaningful geometry representations and using them to predict keypoint contacts on the object surface in an autoregressive manner.

In summary, our contributions are as follows:
a) We propose formulating robot grasp learning as a geometry matching problem through learning correspondences between geometry features, where both end-effector and object geometries are encoded in rich embedding spaces.
b) To solve the above problem, we introduce GeoMatch, a novel method for learning expressive geometric embeddings and keypoint contacts.
c) We demonstrate that our method is competitive against baselines without any extra requirements to support higher-DoF end-effectors, while also performing well across multiple embodiments. Finally, we provide demos of our method on a real Franka Emika arm with an Allegro hand.

2 Related Work
Dexterous Grasping. Many dexterous grasping works do not consider multi-embodiment and instead focus on diversity of objects using a single end-effector. Many grasping methods support 2-finger parallel grippers [2, 3, 4, 5, 6, 7, 8, 15], with several others looking into high-DoF dexterous manipulation [9, 10, 1, 16, 17, 18]. Some work has also been conducted on multi-embodiment grasping. Several of these address the problem from the differentiable-simulation grasp synthesis point of view [19, 11, 20]. GenDexGrasp [12] advocates for hand-agnostic contact maps generated by a trained cVAE [21] with a newly introduced align distance, and optimizes the matching of end-effectors against produced contact maps via a specialized Adam optimization method. This is, to our knowledge, the most recent work attempting to tackle multi-embodiment grasping without significant preprocessing to support higher-DoF grippers. In contrast, we choose to operate on hand-specific contact maps, as we are interested in learning grasps conditioned on both object and embodiment geometry, and we empirically found our method to perform more evenly across multiple embodiments.

Intuitively, our work is closest to UniGrasp [13]. UniGrasp operates on object and end-effector point clouds to extract features and ultimately output contact points, which are then fed into an Inverse Kinematics (IK) solver, similarly to us. Their encoder of choice is PointNet++, and the contact prediction is done through a Point Set Selection Network (PSSN) [22]. Their proposed architecture adds one stage per finger, which means supporting more than 3-finger grippers requires manually adding another stage. As a result, adapting the method to grippers with more than 2 or 3 fingers requires significant work. Moreover, the method requires explicit representation of boundary configurations through combinations of minimum and maximum values for each joint angle, which explodes exponentially for higher-DoF end-effectors. In contrast, we rely on learned geometry features to identify viable configurations, as opposed to explicitly encoding them through joint limit representations. Our method only requires a small number of user-selected keypoints per end-effector, the same number for all end-effectors. This disentangles the dependency between the number of fingers and the applicability of our method. Similarly to UniGrasp, EfficientGrasp [23] also uses PointNet++ and a PSSN model for contact point prediction. TAX-Pose [24] is another recent work that shares some high-level concepts.
Instead of encoding the end-effector, the authors consider tasks involving objects that interact with each other in a particular way, and what the relative pose of the interacting objects should be for the task to be considered successful. They encode objects or object parts using DGCNN and learn a cross-attention model that predicts relative poses of objects that accomplish a task. AdaGrasp [14] uses 3D Convolutional Neural Networks to learn a scoring function for possible generated grasps, and finally executes the best one.

Finally, many of the mentioned methods rely on deterministic solvers, which can reduce the diversity of generated grasps. While we also rely on a deterministic solver, we address this issue by leveraging the scoring obtained from the learned full unnormalized distribution of contacts. The score is used to select a first keypoint that guides the remaining contact point prediction. This permits higher diversity without having to sample a large number of grasps.

Graph Neural Networks. Graph Neural Networks were first introduced by Scarselli et al. [25] as a framework to operate on structured graph data. Since then, many advancements have been made towards extending their capabilities and expressivity [26]. In the grasping literature specifically, there have been multiple applications of GNNs. For example, Huang et al. [27] propose learning a GNN to predict 3D stress and deformation fields, using the finite element method for grasp simulations. The use of GNNs for end-effector parameterization has been proposed before in [28], where tactile sensor data is fed into a GNN to represent the end-effector as part of grasp stability prediction. Lou et al. [29] leverage GNNs to represent the spatial relation between objects in a scene and suggest optimal 6-DoF grasping poses. Unlike previous methods, we propose applying a GNN as a general geometry representation for any rigid body, covering both objects and end-effectors jointly. For the purposes of this work, we leverage the GNN implementation of [30] due to its readily available and easily adaptable code base.

Geometry-Aware Grasping. Several works have emphasized the role of geometry in the grasping problem. Some study geometry-aware grasping under the light of shape completion [31, 32]. Yan et al. [33] encode RGBD input via generative 3D shape modeling and 3D reconstruction; based on this learned geometry-aware representation, grasping outcomes are predicted, with solutions coming from an analysis-by-synthesis optimization. In the same vein, Van et al. [34] propose leveraging learned 3D reconstruction as a means of understanding geometry, relying on it for grasp success classification as an auxiliary objective for grasp optimization and boundary condition checking. Bohg et al. [35] introduced a classifier trained on labeled images to predict grasps via shape context. Finally, Jiang et al. [6] propose learning grasp affordances and 3D reconstruction as an auxiliary task, using implicit functions. Unlike these works, we suggest looking at geometry itself directly in 3D as a feature representation, without imposing any 3D reconstruction constraints.

3 Method
In this work, we aim to learn robust and performant grasp prediction from the geometries of both the object GO and the end-effector GG. Our approach models the contact between pre-defined keypoints on the end-effector and contact points on the object surface.
Canonical Contact Points. The canonical contact points k_i ∈ V_G (i ∈ {1, ..., N}) on the surface of each end-effector are selected visually. This only needs to be done once for each end-effector in a dataset, ensuring that the selected keypoints have good coverage of each gripper with respect to its morphology and its grasping behavior. Please note that the order in which keypoints are selected is a hyper-parameter and might impact model performance. In our case, the keypoint order was chosen to be as semantically consistent as possible across embodiments (e.g., fingertips are ordered left to right, etc.).

Dataset. For the purposes of this work, we use a subset of the MultiDex dataset introduced by [12] and used by them to train the CMap-VAE model of their approach. The dataset comprises 5 end-effectors (one 2-finger, two 3-finger, one 4-finger, and one 5-finger) and 58 common objects from YCB [36] and ContactDB [37]. In total, the dataset contains 50,802 diverse grasps over the set of hands and objects, each represented by an object name, an end-effector name, and the end-effector pre-grasp pose in the form

$q_{rest} = (t, R, \theta_0, \ldots, \theta_{N-1})_{rest},$ (1)

where $t \in \mathbb{R}^3$ is the root translation, $R \in \mathbb{R}^6$ is the root rotation in the continuous 6D representation introduced in [38], and $\theta_0, \ldots, \theta_{N-1}$ are the joint angles.
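Since R is stored in the continuous 6D representation of [38], applying a stored pose requires decoding it back to a rotation matrix. A minimal sketch of that standard Gram-Schmidt decoding (our illustration) is:

```python
import numpy as np

def rotation_6d_to_matrix(r6: np.ndarray) -> np.ndarray:
    """Decode the continuous 6D rotation representation of [38] into a
    3x3 rotation matrix: the 6 numbers are read as two 3-vectors that
    are orthonormalized via Gram-Schmidt, and the third column is their
    cross product."""
    a1, a2 = r6[:3], r6[3:]
    b1 = a1 / np.linalg.norm(a1)           # first column
    a2 = a2 - np.dot(b1, a2) * b1          # remove the b1 component
    b2 = a2 / np.linalg.norm(a2)           # second column
    b3 = np.cross(b1, b2)                  # third column completes the frame
    return np.stack([b1, b2, b3], axis=1)

# Example: the identity rotation in its 6D form.
R = rotation_6d_to_matrix(np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))
```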
3.2 Architecture

Ideally, we would like to compute the full joint distribution of vertex-to-vertex contact, i.e., P(v_o, v_g) for all object and end-effector vertices v_o ∈ V_O, v_g ∈ V_G. This is typically intractable. We reduce the complexity by only focusing on N landmark keypoints k_0, ..., k_N on the end-effector. We thus try to approximate P(v_o, c_0, ..., c_N), where c_i, i ∈ [0, N) are the vertices on the object where keypoint k_i makes contact, through learning a set of factorizations obtained by applying the Bayes rule, resulting in

$P(v_o, c_0, \ldots, c_N) = \prod_{i=1}^{N} P_{M_i}(v_o, c_i \mid c_0, \ldots, c_{i-1}) = \prod_{i=1}^{N} P_{M_i}(v_o, c_i \mid c_{<i}),$ (2)

where $v_o, c_i \in V_O$, $(k_0, \ldots, k_N) \subseteq V_G$, and $P_{M_i}(v_o, c_i \mid c_0, \ldots, c_{i-1})$ are the factorized marginals to be learned in an autoregressive manner.

We seek an architecture that can embed local geometry information well. Among the architecture choices that demonstrate such properties, we chose Graph Neural Networks (GNNs) [30] to create object and end-effector embeddings. We subsequently gather the embeddings corresponding to the keypoints on the end-effector, resulting in

$\hat{V}_O = E_O(G_O), \quad \hat{V}_G = E_G(G_G), \quad \hat{V}_{G,N} = \mathrm{Gather}(\hat{V}_G, (k_i)_{i=0}^{N}).$

The gathered embeddings are passed into our autoregressive matching module jointly with the full object and end-effector embeddings, i.e.,

$c_0, \ldots, c_N = \mathrm{AutoregressiveMatching}(\hat{V}_O, \hat{V}_{G,N}).$

Figure 3: GeoMatch architecture. (a) Full overview of GeoMatch. (b) Autoregressive modules. The object and gripper graphs are passed through the two encoders followed by linear layers. The gripper keypoint embeddings are gathered and passed as input, along with the object embeddings, to the autoregressive modules.

Geometry Processing. For our experiments, we used the Graph Convolutional Network (GCN) implementation by Kipf et al. [30] with 3 hidden layers of size 256 and a 512-dimensional output embedding, with one encoder for objects and one for end-effectors. The subsequent linear projections for each encoder, L_O and L_G respectively, were of size 64 without bias.

Autoregressive Matching. The downprojected geometric embeddings are passed into an autoregressive matching module consisting of 5 layers M_i. Each layer is responsible for predicting the index of the object vertex c_n where keypoint k_n makes contact, given the outputs for the previous keypoints k_0, ..., k_{n-1}. This is done by concatenating the full geometric embeddings with vertex-keypoint distances. These are used as input to a vertex classifier that selects the resulting contact points c_n on the object. Details are given in Appendix A.1.
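For concreteness, a minimal sketch of one such encoder is shown below, assuming the PyTorch Geometric implementation of the GCN layer from [30]; the ReLU between layers is our assumption, as the paper only fixes the layer sizes:

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv

class GeometryEncoder(nn.Module):
    """GCN with 3 hidden layers of size 256, a 512-d output embedding,
    and the bias-free linear projection to 64 dimensions (Sec. 3.2)."""

    def __init__(self, in_dim: int = 3):
        super().__init__()
        self.convs = nn.ModuleList([
            GCNConv(in_dim, 256),   # hidden layer 1
            GCNConv(256, 256),      # hidden layer 2
            GCNConv(256, 256),      # hidden layer 3
            GCNConv(256, 512),      # 512-d output embedding
        ])
        self.project = nn.Linear(512, 64, bias=False)

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        return self.project(x)      # per-vertex 64-d embeddings

# One encoder per geometry; keypoint embeddings are gathered by row index.
obj_xyz = torch.rand(2048, 3)
obj_edges = torch.randint(0, 2048, (2, 2048 * 8))
v_obj = GeometryEncoder()(obj_xyz, obj_edges)   # V_O_hat, shape (2048, 64)
```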
3.3 Losses

Our full training objective contains two components: one focusing on the geometric embeddings and another on the predicted contact points, resulting in the overall loss

$\mathcal{L}_{total} = \alpha \mathcal{L}_{P_{F_{0,\ldots,N}}} + \beta \mathcal{L}_{P_{M_{0,\ldots,N}}},$ (3)

with each component having equal weight in our experiments ($\alpha = \beta = 0.5$).

Geometric Embedding Loss $\mathcal{L}_{P_{F_{0,\ldots,N}}}$. The unnormalized likelihood map of object-keypoint contacts intuitively represents a score that a given object vertex is in contact with a given gripper keypoint and is given by the dot product

$P_{F_i}(v_o, c_i) = E_O(v_o) \cdot E_G(v_g)[k_i].$ (4)

This is optimized against the hand-specific object contact map $C_O(v_o, k_i)$, which is explained further below, via a binary cross-entropy loss

$\mathcal{L}_{P_{F_{0,\ldots,N}}} = \sum_{i=1}^{N} \mathrm{BCE}_a(P_{F_i}(v_o, c_i), C_O(v_o, k_i)),$ (5)

where $a$ is the positive weight used to address the class imbalance (we use a = 500).

Predicted Contact Loss $\mathcal{L}_{P_{M_{0,\ldots,N}}}$. This loss models the joint distribution of contacts via a set of factorizations $P_{M_i}(v_o, c_i \mid c_0, \ldots, c_{i-1})\ \forall i \in [0, N)$, each of which is represented by the outputs of the autoregressive layers in our network. The outputs are optimized against the ground-truth binary contact map label of the i-th gripper keypoint, contributing a second binary cross-entropy loss term

$\mathcal{L}_{P_{M_{0,\ldots,N}}} = \sum_{i=1}^{N} \mathrm{BCE}_b(P_{M_i}(v_o, c_i \mid c_0, \ldots, c_{i-1}), C_O(v_o, k_i)),$ (6)

where, similarly, $b$ is the positive-weight hyperparameter used to address the class imbalance (we use b = 200). A visual representation of the autoregressive layers can be seen in Fig. 3b. Note that for i = 0, $P_{F_0}(v_o, c_0)$ constitutes the first marginal for $k_0$, and thus $P_{F_0}(v_o, c_0) = P_{M_0}(v_o, c_0)$.

3.4 Likelihood Maps

In order to learn the above, we assumed access to ground-truth likelihood maps used for supervised learning, which we obtain as follows. For each grasp in our dataset, instead of an object contact map, we generate a (2048, N) per-gripper-keypoint proximity map of the M nearest neighbors (NN) to each of the contact vertices for the canonical keypoints:

$\mathrm{Prox}_o(v_o, k_i) = \begin{cases} 1, & v_o \in \mathrm{NN}(V_G(k_i), M) \\ 0, & \text{otherwise}, \end{cases}$ (7)

where $\mathrm{NN}(V_G(k_i), M) = \{ y \in V_O : |\{ z \in V_O : \| z - V_G(k_i) \| < \| y - V_G(k_i) \| \}| < M \}$. We also generate a gripper contact map for the selected keypoints, where contacts are defined as the keypoints closer than a given threshold to the object point cloud:

$C_g(k_i) = \begin{cases} 1, & \exists\, v_o : \| v_o - V_G(k_i) \|_2 < \text{threshold} \\ 0, & \text{otherwise}, \end{cases}$ (8)

where $\mathrm{Prox}_o(v_o, k_i)$ is the object proximity map and $C_g(k_i)$ is the gripper contact map. For this work, we empirically set M = 20 and a threshold of 0.04. Finally, the hand-specific object contact map is obtained as $C_O(v_o, k_i) = \mathrm{Prox}_o(v_o, k_i) \cdot C_g(k_i)$.
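Both maps and their product can be computed directly from one posed grasp. The following NumPy sketch (ours) uses the stated M = 20 and threshold 0.04:

```python
import numpy as np

def likelihood_maps(obj_pts, keypoint_pts, m=20, threshold=0.04):
    """Compute the supervision maps of Sec. 3.4 for one grasp.

    obj_pts:      (S_O, 3) object vertices.
    keypoint_pts: (N, 3) posed end-effector keypoint positions V_G(k_i).
    Returns the (S_O, N) hand-specific object contact map C_O.
    """
    # Distances from every object vertex to every keypoint, (S_O, N).
    dists = np.linalg.norm(obj_pts[:, None, :] - keypoint_pts[None, :, :], axis=-1)

    # Eq. (7): proximity map, 1 for the M object vertices nearest to each keypoint.
    order = np.argsort(dists, axis=0)
    prox = np.zeros_like(dists)
    prox[order[:m], np.arange(keypoint_pts.shape[0])] = 1.0

    # Eq. (8): gripper contact map, 1 if a keypoint lies within `threshold`
    # of the object point cloud.
    contact = (dists.min(axis=0) < threshold).astype(float)   # (N,)

    # C_O(v_o, k_i) = Prox_o(v_o, k_i) * C_g(k_i).
    return prox * contact[None, :]
```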
3.5 Grasp Execution at Inference

At test time, the independent unnormalized distribution for k = 0, $P_{F_0}(v_o, c_0)$, is leveraged to sample keypoint 0, which commences the autoregressive inference. Details of this process are provided in Appendix A.2. Moreover, it should be noted that this autoregressive representation does present some limitations. More specifically, the ordering in which the keypoint contacts are learned and ultimately selected could change the result. However, we refrain from experimenting with all possible combinations of keypoint ordering in the context of this work.

The end-effector joint angles are then inferred by feeding the predicted contact points into an Inverse Kinematics (IK) solver. For our purposes, we used SciPy's Trust Region Reflective (TRF) algorithm [39]. The initial pose given to IK is a heuristic pose calculated by applying a rotation/translation that aligns the palm with the closest object vertex while keeping all non-root joints at their rest pose configuration.
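For concreteness, such a fit can be set up with SciPy's least_squares interface, which implements the TRF method. In the sketch below, forward_keypoints is a hypothetical forward-kinematics routine mapping pose parameters to keypoint positions; the residual definition is our illustration, not the authors' exact objective:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_grasp_pose(predicted_contacts, forward_keypoints, q0, lower, upper):
    """Fit pose parameters q (root pose + joint angles) so that the
    canonical keypoints land on the predicted contact points, using
    SciPy's Trust Region Reflective solver within the joint limits.

    forward_keypoints(q) -> (N, 3) is a hypothetical forward-kinematics
    function for the end-effector.
    """
    def residuals(q):
        # Flattened offsets between posed keypoints and predicted contacts.
        return (forward_keypoints(q) - predicted_contacts).ravel()

    result = least_squares(residuals, q0, method="trf", bounds=(lower, upper))
    return result.x
```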
4 Experiments

We evaluate our method through the lens of a number of research questions.

Q1: How successful is the model at producing stable and diverse grasps for various embodiments? We train our method on a training set containing samples of 5 end-effectors and 38 objects. We then generate grasps for each of the end-effectors on 10 new, unseen objects. We evaluate these using the evaluation protocol introduced by [12], which considers 3 out of the 5 end-effectors and tests for grasp stability in Isaac Gym [40]. Similarly to [12], we apply a consistent 0.5 m/s^2 acceleration on the object from all xyz directions sequentially, for 1 second each. If the object moves more than 2 cm after any such application, the grasp is declared a failure. We also follow the same contact-aware refinement procedure, which applies force closure via a single step of Adam with step size 0.05. In addition, we report diversity, calculated as the standard deviation of the joint angles of all successful grasps, comparable to [12]. There is a limited number of methods that tackle grasping across multiple embodiments. For our comparisons among recent methods, we chose GenDexGrasp [12], which assumes hand-agnostic contact maps; AdaGrasp (initOnly, as it is the closest setup to our task) [14], which assumes table-top grasping only and parameterizes grasps as a pick location and z-axis rotation; and finally DFC [19], which is a differentiable force closure synthesis method. In summary, we selected a set of methods that look at the cross-embodiment grasping problem through a variety of lenses.

Results can be found in Tab. 1. In addition, we provide a number of qualitative results in Fig. 4. More qualitative results in the form of rendered grasps can be seen in Fig. 1.

Figure 4: Qualitative results. Generated grasps using GeoMatch on unseen objects with ezgripper, barrett, robotiq-3finger, allegro, and shadowhand. For each grasp, another perspective is included, where the keypoints predicted by GeoMatch on each object are marked in purple and the matching user-selected gripper keypoints are marked in yellow.

Table 1: Success and diversity comparisons. GeoMatch performs more consistently across end-effectors with varied DoF counts while maintaining diversity of grasp configurations. Success rates are provided per end-effector; the mean is calculated across the 3 end-effectors presented, for ease of review.

Method | Success (%) ↑ ezgripper / barrett / shadowhand / mean | Diversity (rad) ↑ ezgripper / barrett / shadowhand
DFC [19] | 58.81 / 85.48 / 72.86 / 72.38 | 0.3095 / 0.3770 / 0.3472
AdaGrasp [14] | 60.0 / 80.0 / - / 70.0 | 0.0003 / 0.0002 / -
GenDexGrasp [12] | 43.44 / 71.72 / 77.03 / 64.01 | 0.238 / 0.248 / 0.211
GeoMatch (Ours) | 75.0 / 90.0 / 72.5 / 79.17 | 0.188 / 0.249 / 0.205

In our experiments, we observed that GeoMatch performs slightly worse (by 2%) on the 5-finger gripper Shadowhand than the best performing baseline; however, performance for the 2-finger and 3-finger grippers increases by 5% to 30% compared to other methods. Diversity remains competitive with other methods. Overall, the minimum performance observed for GeoMatch is significantly higher than that of the baselines, and the average performance across embodiments beats all baselines we compared against.

Q2: Is the multi-embodiment model performing better than a model trained on individual embodiments? We hypothesize that training our method on data containing a variety of end-effectors will result in learning better geometry representations. To investigate this, we train our method on each single embodiment separately by filtering our dataset for each given end-effector. We then compare against the multi-embodiment model. Each of the single end-effector models is trained only on grasp instances of that gripper, while the multi-embodiment model is trained on all 5 end-effectors and objects in the training set. The validation set in all cases contains 10 unseen objects. We provide results in Tab. 2. The model trained on multi-embodiment data performs 20% to 35% better than single end-effector models. This advocates for the value of multi-embodiment grasping policies as opposed to single-embodiment policies.

Table 2: Comparison between the multi-embodiment model and models trained on individual grippers.

Method | Success (%) ↑ ezgripper / barrett / shadowhand | Diversity (rad) ↑ ezgripper / barrett / shadowhand
Single embodiment | 40.0 / 70.0 / 40.0 | 0.157 / 0.175 / 0.154
Multi embodiment | 75.0 / 90.0 / 72.5 | 0.188 / 0.249 / 0.205

Q3: How robust is the learned model under relaxed assumptions? While our method demonstrates compelling results, it has been trained on full point clouds. Acknowledging that this is often a strict assumption, especially when considering real-world environments, we evaluate the robustness of the approach under conditions more similar to real-world robotic data. We experiment with grasp generation using: a) noisy point clouds, b) partial point clouds, and c) partial point clouds including noise. For each of these, we perturbed the object point clouds accordingly and collected grasps using our method zero-shot. The success rate across end-effectors was 77.5%, 66.7%, and 67.5% for each type of augmentation, respectively. As demonstrated, our method shows reasonable robustness. Experiment details and a breakdown of the numbers can be found in Appendix B.

Q4: How important are various components of the design? Finally, we investigate the design decisions of our approach and how they affect performance. More specifically, we perform two ablations:

PointNet++ as the encoder of choice instead of GNN. We evaluate our choice of GNN by swapping out the two GNN encoders with PointNet++ [22], a popular encoder architecture for point clouds. Our results show that GNN was indeed a good choice, as it performs better than the PointNet++ ablation by 10% on average across end-effectors. In addition, we empirically observed a 12x slowdown when using PointNet++ due to the difference in the number of model parameters, which also makes the GNN encoder more lightweight and fast. A breakdown per end-effector can be found in Appendix B.

Non-shared weights between keypoint encoders. We hypothesize that a shared encoder among all end-effectors is beneficial for learning features that represent local geometry, which subsequently informs the autoregressive prediction of keypoints. To validate this hypothesis, we conducted an ablation where we separated the end-effector encoder into 6 separate identical encoders, one per keypoint. Our main model with shared weights across all end-effectors and keypoints outperforms the split encoders by 9%. Further analysis per end-effector can be found in Appendix B.

5 Limitations

Our method showcased that grasp learning can benefit from multi-embodiment data in terms of generalization to new objects as well as robustness.
Obtaining large amounts of such multi-embodiment grasping data, especially in real-world setups, can be challenging, time consuming, and expensive. However, given that a single-embodiment grasping policy was shown to require more data to perform comparably, we argue that spending resources on a multi-embodiment dataset that yields a policy performing well across a variety of grippers is a better choice. On another note, as our focus was on the notion of cross-embodiment and whether a unified grasping policy is achievable, the point clouds used for this work were complete or only slightly noisy/partial models. This does not reflect the distribution of noise encountered in real-world point clouds derived from depth camera data. Grasps being directed by a small set of keypoints could be viewed as another limitation. This design choice may indeed limit the areas of a gripper that make contact with an object, or may prevent the prediction of power grasps. This could perhaps be mitigated by choosing a larger set of keypoints for better coverage of the end-effector surface. Lastly, our method relies on the robustness of the IK solution. We empirically observed cases where there was a reasonable grasp solution for a set of predicted keypoints, yet the chosen IK solution reached the maximum iteration count and terminated in some suboptimal configuration.

6 Conclusion

This work presented a novel multi-embodiment grasping method that leverages GNNs to learn powerful geometry features for object and embodiment representation. Our approach demonstrates that a joint encoder trained on multiple embodiments can better embed geometry in a generalizable fashion and ultimately result in a higher grasping success rate on unseen objects. The proposed framework also showcased robustness to more realistic point cloud inputs. Diversity of generated grasps remains competitive, while producing such diverse grasps is as simple as conditioning on a different high-likelihood starting contact point for the first keypoint. Code and models will be released.

Acknowledgments

The authors would like to thank Claas Voelcker for valuable feedback and discussion, as well as Silvia Sellán and Alec Jacobson for providing invaluable resources on Blender visualizations. Finally, we would like to thank the reviewers for constructive feedback that resulted in improving our work.

References

[1] A. Nguyen, D. Kanoulas, D. G. Caldwell, and N. G. Tsagarakis. Detecting Object Affordances with Convolutional Neural Networks. In International Conference on Intelligent Robots and Systems (IROS), 2016.
[2] H.-S. Fang, C. Wang, M. Gou, and C. Lu. GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[3] A. Mousavian, C. Eppner, and D. Fox. 6-DOF GraspNet: Variational Grasp Generation for Object Manipulation. In International Conference on Computer Vision (ICCV), 2019.
[4] A. ten Pas, M. Gualtieri, K. Saenko, and R. Platt. Grasp Pose Detection in Point Clouds. In The International Journal of Robotics Research, 2017.
[5] M. Gualtieri, A. ten Pas, K. Saenko, and R. Platt. High Precision Grasp Pose Detection in Dense Clutter. In International Conference on Intelligent Robots and Systems (IROS), 2016.
[6] Z. Jiang, Y. Zhu, M. Svetlik, K. Fang, and Y. Zhu. Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations. In Robotics: Science and Systems (RSS), 2021.
[7] S. Wang, Z. Zhou, and Z. Kan. When Transformer Meets Robotic Grasping: Exploits Context for Efficient Grasp Detection. In Robotics and Automation Letters, 2022.
[8] C. Wu, J. Chen, Q. Cao, J. Zhang, Y. Tai, L. Sun, and K. Jia. Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps. In Conference on Neural Information Processing Systems (NeurIPS), 2020.
[9] Y. Xu, W. Wan, J. Zhang, H. Liu, Z. Shan, H. Shen, R. Wang, H. Geng, Y. Weng, J. Chen, et al. UniDexGrasp: Universal Robotic Dexterous Grasping via Learning Diverse Proposal Generation and Goal-Conditioned Policy. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
[10] W. Wan, H. Geng, Y. Liu, Z. Shan, Y. Yang, L. Yi, and H. Wang. UniDexGrasp++: Improving Dexterous Grasping Policy Learning via Geometry-aware Curriculum and Iterative Generalist-Specialist Learning. arXiv preprint arXiv:2304.00464, 2023.
[11] D. Turpin, L. Wang, E. Heiden, Y.-C. Chen, M. Macklin, S. Tsogkas, S. Dickinson, and A. Garg. Grasp'D: Differentiable Contact-rich Grasp Synthesis for Multi-fingered Hands. In Proceedings of the European Conference on Computer Vision (ECCV), 2022.
[12] P. Li, T. Liu, Y. Li, Y. Geng, Y. Zhu, Y. Yang, and S. Huang. GenDexGrasp: Generalizable Dexterous Grasping. In International Conference on Robotics and Automation (ICRA), 2023.
[13] L. Shao, F. Ferreira, M. Jorda, V. Nambiar, J. Luo, E. Solowjow, J. A. Ojea, O. Khatib, and J. Bohg. UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands. In Robotics and Automation Letters, 2020.
[14] Z. Xu, B. Qi, S. Agrawal, and S. Song. AdaGrasp: Learning an Adaptive Gripper-Aware Grasping Policy. In International Conference on Robotics and Automation (ICRA), 2021.
[15] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes. In International Conference on Robotics and Automation (ICRA), 2021.
[16] S. Brahmbhatt, A. Handa, J. Hays, and D. Fox. ContactGrasp: Functional Multi-finger Grasp Synthesis from Contact. In International Conference on Intelligent Robots and Systems (IROS), 2019.
[17] J. Varley, J. Weisz, J. Weiss, and P. Allen. Generating Multi-Fingered Robotic Grasps via Deep Learning. In International Conference on Intelligent Robots and Systems (IROS), 2015.
[18] J. Lundell, E. Corona, T. N. Le, F. Verdoja, P. Weinzaepfel, G. Rogez, F. Moreno-Noguer, and V. Kyrki. Multi-FinGAN: Generative Coarse-To-Fine Sampling of Multi-Finger Grasps. In International Conference on Robotics and Automation (ICRA), 2021.
[19] T. Liu, Z. Liu, Z. Jiao, Y. Zhu, and S.-C. Zhu. Synthesizing Diverse and Physically Stable Grasps with Arbitrary Hand Structures using Differentiable Force Closure Estimator. In Robotics and Automation Letters, 2021.
[20] D. Turpin, T. Zhong, S. Zhang, G. Zhu, E. Heiden, M. Macklin, S. Tsogkas, S. Dickinson, and A. Garg. Fast-Grasp'D: Dexterous Multi-finger Grasp Generation Through Differentiable Simulation. In International Conference on Robotics and Automation (ICRA), 2023.
[21] K. Sohn, H. Lee, and X. Yan. Learning Structured Output Representation using Deep Conditional Generative Models. In Advances in Neural Information Processing Systems (NeurIPS), 2015.
[22] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Advances in Neural Information Processing Systems (NeurIPS), 2017.
[23] K. Li, N. Baron, X. Zhang, and N. Rojas. EfficientGrasp: A Unified Data-Efficient Learning to Grasp Method for Multi-fingered Robot Hands. In Robotics and Automation Letters, 2022.
[24] C. Pan, B. Okorn, H. Zhang, B. Eisner, and D. Held. TAX-Pose: Task-Specific Cross-Pose Estimation for Robot Manipulation. In Conference on Robot Learning (CoRL), 2022.
[25] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The Graph Neural Network Model. In Transactions on Neural Networks, 2009.
[26] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun. Graph Neural Networks: A Review of Methods and Applications. In AI Open, 2020.
[27] I. Huang, Y. Narang, R. Bajcsy, F. Ramos, T. Hermans, and D. Fox. DefGraspNets: Grasp Planning on 3D Fields with Graph Neural Nets. In International Conference on Robotics and Automation (ICRA), 2023.
[28] A. Garcia-Garcia, B. S. Zapata-Impata, S. Orts-Escolano, P. Gil, and J. Garcia-Rodriguez. TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors. In International Joint Conference on Neural Networks (IJCNN), 2019.
[29] X. Lou, Y. Yang, and C. Choi. Learning Object Relations with Graph Neural Networks for Target-Driven Grasping in Dense Clutter. In International Conference on Robotics and Automation (ICRA), 2022.
[30] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR), 2017.
[31] J. Varley, C. DeChant, A. Richardson, J. Ruales, and P. Allen. Shape Completion Enabled Robotic Grasping. In International Conference on Intelligent Robots and Systems (IROS), 2017.
[32] J. Lundell, F. Verdoja, and V. Kyrki. Robust Grasp Planning Over Uncertain Shape Completions. In International Conference on Intelligent Robots and Systems (IROS), 2019.
[33] X. Yan, J. Hsu, M. Khansari, Y. Bai, A. Pathak, A. Gupta, J. Davidson, and H. Lee. Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations. In International Conference on Robotics and Automation (ICRA), 2018.
[34] M. Van der Merwe, Q. Lu, B. Sundaralingam, M. Matak, and T. Hermans. Learning Continuous 3D Reconstructions for Geometrically Aware Grasping. In International Conference on Robotics and Automation (ICRA), 2020.
[35] J. Bohg and D. Kragic. Learning Grasping Points with Shape Context. In Robotics and Autonomous Systems, 2010.
[36] B. Calli, A. Singh, J. Bruce, A. Walsman, K. Konolige, S. Srinivasa, P. Abbeel, and A. M. Dollar. Yale-CMU-Berkeley Dataset for Robotic Manipulation Research. In The International Journal of Robotics Research, 2017.
[37] S. Brahmbhatt, C. Ham, C. C. Kemp, and J. Hays. ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[38] Y. Zhou, C. Barnes, J. Lu, J. Yang, and H. Li. On the Continuity of Rotation Representations in Neural Networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[39] J. J. Moré and D. C. Sorensen. Computing a Trust Region Step. In SIAM Journal on Scientific and Statistical Computing, 1983.
[40] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State. Isaac Gym: High Performance GPU Based Physics Simulation For Robot Learning. In Advances in Neural Information Processing Systems Datasets and Benchmarks Track (NeurIPS D&B), 2021.
[41] A. T. Miller and P. K. Allen. GraspIt! A Versatile Simulator for Robotic Grasping. In Robotics & Automation Magazine, 2004.
A Method Details

A.1 Autoregressive Matching Module Details

The downprojected geometric embeddings are passed to an autoregressive matching module, M_i, consisting of 5 layers, each responsible for predicting the index of the object vertex c_n where keypoint k_n makes contact, given the outputs for keypoints k_0, ..., k_{n-1}. Each layer n concatenates the embedding of the n-th keypoint of the end-effector with the object embedding. Then, it calculates the relative distance map of each object vertex to each of the n-1 object vertices where the previous n-1 keypoints make contact. Note that this is done via teacher forcing: instead of using the predictions of the previous n-1 layers, we use the previous n-1 ground-truth contact points. This avoids error propagation during training. The relative distance maps are stacked and concatenated with the object and n-th keypoint embeddings. This constitutes the input to an MLP that predicts a binary classification over the object vertices, indicating the predicted n-th contact point c_n.

A.2 Keypoint Sampling at Inference Time

As noted above, the independent unnormalized distribution for k = 0 is leveraged to sample keypoint 0, which commences the autoregressive inference. We use the 0-th dimension as a scoring mechanism for sampling high-likelihood points where keypoint 0 makes contact. This is then passed into the model as the previous contact for keypoint 1. At inference time, at the n-th step, teacher forcing is substituted with passing in the n-1 predicted contact vertices. Finally, the end result is a tensor of the 6 contact coordinates on the object graph. As previously mentioned, grasping is a multi-modal distribution, and our model should be able to sample from the various modes. In our method, this can be achieved straightforwardly by sampling a variety of starting top-K points for keypoint 0. The intuition is that diverse, yet likely, starting points for keypoint 0 will condition subsequent predicted points differently and ultimately yield different grasp modes. For our experiments, we sampled 4 such top-K points, namely top-0, 20, 50, and 100, in order to explore the capacity of our method to generate diverse grasps. A more sophisticated sampling algorithm, such as beam search, could be applied here; however, we empirically achieved sufficient diversity through multimodal sampling of keypoint 0.
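A compact sketch of this seeding strategy (ours) is given below; each seed index conditions one autoregressive rollout and yields one grasp mode:

```python
import numpy as np

def sample_keypoint0_seeds(pf0_scores: np.ndarray, ranks=(0, 20, 50, 100)):
    """Select diverse starting contacts for keypoint 0 from the
    unnormalized score map P_F0 (one score per object vertex),
    following the top-[0, 20, 50, 100] scheme described above."""
    order = np.argsort(-pf0_scores)      # vertex indices, best score first
    return [int(order[r]) for r in ranks]

# Four high-likelihood but distinct seeds over a 2048-vertex object graph.
seeds = sample_keypoint0_seeds(np.random.rand(2048))
```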
B Additional Experimental Details

B.1 Robustness Experiments

To investigate the robustness of the learned representations, we conducted a set of experiments where we tested our trained model with noisy object point clouds, partial point clouds, and partial point clouds with noise added as well:

Noisy point clouds. For this experiment, we processed the 10 object point clouds of our evaluation set by adding Gaussian noise with standard deviation 0.001, clipped to a one-standard-deviation interval, to each of the points. Our evaluation process was then repeated with these as input zero-shot, i.e., grasps were generated and evaluated with the same process.

Partial point clouds. For this experiment, we emulated a table-top scenario where objects placed on a table would be missing the bottom of their surface. To achieve this, for each object point cloud, we defined a z-plane $z_{thres} = \frac{z_{max} - z_{min}}{6}$, where $z_{min}, z_{max}$ are the minimum and maximum z-values found in each object point cloud, respectively. We then remove all points with $z < z_{thres}$ in order to emulate such a table effect. The resulting point clouds are again used zero-shot with our model to predict grasps.

Noisy partial point clouds. For this experiment, the table-top-emulating partial point clouds generated for the previous experiment are augmented with Gaussian noise of standard deviation 0.001, clipped to a one-standard-deviation interval. Grasp generation again occurs zero-shot with our model, and the evaluation process remains the same as in all other experiments.

Comparative results for all 3 experiments against noiseless inputs can be viewed in Tab. 3.

Table 3: Comparisons between noiseless, noisy, partial, and noisy partial object point cloud inputs.

Augmentation | Success (%) ↑ ezgripper / barrett / shadowhand | Diversity (rad) ↑ ezgripper / barrett / shadowhand
noiseless | 72.5 / 90.0 / 75.0 | 0.188 / 0.249 / 0.205
noisy | 75.0 / 95.0 / 62.5 | 0.183 / 0.245 / 0.196
partial | 67.5 / 67.5 / 65.0 | 0.181 / 0.207 / 0.197
noisy partial | 65.0 / 75.0 / 62.5 | 0.143 / 0.227 / 0.212

We observe that our model generally demonstrates robustness to noise, with performance actually increasing for two out of the three evaluated end-effectors. Partial point clouds cause the performance to drop, as expected; however, the model still performs at a good level across embodiments.
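For reference, the three augmentations can be implemented along the following lines (a sketch; the clipping convention and the literal z-threshold formula follow our reading of the text above):

```python
import numpy as np

def noisy(points: np.ndarray, sigma: float = 0.001) -> np.ndarray:
    """Gaussian noise with std 0.001, clipped to a one-std interval."""
    noise = np.clip(np.random.normal(0.0, sigma, points.shape), -sigma, sigma)
    return points + noise

def partial(points: np.ndarray) -> np.ndarray:
    """Emulate a table top by dropping points below z_thres."""
    z = points[:, 2]
    z_thres = (z.max() - z.min()) / 6.0
    return points[z >= z_thres]

def noisy_partial(points: np.ndarray) -> np.ndarray:
    """Crop first, then perturb, as in the third experiment."""
    return noisy(partial(points))
```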
B.2 PointNet++ Ablation

Our choice of GCN as a geometry encoder is, of course, not the only architectural option available for representing 3D geometry features, with PointNet++ [22] being a popular choice in the literature. In this ablation, we investigate the efficacy of GCN in the multi-embodiment grasping setup compared to PointNet++ by replacing both our GCN object and end-effector encoders with a PointNet++ architecture (we used the implementation from https://github.com/yanx27/Pointnet_Pointnet2_pytorch).

Table 4: Comparison between GCN and PointNet++ encoder choices.

Encoder | Success (%) ↑ ezgripper / barrett / shadowhand | Diversity (rad) ↑ ezgripper / barrett / shadowhand
GCN [30] | 75.0 / 90.0 / 72.5 | 0.188 / 0.249 / 0.205
PointNet++ [22] | 75.0 / 70.0 / 65.0 | 0.154 / 0.223 / 0.151

The results in Tab. 4 show that the GCN encoder variant outperforms the PointNet++ one for the 3-finger and 5-finger grippers, while performing on par with it for the 2-finger gripper. The GCN variant also shows higher diversity of grasps for all 3 end-effectors.

B.3 Non-Shared Weights Ablation

For our main method, we assumed shared weights for the representations used in the autoregressive modules predicting each keypoint contact. However, it is of interest to investigate how performance is impacted if each autoregressive module is free to influence the geometry representation for the keypoint it is responsible for. We thus disentangled the encoding weights for the autoregressive modules by passing a separate end-effector encoder to each.

Table 5: Comparison between shared and non-shared weights of the end-effector encoder for autoregressive learning.

Ablation | Success (%) ↑ ezgripper / barrett / shadowhand | Diversity (rad) ↑ ezgripper / barrett / shadowhand
Shared weights | 75.0 / 90.0 / 72.5 | 0.188 / 0.249 / 0.205
Non-shared weights | 70.0 / 82.5 / 60.0 | 0.165 / 0.259 / 0.163

The comparison is provided in Tab. 5 and indicates that training end-to-end with a shared end-effector encoder for all keypoint predictions is still the significantly better-performing choice. The shared-weights variant performs 5% to 12.5% better across the 3 sampled embodiments than the non-shared-weights ablation.

B.4 Comparison to GraspIt!

In addition to the baselines presented in the main paper, we include a comparison of our method to the simulated annealing planner in GraspIt! [41] for the Barrett end-effector. We produced the 4 grasps with the highest quality as per GraspIt!'s quality metric and converted them to the format expected by the IsaacGym evaluation protocol we used for our method. Results can be found in Tab. 6 below.

Table 6: Comparison of GeoMatch and GraspIt!'s simulated annealing planner for barrett.

Method | Success (%) ↑ barrett | Diversity (rad) ↑ barrett
GraspIt! [41] | 89.99 | 0.00347
GeoMatch (ours) | 90.0 | 0.24900

B.5 Impact of the Number of Embodiments Included at Training

In order to better understand the impact of the number of embodiments seen by the unified grasping policy and to investigate potential diminishing returns, we trained variants of the policy with an increasing number of end-effectors and compared them all on Shadowhand. The results presented in Tab. 7 suggest that training on multi-embodiment data and adding more end-effectors during training indeed increases performance significantly.

Table 7: Comparison between an increasing number of end-effectors (from ezgripper, barrett, robotiq, allegro, and shadowhand) seen at training time. Evaluation is performed on an in-distribution end-effector (Shadowhand) but on our unseen set of objects.

End-effectors in training set | Success (%) ↑ on Shadowhand
1 | 40.0
2 | 47.5
3 | 55.0
4 | 55.0
5 (all) | 72.5

C Implementation Details

All experiments were implemented using an Adam optimizer with a learning rate of 1e-4 for 200 epochs. An assortment of GPUs was used, namely RTX 3090, V100, and T4. The hyperparameters used were provided in the main paper, but for completeness, we include all of them here. The GNN used had 3 hidden layers of size 256. The output feature size of the GNN encoder was 512. The two parts of the loss were weighted by 0.5 each, while the two positive weights used for the two BCE losses were 500 and 200 for the independent distributions and the marginals, respectively. The dataset used was the subset of MultiDex used by [12] to train the CMap-CVAE model of their approach, which contains 50,802 diverse grasping poses for 5 hands and 58 objects from YCB and ContactDB. The training set contained 38 objects and the validation set the remaining 10. The projection layer was a linear layer without bias with an output dimension of 64, and each of the MLP autoregressive modules had 3 hidden layers of size 256.

For the IK, SciPy's TRF algorithm was used, where each resulting set of predicted keypoints was moved 5 mm away from the surface of the object along the direction of the normal in order to form a pre-grasp pose. The initial pose guess provided was a heuristic calculated by orienting the palm of the gripper to align with the negative of the normal on the object surface at the closest surface point. For evaluation, 4 grasps per object-gripper pair were sampled by selecting the top-[0, 20, 50, 100] most likely keypoint 0.

The Isaac Gym based evaluation scripts from [12] were used as-is, aside from the one Adam step of force closure, where the step size used was 0.05 in order to make the force closure smoother and less abrupt.
WmF-fagWdD

SCALE: Causal Learning and Discovery of Robot Manipulation Skills using Simulation

Tabitha Edith Lee∗† Shivam Vats∗ Siddharth Girdhar Oliver Kroemer
The Robotics Institute, Carnegie Mellon University
{tabithalee, svats, sgirdhar, okroemer}@cmu.edu
∗Equal contribution. †Corresponding author.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: We propose SCALE, an approach for discovering and learning a diverse set of interpretable robot skills from a limited dataset. Rather than learning a single skill which may fail to capture all the modes in the data, we first identify the different modes via causal reasoning and learn a separate skill for each of them. Our main insight is to associate each mode with a unique set of causally relevant context variables that are discovered by performing causal interventions in simulation. This enables data partitioning based on the causal processes that generated the data; compressed skills that ignore the irrelevant variables can then be trained. We model each robot skill as a Regional Compressed Option, which extends the options framework by associating a causal process and its relevant variables with the option. Modeled as the skill Data Generating Region, each causal process is local in nature and hence valid over only a subset of the context space. We demonstrate our approach for two representative manipulation tasks: block stacking and peg-in-hole insertion under uncertainty. Our experiments show that our approach yields diverse skills that are compact, robust to domain shifts, and suitable for sim-to-real transfer.

Keywords: skill discovery, causal learning, manipulation

1 Introduction

We want robots to help and work alongside humans in their homes, kitchens, and restaurants. However, outside of structured environments, robots currently struggle to reliably perform even some of the basic manipulation tasks that humans can do with ease. Why are humans so much better, despite the vast diversity of objects and their complex interactions that they potentially need to reason about? First, humans usually know multiple ways to solve a task, making them robust to failures and variations in the environment. For example, if a tight jar doesn't open with our bare hands, we may use a piece of cloth to improve our grip. Second, humans excel at selectively attending [1] to only a small part of the environment that is relevant to the task. Selective attention significantly reduces the computational complexity of reasoning and allows us to handle complicated situations.

Prior works in manipulation skill learning have leveraged these two observations separately. Most methods [2, 3, 4, 5, 6] learn skills by associating each skill with a sub-goal, where the sub-goals are hand-designed or learned from demonstrations. Once the sub-goals have been assigned, feature selection [7, 8] and abstraction selection [9, 10] can be used to reduce the complexity of skill learning. However, such approaches are quite sensitive to the sub-goals and struggle to distinguish between different strategies for achieving the same goal. Our main insight is to associate a skill with not just a sub-goal, but also with the variables that are causally relevant to it. For example, opening a jar with our bare hands is a skill distinct from opening it with the help of a piece of cloth.
Hence, these twostrategies should be represented as two distinct skills even though they achieve the same goal.Based on this principle, a manipulation task involving nvariables can have up to 2nskills, based onvariable subsets being causally relevant. This is a very large space to search for skills and not all sub-sets may correspond to a useful skill. Hence, we propose SCALE ( Skills from CAusalLEarning), anefficient approach for robot skill learning through causal feature selection in simulation.1Instead ofna ̈ıvely generating data, in our approach, the robot interacts with the simulator by conducting causalinterventions. This elicits the causal features for completing a task under different settings, yieldinga diverse and compact library of skills. Our approach learns skills that are described by physicallymeaningful properties without spurious variables that would be related to irrelevant objects.Our contributions of this work are two-fold. First, we introduce SCALE, an algorithm for learning arobot skill library from causal interventions in simulation. Second, we conduct a variety of experi-ments that demonstrate that SCALE outperforms baseline approaches for two manipulation domainsof block stacking and sensorless peg insertion. As a part of these experiments, we also demonstratesim-to-real transfer of the skills learned by SCALE for block stacking.2 Related WorkRobot skill learning. Building robots that can solve a wide variety of complex tasks is one of thefundamental problems in robotics. A popular approach is to learn skills parameterized by the taskparameters as these can generalize over related tasks. Prior works [11, 12] show such parameter-ized skills lie on a low dimensional piecewise-smooth manifold in the context space and identifythis structure using ISOMAP [13]. For higher-dimensional problems, it becomes infeasible to learndirectly in the full context space. One approach is to learn a library of simple parameterized skillswhich can be composed to solve more complex tasks [14, 15, 16, 17]. Recent works [18, 19] pro-pose a differentiable attention mechanism to learn context-specific attention, but these have beenevaluated only in relatively small domains. Popular methods for unsupervised skill discovery in-clude graph-based methods [20, 21] that seek to build a graph of skills to cover the task space andinformation-theoretic methods [22, 23, 6] that seek to maximize the diversity of skills.Causality in robotics and reinforcement learning. Causality is the science of cause and ef-fect [24, 25, 26]. Although the advantages of causal inference and discovery within the biomedicalsciences, economics, and genomics have been well-established [27, 28], the integration of causalitywithin machine learning is nascent [29, 30]. In robotics, causality-based approaches are partic-ularly under-explored despite the potential advantages of greater reasoning and learning capabil-ities [31], particularly through structure and transfer learning [32]. Most similar to our work isthat of CREST [33], an algorithm for identifying features for a robot policy through causal inter-ventions. Our algorithm SCALE leverages the work of Lee et al. on CREST for determining thecausally relevant variables for each robot skill. Causality has also empowered learning the structureof physical systems from videos [34] and explanations for robot failures [35]. 
Within reinforcement learning more broadly, causality plays a central role in improving performance through greater structure [36], learning latent factors in dynamics via causal curiosity [37], learning invariant policies [38], and learning a dynamics model that can yield state abstractions [39].

Intuitive physics. Please see App. B for a discussion of how SCALE relates to intuitive physics.

3 Preliminaries

The robot learns a set of skills K = {K_1, . . . , K_K}, where each skill solves a distribution of manipulation tasks. Each task is modeled as a manipulation MDP [40], M := (S, A, R, T, γ, τ), where s ∈ S is the state space, a ∈ A is the action space, R is the reward function, T is the transition function, γ is the discount factor, and τ is additional task information. Tasks are solved if the final reward R_f > R_S, where R_S is a solved threshold.

Options. Each skill K is a parameterized option [41, 10]. An option O := (π, I, β) is defined using three components: (1) the option control policy π(a|s); (2) an initiation set I = {R_f > R_S | s} that specifies where the final reward R_f solves the task when taking option O in state s using option policy π; and (3) a termination condition β(s) that specifies when the option concludes. For the termination condition in this work, our skills execute open-loop with fixed duration.

Context. We define the context c ∈ C as a set of variables C that fully specify the manipulation task. The context space C := S × τ generalizes the state space to include geometric and other time-invariant task properties defined by τ. Each skill requires only a subset of the full context, determined via causal feature selection (cf. Sec. 5.1).

Contextual policies. We use a hierarchical approach [42] to decompose the option control policy into an upper-level policy π_u(θ|c) and a lower-level policy π_l(a|s, θ). Given a context c ∈ C, π_u : c ↦ θ specifies the parameters θ for the lower-level policy π_l. For example, π_l could be a Cartesian-space impedance controller, where θ specifies the sequence of waypoints to be followed by the robot end-effector. For our work, we assume the lower-level controller π_l is given, and we learn the upper-level policy π_uk. The lower-level controller is shared across the different skills.

Compressed Context and Feature Selection. SCALE learns compressed skills that only use causally relevant context variables, as many will be unimportant. For our work, we disregard dimensions of the context space that are not chosen by causal feature selection (cf. Sec. 5.1), leading to a compressed context space ĉ that is obtained by selecting the dimensions of the full context space that correspond to the relevant variables of interest.

Causal Reasoning in Simulation. SCALE leverages a simulator with the key capability of interacting with scenes through context interventions, which enables the causal learning in SCALE. For this reason, we formalize the simulator as a causal reasoning engine W := (C_S, T), where C_S is the scene structural causal model (SCM) and T is the transition model. App. C provides greater discussion of this formalization. This formalism addresses SCALE's assumption that the simulator is capable of answering questions about scene interventions, i.e., constructing new scenes with a change to one variable to assess if there is a change (cf. Sec. 5.1). These variables are required to be intervenable within the simulator, but not all variables need to be intervenable. For instance, gravity is a simulation variable, but for this work it is not considered as a candidate for causal reasoning; therefore, it does not need to be intervenable.
4 Skill Formulation

4.1 Regional Compressed Option

In our work, we formalize each robot skill K as a Regional Compressed Option (RCO), where K := (π_k, Pre, β, D): π_k is the option control policy, Pre is the precondition, β is the termination condition, and D is the data generating region (DGR). In this model, the policy π_k(a|ĉ_Ak) uses the compressed context ĉ_Ak, which is obtained by selecting dimensions of the context space according to the relevant variable set A_k ⊆ C (cf. Sec. 5.1). The learned, upper-level policy is π_uk(θ|ĉ_Ak). The precondition Pre(c) = P(R_f > R_S | c) is a probabilistic initiation set [43].
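A minimal sketch of how such a skill could be carried around in code is shown below; the field names and types are our own illustration of the tuple, not the authors' implementation:

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass
class RegionalCompressedOption:
    """Illustrative container mirroring K := (pi_k, Pre, beta, D)."""
    policy: Callable        # pi_k(a | c_hat_Ak), acting on the compressed context
    precondition: Callable  # Pre(c) = P(R_f > R_S | c), probabilistic initiation set
    termination: Callable   # beta; in this work, open-loop with fixed duration
    dgr: Callable           # D: inlier test for the data generating region
    policy_vars: FrozenSet[str]  # relevant variable set A_k
    dgr_vars: FrozenSet[str]     # relevant variable set D_k
```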
4.2 Data Generating Region

Our goal is to learn an upper-level policy π_u : c → θ, i.e., a mapping that generates the correct parameters θ for solving the task from any initial context c. We refer to this unknown mapping as a data generating process or causal process. Instead of trying to learn this data generating process directly, which may be difficult when many variables are involved, our main insight is to model it as a mixture of multiple causal processes. Each such process is likely to have a smaller set of relevant variables and thus would be easier to learn. For example, consider the task of opening jars, where the jar could be tight or not tight. We can model the data generating process for this task as a combination of two simpler causal processes: C_1, which uses only your hand, and C_2, which uses a piece of cloth along with your hand. However, these causal processes don't hold for all jar opening tasks: C_1 holds when the jar is not tight, while C_2 holds when the jar is tight. Thus, every causal process is valid only in a subset of the context space. We refer to this subspace D ⊆ C as the data generating region (DGR) of the causal process. Here, D_1 := {not tight} and D_2 := {tight}. The robot learns a separate skill for every such causal process. Furthermore, each skill should be trained using only data from inside its DGR; data lying outside the DGR are generated by a different causal process and are hence out-of-distribution. The DGR uses the compressed context ĉ_Dk obtained from the relevant variable set D_k ⊆ C (cf. Sec. 5.1).

5 Skill Discovery through Causal Reasoning in Simulation

The SCALE algorithm (Fig. 1) comprises two steps: 1) skill dataset generation and 2) skill training. These steps are described in Sec. 5.1 and 5.2, respectively. Algorithm descriptions are in App. E.

Figure 1: Overview of the proposed framework applied to a block stacking task. The robot is given a context space, control policy, task simulator, and task reward. The robot samples a set of contexts to create task instances, which it subsequently solves. The robot then applies interventions on the contexts to identify skill-relevant parameters. Contexts with the same set of policy-relevant parameters come from the same causal model and are hence combined to form data generating regions. Here, we have two causal models: C_1 with relevant variables from the yellow, blue, and red blocks and DGR D_1; and C_2 with relevant variables from the yellow and blue blocks and DGR D_2. Each region is then used to learn a separate skill policy with the corresponding set of policy-relevant parameters. For each skill, we finally learn a set of preconditions within the context space to determine where the skill can ultimately be applied. The pairs of policies and preconditions are then combined to create a skill library for completing the given task.

5.1 Batch Data Generation

First, the robot interacts with the simulator W to collect skill training data. This is done by collecting a batch dataset D_B. The robot samples n random scenes represented by c_i and attempts to determine the lower-level controller parameters θ_i that solve the specific task. In practice, we use Relative Entropy Policy Search (REPS) [44], but any suitable planner, trajectory optimizer, or reinforcement learning algorithm would suffice. Unsolved tasks are disregarded and not collected in D_B.

Causal feature selection. For successfully solved scenes, the relevant variables for the policy, A_i, and the DGR, D_i, are selected using the CREST algorithm [33]. CREST conducts feature selection through causal interventions. Intuitively, a variable is causal if, with all other variables held equal, interventions upon this variable induce a change in the final obtained reward R_f. A spurious variable has no effect on the reward and can thus safely be ignored. To summarize CREST: the process begins by solving a scene, which we refer to as the non-intervened scene. For each context variable, a new value is randomly sampled from a distribution (in the CREST work, this distribution is over the context variable's possible values). A scene is constructed with that intervened value, with all other context variables holding the same, non-intervened values. Then, the robot executes the solution to the non-intervened scene in this intervened scene to obtain an intervened reward. This process repeats a given number of times, and a statistical test assesses how often the intervened rewards differ from the non-intervened reward. If the intervened rewards are frequently no different from the non-intervened reward, the context variable is considered spurious (and causally relevant otherwise). In this work, CREST performs interventions I over a local (e.g., 10%) fraction of the context space C to yield A_i. Similarly, D_i is obtained through interventions over the entire space C. Finally, the batch dataset is appended with the dataset point (c_i, θ_i, A_i, D_i). Note that CREST is not a strict requirement of SCALE. In principle, SCALE requires only a determination of which variables are causally relevant, which CREST provides. Other approaches, such as using causal discovery, are also possible. An important consideration in the choice of approach is whether the context space is disentangled. In our work, we assume a disentangled context space, so the variable-by-variable intervention process of CREST (which also assumes disentangled variables) suffices. If the context space is entangled, then causal disentanglement approaches could first be used.
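To make the intervention procedure concrete, a compressed sketch is given below. Here, solve_scene, construct_scene, execute, and sample_value are hypothetical hooks standing in for the planner, the scene SCM interface, the simulator rollout, and the intervention sampler, and the statistical test is reduced to a simple frequency threshold for brevity:

```python
def crest_relevant_variables(context, solve_scene, construct_scene, execute,
                             sample_value, n_interventions=10, alpha=0.5):
    """Sketch of CREST-style causal feature selection for one solved scene.

    context: dict mapping context variable names to their (non-intervened)
    values. Returns the set of variable names deemed causally relevant.
    """
    theta = solve_scene(context)                        # solve the original task
    base_reward = execute(construct_scene(context), theta)

    relevant = set()
    for name in context:
        changed = 0
        for _ in range(n_interventions):
            intervened = dict(context)
            intervened[name] = sample_value(name)       # intervene on one variable
            # Execute the *non-intervened* solution in the intervened scene.
            reward = execute(construct_scene(intervened), theta)
            changed += abs(reward - base_reward) > 1e-3
        # Frequent reward changes under intervention => causally relevant.
        if changed / n_interventions > alpha:
            relevant.add(name)
    return relevant
```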
Splitting batch data into skill data. After dataset collection, the batch dataset D_B is split into different skill datasets according to the relevant variable sets. In this work, we assign highly occurring batch data that contain the same relevant variables A to the same skill dataset D_k, while also taking the union over all associated D. This assumption may not always hold, but it is sufficient for the tasks we examine in this work. More sophisticated ways of splitting the batch dataset are left for future work.

5.2 Skill Training

The second phase of SCALE trains each skill K_k using D_k. Each skill has relevant variable sets A_k and D_k with task solution datapoints (c, θ).

DGR. The DGR D is trained first on ĉ_Dk using D_k. For this work, we use a one-class SVM to model the DGR, but in principle, any one-class classification algorithm would suffice.

Policy. The policy is trained next. The skill dataset contexts are filtered through the DGR D to obtain inliers c+ for policy training data. This ensures the policy training data are consistent with the underlying causal process. Then, the policy π_uk(θ|ĉ_Ak) is trained using ĉ+_Ak (using A_k) and the corresponding parameters θ+. For our work, policies are learned using regression, but reinforcement learning could also be used [33]. With π_uk learned, the final skill policy π_k(a|ĉ_Ak) is determined.

Preconditions. The preconditions Pre are learned last through policy evaluation. Using the simulator W, contexts c are re-sampled and evaluated with policy π_k to obtain rewards R_f. These evaluation data (c, R_f) are used to train a precondition classifier, yielding Pre. For our work, we use a nonlinear SVM classifier with probability estimates.
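The three training stages above can be summarized in a short scikit-learn sketch (ours). The one-class SVM, the regression policy, and the probabilistic SVM classifier follow the components named in the text; the Bayesian ridge regressor matches the linear variant reported in Sec. 6, while nu and other unstated hyperparameters are assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC
from sklearn.linear_model import BayesianRidge
from sklearn.multioutput import MultiOutputRegressor

def train_skill(contexts_D, contexts_A, thetas):
    """Sketch of Sec. 5.2 for one skill dataset D_k.

    contexts_D: (n, |D_k|) contexts compressed with the DGR variables.
    contexts_A: (n, |A_k|) contexts compressed with the policy variables.
    thetas:     (n, dim_theta) controller parameters that solved each task.
    """
    # 1) DGR: one-class SVM over the DGR-compressed contexts.
    dgr = OneClassSVM(nu=0.1).fit(contexts_D)

    # 2) Policy: keep DGR inliers only, then regress theta from context.
    inliers = dgr.predict(contexts_D) == 1
    policy = MultiOutputRegressor(BayesianRidge())
    policy.fit(contexts_A[inliers], thetas[inliers])
    return dgr, policy

def train_precondition(contexts, rewards, solved_threshold):
    """3) Precondition: probabilistic SVM on policy-evaluation rollouts;
    predict_proba then estimates P(R_f > R_S | c) for skill selection."""
    return SVC(probability=True).fit(contexts, rewards > solved_threshold)
```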
6 Experimental Results

We conduct skill learning experiments with SCALE for block stacking and peg-in-hole insertion tasks with the Franka Emika Panda robot (Fig. 2). Both tasks are emblematic of the high-precision control that is desirable in many industrial applications [45]. We conduct our experiments in NVIDIA Isaac Gym [46, 47], a high-fidelity physics simulator that also serves as our causal reasoning engine W. We use a custom library that implements the scene SCM C_S to facilitate scene creation and interventions. The forward simulation of physics provides the transition model T.

Baselines. We compare SCALE to baseline approaches with monolithic policies (without any skills) for either the full-dimensional context space ("monopolicy") or a reduced context space obtained by using the most commonly occurring CREST result ("crest-monopolicy"). The CREST monopolicy represents naïvely using CREST, ignoring that CREST provides locally different results within the underlying data (a property that SCALE leverages).

Figure 2: SCALE discovers skills for the Franka Emika Panda robot using causal learning in simulation for two manipulation tasks: (a) block stacking and (b) peg-in-hole insertion. In addition to skill learning experiments, we also show how SCALE can yield skills (c) for sim-to-real transfer (App. I); (d) for generalization in downstream tasks, such as stacking a block tower (App. J); and (e) for robustness to task domain shifts (App. L).

Table 1: Skills K_blocks that were discovered for the block stacking task. A and D are the variables used for the skill's policy and DGR, respectively. Data is the quantity of data used for each skill (from a batch dataset of 585 samples, 340 samples were used to train skills). Tsk. Sv. %, shown for both scale-lin and scale-nonlin, is the rate of task solves over the entire context space using only that skill.

Skill | A | D | Data | Tsk. Sv. %, Lin | Tsk. Sv. %, Nonlin.
K1 | {x_w1, y_w1, x_w2, y_w2} | {x_w1, y_w1, x_w2, y_w2, h_2} | 53 (9.06%) | 65.36% (200) | 18.36% (56)
K2 | {x_w1, y_w1, x_w2, y_w2, h_2} | {x_w1, y_w1, h_1, x_w2, y_w2, h_2} | 272 (46.50%) | 78.76% (241) | 55.88% (171)
K3 | {x_w1, y_w1, ψ_1, x_w2, y_w2, h_2} | {x_w1, y_w1, ψ_1, x_w2, y_w2, h_2} | 15 (2.56%) | 34.31% (105) | 1.31% (4)

6.1 Block Stacking

Task representation. In the block stacking task, the robot starts with a source block (B_1) grasped, and it learns to place it on top of a target block (B_2). To do this, the robot uses a controller π_l that defines the trajectory for the robot end-effector to traverse via impedance control. This trajectory is parameterized by θ_b = [θ_Δx, θ_Δy, θ_Δzu, θ_Δzd]^T ∈ R^4, whose entries specify waypoints the robot follows sequentially. Specifically, these parameters characterize a trajectory where the robot lifts the source block vertically, moves horizontally, descends vertically, and releases the block.

For this task, the context variables C_B are {C_B1, ..., C_BNB, h_π}, which is the union of the context variables for each of N_B = 5 blocks plus the table height h_π upon which the blocks are placed. The context variables for each block b are {x_wb, y_wb, ψ_b, h_b, R_b, G_b, B_b}, yielding a 36-dimensional context space for this problem. Here, x_wb and y_wb are the world x- and y-positions of the block, and the block's orientation is represented by a rotation angle ψ_b around the block's vertical axis (z). The z-dimension (height) of the block is h_b. Additional experimental details are available in App. H.

Skill learning results: variable selection. From a batch dataset of 585 samples, SCALE found the skill library K_blocks = {K1, K2, K3} shown in Tab. 1, which was learned using 340 samples of the dataset. These 340 samples were selected for being the most commonly occurring within the dataset, based on a heuristic threshold. Even though there are five blocks and 36 possible variables, the skills generally consisted of a much smaller subset of variables, relating to the geometry of the source and target blocks. Note that K2's relevant variables for the policy, A_K2, are consistent with earlier work by Lee et al. [33] for this domain; this is generally considered to be the "ground truth" variable result for unobstructed block motion in this case. Skill K1 could be seen as a version of K2 for when h_2 is not needed. Rarely, the source block's rotation ψ_1 became important (e.g., the source block's final pose was not fully stable when stacked on the target block), and thus a skill emerges with this variable (K3). As expected, neither block color nor table height appears as a relevant variable.

Table 2: Task evaluation results for using the skill library K_blocks for the block stacking task. Ctrl. is the approach control (skills or one monolithic policy). Fn. Cl. is the approach's function class. Linear approaches use Bayesian ridge regression, whereas nonlinear methods consist of a multilayer perceptron with a 16x16x16 architecture using ReLU activations. Task Solve % is the rate of task solves over the entire context space using the approach. Methods within ±2% (the stochasticity of the simulator) of the best approach are bold. |A| is the quantity of input variables used for the approach's policy. Data is the amount of training data used for the approach. A ground truth policy is also shown, using all context variables and additional domain knowledge.

Approach | Ctrl. | Fn. Cl. | Task Solve % | |A| | Data
scale-lin (ours) | 3 skills | Linear | 90.49% (276) | 4/5/6 | 340
monopolicy-lin-all | 1 policy | Linear | 85.95% (263) | 36 | 585
crest-monopolicy-lin-all | 1 policy | Linear | 89.87% (275) | 5 | 585
scale-nonlin (ours) | 3 skills | Nonlinear | 63.40% (194) | 4/5/6 | 340
monopolicy-nonlin-all | 1 policy | Nonlinear | 10.13% (31) | 36 | 585
crest-monopolicy-nonlin-all | 1 policy | Nonlinear | 60.78% (186) | 5 | 585
ground-truth-policy | 1 policy | Nonlinear | 95.75% (293) | * | –

Skill learning results: task evaluation. We evaluate the skill library K_blocks over the entire task distribution and show the results in Tab. 2. That is, for each context sample, the robot evaluates each skill's precondition and selects the skill with the highest probability of success; a minimal sketch of this selection rule is given below. The suffix "-all" denotes that the entire batch dataset is used for the approach.
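The following sketch implements the precondition-based selection rule just described, assuming each skill object exposes its precondition classifier as `pre` with an sklearn-style `predict_proba`; the data layout is hypothetical.

```python
import numpy as np

def select_skill(skills, c):
    """Pick the skill whose precondition assigns context c the highest
    probability of task success."""
    success_probs = [skill.pre.predict_proba(c.reshape(1, -1))[0, 1]
                     for skill in skills]
    return skills[int(np.argmax(success_probs))]
```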
For both function classes, SCALE yields an approach that outperforms the full-dimensional policies and is generally comparable to the CREST-reduced policies. However, the CREST-reduced policies only learn one approach to solving the task, whereas SCALE learns three. The overall best performing approach was scale-lin (90.49%), with similar performance to the CREST baseline. Performance across all nonlinear approaches was generally lower. App. H details the SCALE skill selection and further ablations.

Sim-to-real experiment. We transfer the skills learned by SCALE and our baselines to a real Franka Panda robot without any fine-tuning. As discussed in App. I, SCALE outperforms the baselines.

6.2 Sensorless Peg-in-Hole Insertion

Our second domain is peg-in-hole insertion under sensing uncertainty. It requires the robot to insert a cuboidal peg of cross-section 1 cm × 1 cm into a cuboidal hole of cross-section 1.3 cm × 1.3 cm. The robot gets a noisy initial position of the hole, with the noise sampled from a Gaussian distribution N(0, 0.3^2 cm^2). No further sensory observations are available. Due to this uncertainty, a naïve strategy of directly trying to push the peg down at the observed location of the hole achieves a success rate of only 34%. To address this, the robot should take uncertainty-reducing actions [48] by initiating contact with the environment (e.g., a fixture next to the hole). Our goal in this experiment is to learn such skills autonomously.

Task representation. Each assembly task has 4 axis-aligned cuboidal fixtures (i.e., walls) of fixed dimensions around the hole. The 8-dimensional context variables C_P are {x_1, y_1, ..., x_4, y_4}, containing the (x, y) coordinates of these fixtures with respect to the hole. The positions are different in every task, but it is always possible for the robot to localize against any of the walls to complete the task. We use a 6-parameter policy space: three (Δx, Δy, Δz) actions executed in sequence in the robot's end-effector frame. In every policy, the Δz's are designed to move the peg down, while Δx and Δy are parameters that are learned using RL; a sketch of this parameterization follows. Additional experimental details are available in App. K.
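To make the parameterization concrete, the following sketch rolls out the 6-parameter policy described above. The helper `move_ee_delta` (a relative end-effector motion) and the fixed downward offsets are hypothetical stand-ins, not the authors' values.

```python
import numpy as np

def execute_insertion_policy(theta, move_ee_delta):
    """Three sequential (dx, dy, dz) end-effector motions: the (dx, dy)
    of each motion comes from the learned theta; each dz is a fixed
    downward component."""
    assert theta.shape == (6,)
    dz_fixed = (-0.01, -0.01, -0.02)  # assumed fixed downward offsets
    for i, dz in enumerate(dz_fixed):
        dx, dy = theta[2 * i], theta[2 * i + 1]
        move_ee_delta(dx, dy, dz)  # relative motion in the end-effector frame
```

Localizing against a wall then corresponds to learned (dx, dy) values that press the peg into a fixture before the final downward motion.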
Skill learning results: variable selection. Table 3 enumerates the skills K_peg = {K1, ..., K5} discovered by SCALE. Skills K2–K5 localize against one of the 4 walls. For each such skill, only the wall being used for localization is relevant to the skill, and the other walls can be ignored. Consequently, the set of relevant variables for these skills contains only the distance to the wall that the skill localizes against. For the linear case, all of these skills except for K3 have a high success rate, whereas success rates are slightly lower for the nonlinear approach. Interestingly, SCALE also discovers a skill K1 that has an empty set of relevant variables. For more discussion of this skill, see App. K.

Table 3: Skills K_peg that were discovered for the peg-in-hole insertion task. Columns are the same as in Tab. 1, except Data represents which 168 samples were used to train skills (from a batch dataset of 210 samples).

Skill | A | D | Data | Task Solve %, Lin | Task Solve %, Nonlin.
K1 | {} | {x_1, y_2, y_3, x_4} | 56 (26.67%) | 64.84% (166) | 61.72% (158)
K2 | {x_4} | {x_4} | 25 (11.90%) | 97.66% (250) | 84.38% (216)
K3 | {x_1} | {x_1} | 27 (12.86%) | 44.53% (114) | 84.77% (217)
K4 | {y_3} | {y_3} | 28 (13.33%) | 94.14% (241) | 82.81% (212)
K5 | {y_2} | {y_2} | 32 (15.24%) | 98.44% (252) | 79.69% (204)

Table 4: Task evaluation results for using the skill library K_peg for peg insertion (columns as in Tab. 2).

Approach | Ctrl. | Fn. Cl. | Task Solve % | |A| | Data
scale-lin (ours) | 5 skills | Linear | 96.48% (247) | 0/1/1/1/1 | 168
monopolicy-lin-all | 1 policy | Linear | 62.50% (160) | 8 | 210
crest-monopolicy-lin-all | 1 policy | Linear | 62.89% (161) | 1 | 210
scale-nonlin (ours) | 5 skills | Nonlinear | 88.67% (227) | 0/1/1/1/1 | 168
monopolicy-nonlin-all | 1 policy | Nonlinear | 12.89% (33) | 8 | 210
crest-monopolicy-nonlin-all | 1 policy | Nonlinear | 55.47% (142) | 1 | 210

Skill learning results: task evaluation. Table 4 presents the task evaluation of the skill library K_peg for 256 randomly sampled tasks. For both linear and nonlinear cases, SCALE outperforms both baselines. The low success of monopolicy-nonlin-all is likely due to insufficient data owing to a larger network. The most common CREST result was the variable x_4 (21.90%), so this was used for the CREST baselines; however, it only localizes against one wall. The improvement of SCALE over the CREST baselines implies that the SCALE skills benefit from the DGRs through higher-quality training data, whereas the CREST approaches use the entire dataset despite most samples having a CREST result that differs from x_4. For details of the SCALE skill selection and further ablations, see App. K.

Domain shift experiment. To evaluate the out-of-distribution generalization capabilities of SCALE, we evaluate the skills on a test distribution that is significantly harder than the training distribution. All approaches see a degradation in performance, but ours is more robust. See App. L for details.

7 Conclusion

We present SCALE, an approach for the discovery of compact, diverse robot manipulation skills from causal interventions in simulation. These skills arise from the skill DGR: a region that captures the underlying data generating process. We demonstrate the advantages of skill libraries discovered with SCALE for two simulation domains as well as on a real robot system.

Limitations and future work. SCALE assumes the robot has access to a causal reasoning engine. We provide this via simulation and scene structural causal models, but these models could be learned via causal discovery. SCALE primarily learns from batch dataset collection; active learning of skills would reveal useful behaviors that are statistically uncommon in the batch setting. Lastly, SCALE assumes that the context variables are defined, intervenable, and disentangled. For tasks and domains where these assumptions do not currently hold, future work in adjacent fields may ultimately provide a path forward. Specifically, causal representation learning [29], i.e., learning high-level intervenable variables from low-level observations, could construct a state representation that SCALE can use, and, if a representation is available but entangled, causal disentanglement could be used.

Acknowledgments

We gratefully acknowledge support from the National Science Foundation (Grant No. CMMI-1925130), the U.S. Office of Naval Research (Grant No. N00014-18-1-2775), the U.S. Army Research Laboratory (Grant No. W911NF-18-2-0218 as part of the A2I2 Program), and the NVIDIA NVAIL Program. We also gratefully thank our reviewers, whose helpful comments strengthened this work.

References

[1] R. Desimone and J. Duncan. Neural Mechanisms of Selective Visual Attention. Annual Review of Neuroscience, 18(1):193–222, 1995.
[2] G. Konidaris, S. Kuindersma, R. Grupen, and A. Barto. Robot Learning from Demonstration by Constructing Skill Trees. The International Journal of Robotics Research, 31(3):360–375, 2012.
[3] Z. Su, O. Kroemer, G. E. Loeb, G. S. Sukhatme, and S. Schaal. Learning Manipulation Graphs from Demonstrations Using Multimodal Sensory Signals.
IEEE International Conference onRobotics and Automation (ICRA) , 2018.[4] O. Kroemer, C. Daniel, G. Neumann, H. Van Hoof, and J. Peters. Towards Learning Hierar-chical Skills for Multi-Phase Manipulation Tasks. IEEE International Conference on Roboticsand Automation (ICRA) , 2015.[5] A. Chenu, N. Perrin-Gilbert, and O. Sigaud. Divide & Conquer Imitation Learning. IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , 2022.[6] J. Achterhold, M. Krimmel, and J. Stueckler. Learning Temporally Extended Skills in Contin-uous Domains as Symbolic Actions for Planning. arXiv preprint arXiv:2207.05018 , 2022.[7] C. Devin, P. Abbeel, T. Darrell, and S. Levine. Deep Object-Centric Representations for Gener-alizable Robot Learning. IEEE International Conference on Robotics and Automation (ICRA) ,2018.[8] J. Z. Kolter and A. Y . Ng. Regularization and Feature Selection in Least-Squares TemporalDifference Learning. International Conference on Machine Learning (ICML) , 2009.[9] N. Jiang, A. Kulesza, and S. Singh. Abstraction Selection in Model-Based ReinforcementLearning. International Conference on Machine Learning (ICML) , 2015.[10] G. Konidaris and A. Barto. Efficient Skill Learning using Abstraction Selection. InternationalJoint Conference on AI (IJCAI) , 2009.[11] B. C. Da Silva, G. Konidaris, and A. G. Barto. Learning Parameterized Skills. InternationalConference on Machine Learning (ICML) , 2012.[12] B. C. Da Silva, G. Baldassarre, G. Konidaris, and A. Barto. Learning Parameterized MotorSkills on a Humanoid Robot. IEEE International Conference on Robotics and Automation(ICRA) , 2014.[13] J. B. Tenenbaum, V . De Silva, and J. C. Langford. A Global Geometric Framework for Non-linear Dimensionality Reduction. Science , 290(5500):2319–2323, 2000.[14] L. P. Kaelbling and T. Lozano-P ́erez. Learning Composable Models of Parameterized Skills.IEEE International Conference on Robotics and Automation (ICRA) , 2017.[15] Z. Wang, C. R. Garrett, L. P. Kaelbling, and T. Lozano-P ́erez. Learning Compositional Modelsof Robot Skills for Task and Motion Planning. The International Journal of Robotics Research ,40(6-7):866–894, 2021.[16] J. Peters, J. Kober, K. M ̈ulling, O. Kr ̈amer, and G. Neumann. Towards Robot Skill Learning:From Simple Skills to Table Tennis. Joint European Conference on Machine Learning andKnowledge Discovery in Databases , 2013.[17] R. Pahi ˇc, Z. Lon ˇcarevi ́c, A. Gams, and A. Ude. Robot Skill Learning in Latent Space of aDeep Autoencoder Neural Network. Robotics and Autonomous Systems , 135:103690, 2021.9[18] K. Khetarpal, M. Klissarov, M. Chevalier-Boisvert, P.-L. Bacon, and D. Precup. Options ofInterest: Temporal Abstraction with Interest Functions. Proceedings of the AAAI Conferenceon Artificial Intelligence , 34(4):4444–4451, 2020.[19] M. Abdulhai, D.-K. Kim, M. Riemer, M. Liu, G. Tesauro, and J. P. How. Context-Specific Rep-resentation Abstraction for Deep Option Learning. arXiv preprint arXiv:2109.09876 , 2021.[20] A. Bagaria and G. Konidaris. Option Discovery using Deep Skill Chaining. InternationalConference on Learning Representations (ICLR) , 2020.[21] A. Bagaria, J. K. Senthil, and G. Konidaris. Skill Discovery for Exploration and Planning usingDeep Skill Graphs. International Conference on Machine Learning (ICML) , 2021.[22] B. Eysenbach, A. Gupta, J. Ibarz, and S. Levine. Diversity is All You Need: Learning Skillswithout a Reward Function. International Conference on Learning Representations (ICLR) ,2018.[23] A. Sharma, S. S. Gu, S. Levine, V . 
Kumar, and K. Hausman. Dynamics-Aware UnsupervisedDiscovery of Skills. International Conference on Learning Representations (ICLR) , 2020.[24] P. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman. Causation, Prediction, and Search .MIT Press, 2000.[25] J. Pearl. Causality: Models, Reasoning, and Inference . Cambridge University Press, 2ndedition, 2009.[26] J. Peters, D. Janzing, and B. Sch ̈olkopf. Elements of Causal Inference: Foundations andLearning Algorithms . MIT Press, 2017.[27] G. W. Imbens and D. B. Rubin. Causal Inference in Statistics, Social, and Biomedical Sciences .Cambridge University Press, 2015.[28] C. Glymour, K. Zhang, and P. Spirtes. Review of Causal Discovery Methods based on Graph-ical Models. Frontiers in Genetics , 10:524, 2019.[29] B. Sch ̈olkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y . Bengio.Towards Causal Representation Learning. Proceedings of the IEEE , 109(5):612–634, 2021.[30] J. Kaddour, A. Lynch, Q. Liu, M. J. Kusner, and R. Silva. Causal Machine Learning: A Surveyand Open Problems. arXiv preprint arXiv:2206.15475 , 2022.[31] K. C. Stocking, A. Gopnik, and C. Tomlin. From Robot Learning to Robot Understanding:Leveraging Causal Graphical Models for Robotics. Conference on Robot Learning (CoRL) ,2022.[32] O. Ahmed, F. Tr ̈auble, A. Goyal, A. Neitz, Y . Bengio, B. Sch ̈olkopf, M. W ̈uthrich, andS. Bauer. CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and TransferLearning. arXiv preprint arXiv:2010.04296 , 2020.[33] T. E. Lee, J. Zhao, A. S. Sawhney, S. Girdhar, and O. Kroemer. Causal Reasoning in Simula-tion for Structure and Transfer Learning of Robot Manipulation Policies. IEEE InternationalConference on Robotics and Automation (ICRA) , 2021.[34] Y . Li, A. Torralba, A. Anandkumar, D. Fox, and A. Garg. Causal Discovery in Physical Sys-tems from Videos. Advances in Neural Information Processing Systems , 33:9180–9192, 2020.[35] M. Diehl and K. Ramirez-Amaro. Why Did I Fail? A Causal-Based Method to Find Explana-tions for Robot Failures. IEEE Robotics and Automation Letters , 7(4):8925–8932, 2022.10[36] N. R. Ke, A. Didolkar, S. Mittal, A. Goyal, G. Lajoie, S. Bauer, D. Rezende, Y . Bengio,M. Mozer, and C. Pal. Systematic Evaluation of Causal Discovery in Visual Model BasedReinforcement Learning. Proceedings of the NeurIPS 2021 Datasets and Benchmarks Track ,2021.[37] S. A. Sontakke, A. Mehrjou, L. Itti, and B. Sch ̈olkopf. Causal Curiosity: RL Agents Discov-ering Self-supervised Experiments for Causal Representation Learning. International Confer-ence on Machine Learning (ICML) , 2021.[38] A. Sonar, V . Pacelli, and A. Majumdar. Invariant Policy Optimization: Towards StrongerGeneralization in Reinforcement Learning. Learning for Dynamics and Control , 2021.[39] Z. Wang, X. Xiao, Z. Xu, Y . Zhu, and P. Stone. Causal Dynamics Learning for Task-Independent State Abstraction. International Conference on Machine Learning (ICML) , 2022.[40] O. Kroemer, S. Niekum, and G. Konidaris. A Review of Robot Learning for Manipulation:Challenges, Representations, and Algorithms. Journal of Machine Learning Research , 22:30–1, 2021.[41] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and Semi-MDPs: A Framework forTemporal Abstraction in Reinforcement Learning. Artificial Intelligence , 112(1-2):181–211,1999.[42] M. P. Deisenroth, G. Neumann, J. Peters, et al. A Survey on Policy Search for Robotics.Foundations and Trends in Robotics , 2(1–2):1–142, 2013.[43] G. Konidaris, L. Kaelbling, and T. Lozano-Perez. 
Symbol Acquisition for Probabilistic High-Level Planning. International Joint Conference on AI (IJCAI) , 2015.[44] J. Peters, K. M ̈ulling, and Y . Altun. Relative Entropy Policy Search. AAAI Conference onArtificial Intelligence , 2010.[45] J. Luo, E. Solowjow, C. Wen, J. A. Ojea, A. M. Agogino, A. Tamar, and P. Abbeel. Reinforce-ment Learning on Variable Impedance Controller for High-Precision Robotic Assembly. IEEEInternational Conference on Robotics and Automation (ICRA) , 2019.[46] J. Liang, V . Makoviychuk, A. Handa, N. Chentanez, M. Macklin, and D. Fox. GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning. Conference on RobotLearning (CoRL) , 2018.[47] V . Makoviychuk, L. Wawrzyniak, Y . Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin,A. Allshire, A. Handa, et al. Isaac Gym: High Performance GPU-Based Physics SimulationFor Robot Learning. arXiv preprint arXiv:2108.10470 , 2021.[48] R. Brost. Automatic Grasp Planning in the Presence of Uncertainty. IEEE InternationalConference on Robotics and Automation (ICRA) , 1986.[49] J. R. Kubricht, K. J. Holyoak, and H. Lu. Intuitive Physics: Current Research and Controver-sies. Trends in Cognitive Sciences , 21:749–759, 2017.[50] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an Engine of PhysicalScene Understanding. Proceedings of the National Academy of Sciences , 110(45):18327–18332, 2013.[51] D. Ha and J. Schmidhuber. World Models. arXiv preprint arXiv:1803.10122 , 2018.[52] D. Hafner, T. P. Lillicrap, M. Norouzi, and J. Ba. Mastering Atari with Discrete World Models.International Conference on Learning Representations (ICLR) , 2021.11[53] J. Wang, C. Hu, Y . Wang, and Y . Zhu. Dynamics Learning With Object-Centric InteractionNetworks for Robot Manipulation. IEEE Access , 9:68277–68288, 2021.[54] O. Kroemer and G. Sukhatme. Meta-level Priors for Learning Manipulation Skills with SparseFeatures. 2016 International Symposium on Experimental Robotics (ISER) , 2017.[55] S. Katz, A. Tal, and R. Basri. Direct Visibility of Point Sets. ACM SIGGRAPH , 2007.[56] M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al. A Density-based Algorithm for Discover-ing Clusters in Large Spatial Databases with Noise. Proceedings of the Second InternationalConference on Knowledge Discovery and Data Mining (KDD) , 96(34):226–231, 1996.[57] K. Zhang, M. Sharma, J. Liang, and O. Kroemer. A Modular Robotic Arm Control Stack forResearch: Franka-Interface and FrankaPy. arXiv preprint arXiv:2011.02398 , 2020.12A SCALE and Appendices OverviewFundamentally, SCALE is a causal learning algorithm for discovering compact, diverse skillsthrough interventions in simulation. Figure 3 provides an overview of the approach.Structure of appendices. These appendices are structured as follows. Appendix B describes howSCALE connects to related work in intuitive physics. Appendix C provides greater details into theformalization of the simulator and its role as a causal reasoning engine. Appendix E formalizes theSCALE algorithm using nomenclature introduced in App. D. A discussion of higher-dimensionalcontext spaces and SCALE is then provided in App. F. Next, App. G provides a toy experiment thatis designed to convey greater intuition and visualization of the mechanisms that underlie SCALE.Appendix H presents additional experimental details of the block stacking experiment presented inSec. 6.1. Following this, Apps. 
I and J provide two additional experiments in the block stacking domain: a sim-to-real transfer experiment and a downstream task evaluation experiment, respectively. The next two appendices concern the peg-in-hole insertion domain. Appendix K provides additional experimental details for the experiment first presented in Sec. 6.2, and App. L presents an additional experiment that shows the robustness of SCALE under a task domain shift. Lastly, Appendix M contains a primer on causality for readers who are new to this area of research.

Figure 3: In SCALE, the robot discovers skills in simulation using causal learning. (a) The simulation is used to solve task instances and conduct interventions to determine causally relevant context variables. (b) Simulation data are used to train a library of skills, (c) which are suitable for sim-to-real transfer learning. (d) Each skill that is learned is parameterized by the relevant variables selected in simulation. Here, red context variables are unnecessary for the skill policy and can be safely ignored. The boundary encircling the policy represents the skill DGR and precondition, which are also learned.

B Related Work for Intuitive Physics

This appendix describes the connections between SCALE and the intuitive physics literature. Intuitive physics is the ability to approximately predict and model the physical world without explicit understanding of the underlying dynamics [49]. Literature in cognitive psychology has suggested that humans develop mental intuitive physics models to support fast prediction and understanding of complex physical scenes, which enables physical reasoning [50]. Computational learning of intuitive physics has been successful, enabling reinforcement learning and planning applications owing to such models' ability for forward prediction [51, 52, 53]. In our work, the causal reasoning engine can be viewed as an internal model that uses interventions to elicit the physical mechanisms by which the data arise.

C Simulation as a Causal Reasoning Engine

Figure 4: Illustrations of the scene structural causal model used in the simulator W. (a) From context space C and robot interventions I, the scene SCM C_S generates a context vector c that represents a particular scene that defines objects and their properties. (b) In this block example, C_S is defined using scene variables Ψ := C ∪ {z_b} and context variables C := {x_b, h_b, h_π}, where x_b is the block x-position, h_b is the block height, h_π is the table height upon which the block rests, and z_b := (1/2)h_b + h_π is the block z-position. Normally, the values of C are sampled from the context space C, but here the robot performs an intervention I = {do(h_b = 0.6)} to force the value of h_b to be 0.6. As a result, the dependent variable z_b is determined as 0.7 using this intervened value. Lastly, the scene is constructed and represented as the context vector c = [0.1, 0.6, 0.4]^T.

This appendix provides greater discussion of the simulator formalization used by SCALE. The simulator model, W := (C_S, T), is formalized as follows:

1. a scene structural causal model C_S (Fig. 4) that, given context space C and interventions I, instantiates a scene that can be represented as a context vector, c ∈ C;
2. the transition model T that captures the domain forward dynamics as the robot interacts with the world through θ, starting from the scene initialized from C_S.

A structural causal model (SCM) [25, 26] can be represented as a directed acyclic graph that is driven by exogenous variables (functional inputs of the graph) and that produces the solution for all variables within the graph.
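To make the scene SCM concrete, here is a minimal sketch that mirrors the block example of Fig. 4; the dictionary-based representation and the sampler are illustrative assumptions, not the authors' library.

```python
def scene_scm(sample_context, interventions=None):
    """Scene SCM of Fig. 4: the context variables {x_b, h_b, h_pi} are
    root nodes, and z_b := 0.5 * h_b + h_pi is a dependent scene variable."""
    c = dict(sample_context())            # sample root (context) variables
    if interventions:
        c.update(interventions)           # do(C_i = c_i) overrides sampling
    z_b = 0.5 * c["h_b"] + c["h_pi"]      # structural equation for z_b
    return [c["x_b"], c["h_b"], c["h_pi"]], z_b

# Reproducing Fig. 4(b): do(h_b = 0.6) with h_pi = 0.4 yields z_b = 0.7
# and the context vector c = [0.1, 0.6, 0.4].
c_vec, z_b = scene_scm(lambda: {"x_b": 0.1, "h_b": 0.3, "h_pi": 0.4},
                       interventions={"h_b": 0.6})
assert c_vec == [0.1, 0.6, 0.4] and abs(z_b - 0.7) < 1e-9
```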
These two components of the simulator capture the spatial structure inherent to the scene itself (C_S) and the spatiotemporal structure of the robot interacting with the world (T). The simulator model W, including the scene SCM and the transition function, is provided for the robot to use. In principle, the scene SCM could be learned via causal representation learning [29], e.g., a world-models approach that admits causal interventions.

The scene SCM C_S is defined by structural equations with scene variables Ψ, where C ⊆ Ψ. In the graph induced by C_S, the scene variables are the nodes, and the context variables C are the root nodes and exogenous variables (functional inputs) of the SCM. The value of the context variables is given by interventions I = {do(C_i = c_i)} if specified, or otherwise sampled from the context space C. The robot only conducts interventions with respect to C that would yield a steady-state solution and are physically realizable, excluding physically invalid scenes (e.g., object penetration).

The transition model T is the same as in typical simulators. The forward dynamics are simulated through the initial state s_0, obtained from the scene created by C_S, and θ, the inputs to the low-level controller π_l. With these inputs, the system temporally evolves as usual until the end of the episode, where the reward R_f is obtained and compared to a threshold R_S to determine if the task was solved.

D Nomenclature

Table 5 summarizes the nomenclature used in this paper and, in particular, the SCALE algorithm (cf. App. E). Note the use of italics and bold type to disambiguate certain symbols: for example, X is a set of random variables, but the bold symbol X refers to a dataset matrix. The notation for a variable and its instantiation as a scalar may also be overloaded depending on the context.

Table 5: Table of nomenclature.

Symbol | Meaning
X | set of d random variables, i.e., X := {X_1, ..., X_d}
X (calligraphic) | space of X, i.e., X := [X_1, ..., X_d]^T
x | vector instantiation of X, i.e., x := [x_1 ∈ X_1 ⊆ X_1, ..., x_d ∈ X_d ⊆ X_d]^T
K | set of k robot skills, i.e., K := {K_1, ..., K_k}
D | dataset containing samples X_A ∈ R^(m×n) from set A with size n and labels Y_B ∈ R^(m×p) from set B with size p

E SCALE Algorithm

As explained in Sec. 5, the SCALE algorithm (Alg. 1) describes how the skills are learned through batch dataset collection and skill training. The procedure for batch dataset collection used by SCALE (SKILLTRAINDATA) is described in Alg. 2.

Note that the number of skills is not a hyperparameter of the SCALE algorithm. Rather, the skill quantity emerges from SPLITINTOSKILLDATASETS from groups of highly occurring CREST results, where each group becomes the dataset for a particular skill; a minimal sketch of this splitting step follows.
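Below is an illustrative version of the splitting step, assuming batch datapoints are tuples (c, θ, A, D) as in Sec. 5.1; the grouping key and minimum-size filter follow the description above, while the container types are hypothetical.

```python
from collections import defaultdict

def split_into_skill_datasets(batch, min_size):
    """Group batch datapoints by their relevant-variable set A, keep
    groups with at least min_size samples, and take the union of the
    associated D sets within each group."""
    groups = defaultdict(list)
    for c, theta, A, D in batch:
        groups[frozenset(A)].append((c, theta, D))
    skill_datasets = []
    for A, items in groups.items():
        if len(items) >= min_size:
            D_union = frozenset().union(*(D for _, _, D in items))
            skill_datasets.append((A, D_union, [(c, th) for c, th, _ in items]))
    return skill_datasets
```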
F SCALE and Higher-Dimensional Context Spaces

The SCALE algorithm scales linearly with the dimensionality of the context space, i.e., O(|C|), due to the necessity of performing interventions on each context variable. In the experiments examined in this work, the dimensionality of the context space was 36 and 8 for the block stacking and peg insertion domains, respectively. For other applications where the context space is very large, heuristics can be incorporated to first downselect the context space into a smaller candidate space that can be provided to SCALE. Example heuristics could include a distance metric (objects closer to the goal may be more likely to be relevant than those further away) or other approaches such as meta-level priors [54].

G Block Stacking Intuitive Example

To provide greater intuition for SCALE and the causal skill learning problem, we present the Height-Height experiment (Fig. 5), a simple example in the block stacking domain that can be easily visualized.

Task and policy description. The Height-Height experiment contains 3 blocks: 1) a source block; 2) a target block; and 3) an obstructing block between the source and target block. As in Sec. 6.1, the task is to place the source block on top of the target block. The same controller is used as in Sec. 6.1, which is parameterized by θ ∈ R^4. Specifically, each parameter of the controller is defined as follows:

1. θ_Δx: the distance the source block is moved along the world coordinate frame's +x-axis once it is picked up.
2. θ_Δy: the distance the source block is moved along the world coordinate frame's +y-axis once it is picked up.
3. θ_Δzu: the distance the source block is lifted (moved along the world coordinate frame's +z-axis) during the pick-up motion.
4. θ_Δzd: the distance the source block descends (moved along the world coordinate frame's −z-axis) during the set-down motion.

The controller behaves as follows:

1. Move the robot end-effector to the source block and grasp it.
2. Lift up the source block according to θ_Δzu.
3. Move the source block in the x-y plane according to the policy parameters θ_Δx and θ_Δy.
4. Set down the source block according to θ_Δzd.
5. Ungrasp the source block.

The context space of this experiment is just 2 variables, h_t and h_o, facilitating 2-dimensional visualizations. For greater clarity, we refer to block properties by whether they belong to the target block (t) or the obstructing block (o), instead of their index (as in Sec. 6.1). For this experiment, only linear approaches are considered.

Skill learning results. The SCALE results for the Height-Height experiment are shown in Tab. 6 and Fig. 6. The dataset size for skill learning was 569 samples, from an original size of 581. The remaining 12 samples consisted of CREST results that occurred rarely (2.07%), and thus they were not used for skill learning. Additionally, Fig. 7 visualizes the policy parameters of the dataset. Two primary behaviors were learned: free motion (K_free) and obstructed motion (K_obstr). These behaviors emerge because of the causal relationships between the context variables.

Algorithm 1: SCALE: SKILLS FROM CAUSAL LEARNING
  Input: causal reasoning engine W, context space C, controller π_l, reward solved threshold R_S, number of samples n, skill policy function f_π, number of evaluations m, skill timestep T_f
  Initialize: skills K ← ∅
  // Collect training data
  (D_1, ..., D_k) ← SKILLTRAINDATA(W, C, π_l, n)
  // Train skills
  for j = 1 to k do
    (X_C, Y_θ, A, D) ← D_j
    // Train DGR
    X_D ← REDUCEDIMS(X_C, D)
    D ← TRAINDGR(X_D)
    // Train policy
    X_A ← REDUCEDIMS(X_C, A)
    (X_A⁺, Y_θ⁺) ← DGRINLIERS(D, X_A, X_D, Y_θ)
    π_u ← TRAINPOLICY(f_π, X_A⁺, Y_θ⁺)
    π ← π_l π_u
    // Train preconditions
    (X_C^e, Y_R^e) ← EVALUATEPOLICY(W, C, π, m)
    Pre ← TRAINPRECONDITION(X_C^e, Y_R^e, R_S)
    // Set termination conditions
    β ← T_f
    // Construct skill
    K ← K ∪ {(π, Pre, β, D)}
  end
  Result: learned skills K

Algorithm 2: SKILLTRAINDATA
  Input: causal reasoning engine W, context space C, controller π_l, reward solved threshold R_S, number of samples n, local region fraction f, minimum dataset size d
  Initialize: batch dataset D_B ← ∅
  // Collect training data
  for i = 1 to n do
    c ← SAMPLEVALIDSCENE(W, C)
    (θ, R_f) ← TRYTOSOLVETASK(W, c, π_l)
    TaskSolved ← (R_f > R_S)
    if TaskSolved then
      A ← CREST(W, c, π_l, θ, R_f, fC)   // interventions over a local fraction of C
      D ← CREST(W, c, π_l, θ, R_f, C)    // interventions over the entire space C
      D_B ← D_B ∪ {(c, θ, A, D)}
    end
  end
  // Separate into k skill datasets
  (D_1, ..., D_k) ← SPLITINTOSKILLDATASETS(D_B, d)
  Result: skill training data (D_1, ..., D_k)

Figure 5: The Height-Height experiment is an intuitive example for SCALE in the block stacking domain. In this experiment, only two context variables can vary: the height (z-dimension) of the obstructing block (h_o) and the height of the target block (h_t). All other variables (e.g., features of the source block) do not change throughout this experiment.
When the obstructing block is shorter than the target block (i.e., h_t > h_o), then the obstructing block height can safely be ignored in the robot action (thus, h_o ∉ A for K_free). This is reflected by the values of θ_Δzu and θ_Δzd in Fig. 7. In the region corresponding to K_free, θ_Δzu varies linearly with respect to the target block height, but not with the obstructing block height, and θ_Δzd is generally 0. The result is that the robot tends to lift the block to a value that depends on the target block height, and no set-down motion (θ_Δzd) is needed.

However, when the obstructing block is taller than the target block (i.e., h_t < h_o), the obstructing block's geometry interferes with the robot's motion, and the robot must take this into account when taking action. Specifically, the robot must first lift the source block over the obstructing block. After it moves laterally, the robot must descend to set the source block down; dropping the block would typically lead to inadequate reward to solve the task. Because the heights of both blocks are needed to perform this action, {h_t, h_o} ⊆ A for K_obstr. In Fig. 7, the effect of h_o appears in the θ_Δzu parameter values, where the variation in the K_obstr region arises from the need to lift above the obstructing block height h_o (and thus this parameter no longer depends on h_t). However, for θ_Δzd, both h_t and h_o are needed, as the distance the robot descends through θ_Δzd arises from the difference between h_t and h_o. Thus, the gradient here shows components for both h_t and h_o.

These two skills encode the two distinct data generating processes within this context space. These processes (the reasons why the data are generated a certain way) fundamentally depend on whether the obstructing block is shorter or taller than the target block. Determining which condition holds for a given context requires the values of both block heights, so both block heights are needed to define each skill's data generating region (i.e., {h_t, h_o} ⊆ D); see the illustrative sketch following Tab. 6.

Note that neither skill can robustly solve the entire task space (55.63% for K_free and 57.50% for K_obstr). However, when using the entire library K_HH = {K_free, K_obstr} (Tab. 7), the success rate becomes 100.00%, with each skill being selected at approximately 50% chance (49.38% for K_free and 50.62% for K_obstr). This is expected because the relationship h_t > h_o holds for half of the context space, where K_free should be used, whereas h_t < h_o (K_obstr) holds for the other half.

Table 6: Skills K_HH that were discovered for the Height-Height experiment. A and D are the variables used for the skill's policy and DGR, respectively. Data is the quantity of data used for each skill (from a batch dataset of 581 samples, 569 samples were used to train skills). These samples are used to train a linear policy (Bayesian ridge regression) using the features from the variables in A. Task Solve % is the rate of task solves over the entire context space using only that skill.

Skill | A | D | Data | Task Solve %
K_free | {h_t} | {h_t, h_o} | 253 (43.55%) | 55.63% (178)
K_obstr | {h_t, h_o} | {h_t, h_o} | 316 (54.39%) | 57.50% (184)
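The following piecewise rule is an illustrative caricature of the two regimes described above; the clearance constant and exact offsets are hypothetical, and the actual skills are learned regressors rather than hand-written rules.

```python
def illustrative_lift_and_descend(h_t, h_o, clearance=0.02):
    """Caricature of the two Height-Height regimes."""
    if h_t > h_o:
        # Free motion (K_free): only the target height matters, and
        # essentially no set-down motion is needed.
        return {"theta_zu": h_t + clearance, "theta_zd": 0.0}
    # Obstructed motion (K_obstr): lift above the obstruction, then
    # descend by roughly the height difference to reach the target.
    return {"theta_zu": h_o + clearance, "theta_zd": (h_o - h_t) + clearance}
```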
Baseline comparisons. In addition to scale-lin, Tab. 7 shows comparisons against several baselines. The "monopolicy" baselines are monolithic policies (without skills). The "-sk" and "-all" suffixes denote whether the monolithic policy uses the same data as the SCALE library ("-sk", 569 samples) or the entire batch dataset ("-all", 581 samples). Given the similar amount of data, it is unsurprising that monopolicy-lin-sk and monopolicy-lin-all are essentially the same up to the stochasticity of the simulator (±2%). Note that, unlike in Sec. 6.1 and Sec. 6.2, CREST monopolicy baselines are not examined in this experiment; they are functionally equivalent to the monopolicy approaches because the most common CREST result is {h_t, h_o}, which is the same as the entire context space used for the monopolicy baselines.

As shown in Tab. 7, the skill library obtained by SCALE vastly outperforms the baselines, providing task evaluation performance similar to that of a ground truth policy. This outcome is possible because SCALE learns the underlying regions of similar causal structure within the data, whereas monolithic policies ignore such structure. As shown in Figs. 7c–7d, this domain is nonlinear, but it can be represented by two smaller linear regions (h_t > h_o and h_t < h_o). Learning to regress to both regions with a monolithic linear policy is not possible, but SCALE can solve this domain with separate linear skills, one per region.

Summary. Our approach in SCALE, learning skills that encode distinct causal processes, empowers the robot with a diversity of specialized behaviors to use depending on the context. Generalization over the context space can then be achieved through the composition of these behaviors, rather than by attempting to learn a monolithic skill or policy that captures the entire variation. In this example, two skills, each with a linear policy, are sufficient for generalization with SCALE, whereas a monolithic approach would require a nonlinear policy.

Figure 6: SCALE results for the Height-Height experiment. Two skills were found: K_free (free block motion), stylized in blue with rectangular markers, and K_obstr (obstructed block motion), stylized in orange with diamond markers. (a) Learned data generating regions. Each datapoint is a result from CREST. Datapoints that are crossed out are considered outliers and not used for training the policy for that skill. (b–c) Preconditions for K_free and K_obstr, respectively. The black line is the decision boundary for the prediction of whether the task would or would not be solved with that skill. Note that each skill's DGR generally falls within the positive precondition boundary. Training and test data for learning the preconditions are indicated by circle and thin diamond markers, respectively. Datapoints that result in a different prediction than observed are crossed out. (d) Task evaluation when using the skill library {K_free, K_obstr} to solve the task. The marker and color of each datapoint indicate which skill was selected for completing the task based on the skill preconditions (i.e., the skill with the highest probability of success). Note that the separation between selecting K_free and K_obstr is consistent with each skill's underlying precondition and DGR. Datapoints that were not solved by the chosen skill are crossed out.

H Additional Details for Block Stacking Experiment

This appendix provides greater information for the block stacking experiment first presented in Sec. 6.1.

Context. Note that the block vertical position z_wb ∈ Ψ is not part of the context, as we only consider cases where the scene can be initialized into a steady-state condition. Thus, z_wb := (1/2)h_b + h_π.

Reward function.
The reward function for the task is R = R_B − α_L L − α_e e − α_d d, where R_B = 10 is a bonus term obtained when the block is successfully stacked, L is the total end-effector path length of the robot (α_L = 1), e is the L2-norm error between the source block at the time of release and the goal (α_e = 1), and d is the distance the source block travels between the point where it was ungrasped and its final position (α_d = 1). The task is considered solved if the final reward R_f exceeds the solved threshold R_S = 5.

Figure 7: Policy parameters for the Height-Height experiment (shown as interpolated across the 569 dataset samples to better visualize the gradients). The units of the parameters are in meters. The parameters θ_Δx (a) and θ_Δy (b) are generally constant, as they are unaffected by the variation in the context variables. The notable variations occur in θ_Δzu (c) and θ_Δzd (d). Specifically, the relationship changes depending on whether the obstructing block is taller or shorter than the target block (above or below the h_t − h_o = 0 line, respectively).

Table 7: Task evaluation results for using the skill library K_HH for the block stacking task. Ctrl. is the approach control (skills or one monolithic policy). Fn. Cl. is the approach's function class. Linear approaches use Bayesian ridge regression. Task Solve % is the rate of task solves over the entire context space using the approach. Methods within ±2% (the stochasticity of the simulator) of the best approach are bold. |A| is the quantity of input variables used for the approach's policy. Data is the amount of training data used for the approach. A ground truth policy is also shown, using all context variables and additional domain knowledge.

Approach | Ctrl. | Fn. Cl. | Task Solve % | |A| | Data
scale-lin (ours) | 2 skills | Linear | 100.00% (320) | 1/2 | 569
monopolicy-lin-sk | 1 policy | Linear | 64.06% (205) | 2 | 569
monopolicy-lin-all | 1 policy | Linear | 62.19% (199) | 2 | 581
ground-truth-policy | 1 policy | Nonlin. | 100.00% (320) | * | –

SCALE skill selection. In all SCALE approaches, the skills were complementary; using the entire skill library afforded greater coverage (a greater task solve rate) than any single skill alone. For scale-lin, the skill selection distribution was almost even between K1 (43.28%) and K2 (56.72%), with K3 never being chosen. The skill K3 is dominated by the other two skills for this task, but K3 could nonetheless be useful for a different downstream task. Empirically, it was observed that K1 was chosen for shorter target block heights, whereas K2 was used elsewhere (see Fig. 8). In the nonlinear case, only K2 was selected.

Figure 8: Skill selection for the scale-lin approach for the block stacking task. Skill K1 is generally selected when h_2 is short, whereas taller h_2 values perform better with K2 because h_2 ∈ A_K2. Skill K3 is dominated by the other two skills and is not selected. Datapoints that were not solved are crossed out.

Policy and training data ablations. We provide additional experiments to investigate the effect of different policy functions and training data usage. The results are shown in Tab. 8, which expands Tab. 2. For the linear function class, we conduct experiments with Bayesian ridge regression (B. ridge reg.) and ordinary least squares linear regression (OLS lin. reg.). Both linear policy functions used an intercept term and were trained using unnormalized data. For the nonlinear function class, we conduct experiments with a multilayer perceptron (MLP, 16x16x16 architecture using ReLU activations) and support vector regression with a radial basis function (RBF) kernel (SVR (RBF)).
The nonlinear policy functions were trained with normalized data. Additionally, we presentablations in terms of training data usage. Methods ending in “-all” use the entire batch dataset. Forthe full-dimensional monopolicy approaches, the “-sk” ablation uses same training data as used bythe SCALE skills (340 samples). For the CREST baselines, the “-subs” ablation randomly downse-lects the batch dataset to the same number of samples used by SCALE (340 samples).In general, we see that SCALE generally outperforms the full-dimensional monopolicy methods andmatches the performance of the CREST baselines in most (but not all) cases. We see that increasingthe amount of training data available for the baselines usually improves performance. For the linearfunction class, both Bayesian ridge regression and ordinary least squares linear regression producedcapable approaches. For ordinary least squares linear regression, SCALE (scale-lin-ols) outper-forms the full-dimensional monopolicy on a sample-adjusted basis. For the nonlinear function class,the performance of approaches was lower overall. The similarity in performance of scale-nonlinto the full-data CREST baseline is strictly due to sample size; on a sample-adjusted basis, scale-nonlin is slightly more performant. However, for support vector regression with a RBF kernel,although SCALE (scale-nonlin-svr-rbf) exceeds the performance of the full-dimensional monopol-icy approaches, the CREST approaches perform more strongly (although modestly overall). Thus,we see some sensitivity for the nonlinear function class to the selection of policy function used forthis task.21Table 8: Task evaluation results for using the skill library Kblocks for the block stacking task for avariety of policy functions and training data ablations. This table expands upon Tab. 2. Ctrl. is theapproach control (skills or one monolithic policy). Fn. Cl. is the approach’s function class. P. Fn. isthe policy function. Task Solve % is the rate of task solves over the entire context space using theapproach. Methods within ±2%(the stochasticity of the simulator) of the best approach are bold.|A|is the quantity of input variables used for the approach’s policy. Data is the amount of trainingdata used for the approach. A ground truth policy is also shown, using all context variables andadditional domain knowledge. The abbreviation “mp” stands for monopolicy.Approach Ctrl. Fn. Cl. P. Fn. Task Solve % |A| Datascale-lin (ours) 3 skills Linear B. ridge reg. 90.49% (276) 4/5/6 340monopolicy-lin-sk 1 policy Linear B. ridge reg. 80.72% (247) 36 340monopolicy-lin-all 1 policy Linear B. ridge reg. 85.95% (263) 36 585crest-monopolicy-lin-subs 1 policy Linear B. ridge reg. 89.87% (275) 5 340crest-monopolicy-lin-all 1 policy Linear B. ridge reg. 89.87% (275) 5 585scale-lin-ols (ours) 3 skills Linear OLS lin. reg. 90.85% (278) 4/5/6 340monopolicy-lin-ols-sk 1 policy Linear OLS lin. reg. 83.33% (255) 36 340monopolicy-lin-ols-all 1 policy Linear OLS lin. reg. 90.16% (275) 36 585crest-monopolicy-lin-ols-subs 1 policy Linear OLS lin. reg. 90.52% (277) 5 340crest-monopolicy-lin-ols-all 1 policy Linear OLS lin. reg. 90.20% (276) 5 585scale-nonlin (ours) 3 skills Nonlin. MLP 63.40% (194) 4/5/6 340monopolicy-nonlin-sk 1 policy Nonlin. MLP 1.31% (4) 36 340monopolicy-nonlin-all 1 policy Nonlin. MLP 10.13% (31) 36 585crest-monopolicy-nonlin-subs 1 policy Nonlin. MLP 58.17% (178) 5 340crest-monopolicy-nonlin-all 1 policy Nonlin. MLP 60.78% (186) 5 585scale-nonlin-svr-rbf (ours) 3 skills Nonlin. 
SVR (RBF) 19.61% (60) 4/5/6 340monopolicy-nonlin-svr-rbf-sk 1 policy Nonlin. SVR (RBF) 1.63% (5) 36 340monopolicy-nonlin-svr-rbf-all 1 policy Nonlin. SVR (RBF) 7.19% (22) 36 585crest-mp-nonlin-svr-rbf-subs 1 policy Nonlin. SVR (RBF) 41.64% (127) 5 340crest-mp-nonlin-svr-rbf-all 1 policy Nonlin. SVR (RBF) 56.86% (174) 5 585ground-truth-policy 1 policy Nonlin. – 95.75% (293) * –I Sim-to-Real Block Stacking ExperimentIn this appendix, we demonstrate that the skills learned by SCALE are suitable for sim-to-real trans-fer. As skills are constructed using only the relevant causal variables, this is a form of structuralsim-to-real transfer. For this experiment, we evaluate the skill library Kblocks for a real block stack-ing domain with a Franka Emika Panda robot manipulator (Fig. 2c). This experiment is generallysimilar to task evaluation in simulation, except with a smaller subset of the context space. We assessthe SCALE approaches, scale-lin and scale-nonlin, against their monopolicy counterparts. We onlyconsider the “-all” monopolicy approaches, as they were generally better performing.I.1 Experimental SetupFor this experiment, a smaller subset of the context space is varied, as compared to the variationacross the entire context space as tested in Tab. 2. From a pool of 20 blocks, 5 were randomlychosen to be used for each experimental trial. The 20 blocks consisted of variations of 10 differentcolors and 2 different heights (5.7 cm or 7.6 cm). The length and width of the blocks were 4.2cm. The 5 randomly chosen blocks were placed into the Panda robot workspace and randomlyshuffled, producing variation in block x-position, y-position, and orientation. The table height hπwas determined from manual measurement and was not varied for this experiment.Perception. An Intel RealSense camera mounted to the robot wrist provided RGB-D perception ofthex-position, y-position, and orientation of the blocks in the workspace. A depth observation wascollected by commanding the robot above the workspace. This point cloud was then processed toyield five clusters via hidden point removal [55], RANSAC-based table plane fitting, and density-based clustering using DBSCAN [56]. Averaging the colors within each cluster yielded the blockcolor. A least-squares optimization procedure fit a cuboid of known length and width to each cluster,yielding the position and orientation of the blocks. Block height was provided by manual input22because of inaccuracies with estimation from depth alone. The camera extrinsics were obtainedvia computer-aided design models of the Panda robot and wrist mount, which were confirmed viamanual measurement. The camera intrinsics were used as directly reported by the camera.Control. TheFrankaPy library [57] is used to provide impedance-based control of the Panda robot.I.2 Experimental ResultsTable 9 presents the results. For each function class, the skill library learned by SCALE outperformsthe full-dimensional monopolicy baseline and is generally comparable to or slightly outcompetes theCREST monopolicy baseline. The ground truth policy matched the linear SCALE approach and isonly slightly better than the nonlinear SCALE approach. Compared to the task solve rate in simula-tion (Tab. 2), scale-lin performed consistently, and scale-nonlin had slightly better performance. Allbaseline approaches generally matched their evaluation in simulation, except for monopolicy-lin-all,which had a marked degradation. This may arise from domain differences between simulation andreality. 
Full-dimensional approaches are more susceptible to domain shifts due to their reliance onthe entire context space (all 36 variables), whereas SCALE approaches are compressed, using onlya minimal subset. Error was only loosely correlated with task solve rate, and likely explains the poorperformance of monopolicy-nonlin-all. Even though their errors were similar, it was observed thatmonopolicy-lin-all tended to underpredict the height needed to clear the target block as comparedto scale-lin. This caused the target block to be pushed away from where it should have been for thegoal position, leading to block stacking failures.For both scale-lin and scale-nonlin, skill K2was always chosen, as its precondition was on averagegreater than that of the other skills. Specifically, for scale-lin, the average preconditions were 58.88%forK1, 75.77% for K2, and 36.99% for K3. As the block heights used were only 5.7 cm and 7.6cm, it is reasonable to expect that skill K1would have been chosen more for shorter target blockheights (per Fig. 8). For scale-nonlin, the average preconditions were K1: 20.17%, K2: 51.84%,K3: 1.21%.Table 9: Sim-to-real evaluation results for using the skill library Kblocks for a real block stackingdomain. Table columns are as described in Tab. 2. Task Solve % is the rate of successful blockstacks. Error is the mean error ( ±1standard deviation) in meters between the block position whenthe block is ungrasped and the goal position determined at the beginning of the trial.Approach Ctrl. Fn. Cl. Task Solve % Error |A|scale-lin (ours) 3 skills Linear 90.00% (9) 0.010 ±0.003 4/5/6monopolicy-lin-all 1 policy Linear 50.00% (5) 0.008 ±0.003 36crest-monopolicy-lin-all 1 policy Linear 90.00% (9) 0.004 ±0.001 5scale-nonlin (ours) 3 skills Nonlinear 80.00% (8) 0.007 ±0.002 4/5/6monopolicy-nonlin-all 1 policy Nonlinear 10.00% (1) 0.093 ±0.040 36crest-monopolicy-nonlin-all 1 policy Nonlinear 70.00% (7) 0.013 ±0.012 5ground-truth-policy 1 policy Nonlinear 90.00% (9) 0.002 ±0.003 *J Skill Library Use in a Downstream Task: Stacking a Block TowerTo demonstrate the utility of re-using skills learned by SCALE, a follow-up experiment is conductedwherein the skill library Kblocks is used for a task in which it was not specifically trained: stacking ablock tower (Fig. 9). This long-horizon task can be decomposed into a number of sequential actionsthat must be performed correctly, so an approach that can capture the essence of a large problemand re-use smaller, modular components should perform best. Moreover, we do not perform anyadditional training or fine-tuning; we intentionally use the skills off-training data to test their gener-alization capability. This is a challenging task: in addition to the long-horizon precision involved,the skills are being evaluated increasingly out-of-distribution at each step, as the effective blockheights increase beyond what is seen in training.23(a) (b) (c) (d) (e)Figure 9: The block tower task. As previously, five blocks are initially available to the robot.However, after each stack attempt, the task does not reset. Instead, the block enumeration changes,so that the previous source block becomes the new target block. This happens four times, after whichthe task resets. The robot must complete each of the four individual steps successfully, as failure inany step renders the entire block tower task a failure. (a) Initial task scene. (b – d) Successful blockstacks for intermediate attempts. (e) A successfully stacked block tower.Experimental setup. 
For this experiment, we assume that the robot has access to a planner and ad-ditional domain knowledge as a part of this downstream task. We assume that the robot understandsthat at any step, the target block should be adjusted in the following manner. First, the target block’sx- and y-position should be substituted with the bottom-most block’s x- and y-position. Then, thetarget block’s height should be substituted with the sum of all heights of the previous blocks, plus asmall offset (1.5 cm). Effectively, this can be seen as treating each new step as stacking upon one,increasingly taller block. We leave the development of such a planner that can provide this additionalinformation for future work, but it suffices for this experiment that this information is available.Block tower results. Table 10 shows the results for stacking the block tower. For this experiment,we use the same linear and nonlinear approaches and baselines from Sec. 6.1, including the trainingdata ablations. Included is a ground truth policy with access to oracle information.Overall, we see that the scale-lin approach does best for stacking a tower with five blocks, althougha notable gap exists between the ground truth policy. However, a block tower success rate of 48.29%is not unreasonable, given that even the ground truth policy fails almost 30% of the time. The linearapproaches are all comparable for the first stacking step, and for the second step with a NB= 3tall tower, three baseline methods slightly outperform scale-lin. However, for the last two steps,baseline approaches become markedly less performant, leading to scale-lin emerging as the bestoverall approach despite modest performance in an absolute sense. Each step requires successivelygreater extrapolation out of the training data, so an approach that can capture the smaller processwell should perform best, assuming that this process also holds outside the training data. For thecase of the block tower, this is generally true, so the skills learned by scale-lin are best suited for thisdownstream task despite the challenge of generalization to yet-unseen data.For the nonlinear function class, performance across all approaches suffers beyond the first stackingstep, where the CREST baselines outperform scale-nonlin. The challenge of extrapolation for non-linear functions is evident here; the best linear approach for each step was better performing thanany nonlinear approach (and markedly so for taller towers). Thus, out-of-distribution generalizationis not observed for any nonlinear approach, whereas scale-lin exhibits modest performance in thisarea.For SCALE approaches, the skill selection rate is intriguing. The skill K1does not contain the targetblock height, which is likely why it was only selected during the first block stack attempts. However,K2continues to demonstrate its robustness, as it was used for all remaining block stack attempts inthe linear case and for all attempts in the nonlinear case. Its inclusion of target block height in AK2is in fact the reason this skill can extrapolate to taller towers. Like K2,K3also contains the blockheight, but this skill was generally dominated, and thus it is not surprising it was not selected.24In summary, in addition to the benefits of SCALE described previously for task learning, the capa-bility for SCALE to learn smaller, modular skills is evident in this experiment. 
Although out-of-distribution generalization was not observed in the nonlinear function class, we see that in principleSCALE does offer these benefits under certain conditions, such as in the linear case. We suggestthat this aspect of causal learning is often overlooked for experiments that only concern single-task learning. However, the benefits of modularity become advantageous for re-using behaviors fordownstream tasks at a later time in the robot’s operational lifetime.Table 10: Results for re-using learned behaviors in a representative downstream task: stacking ablock tower. The task solve percentage is shown for stacking a tower of at least NBblocks tall. Thesequence is executed in one attempt, so a fully stacked tower ( NB= 5) requires 4successful blockstacking attempts. Methods within ±2%(the stochasticity of the simulator) of the best approach ateach step are bold. For SCALE approaches, the skill selection rate at each step (not cumulative) isalso shown. The abbreviation “mp” stands for monopolicy.Approach NB= 2 NB= 3 NB= 4 NB= 5scale-lin (ours) 92.20% (272) 80.73% (222) 65.23% (167) 48.29% (113)K1K2K315.59% (46)84.07% (248)0.34% (1)0.00% (0)100.00% (275)0.00% (0)0.00% (0)100.00% (256)0.00% (0)0.00% (0)100.00% (234)0.00% (0)monopolicy-lin-sk 93.22% (275) 87.23% (239) 55.08% (141) 1.27% (3)monopolicy-lin-all 93.56% (276) 76.36% (210) 2.33% (6) 0.00% (0)crest-mp-lin-subs 93.20% (274) 85.40% (234) 5.84% (15) 0.00% (0)crest-mp-lin-all 93.92% (278) 85.51% (236) 5.84% (15) 0.00% (0)scale-nonlin (ours) 67.46% (199) 2.55% (7) 0.00% (0) 0.00% (0)K1K2K30.00% (0)100.00% (295)0.00% (0)0.00% (0)100.00% (275)0.00% (0)0.00% (0)100.00% (256)0.00% (0)0.00% (0)100.00% (235)0.00% (0)monopolicy-nonlin-sk 2.72% (8) 0.00% (0) 0.00% (0) 0.00% (0)monopolicy-nonlin-all 11.86% (35) 0.00% (0) 0.00% (0) 0.00% (0)crest-mp-nonlin-subs 84.75% (250) 27.37% (75) 0.78% (2) 0.00% (0)crest-mp-nonlin-all 75.59% (223) 11.31% (31) 0.00% (0) 0.00% (0)ground-truth-policy 96.25% (282) 90.48% (247) 83.14% (212) 69.96% (163)K Additional Details for Sensorless Peg-in-Hole Insertion ExperimentThis appendix serves to provide greater detail for the peg insertion experiment that was described inSec. 6.2.Reward function. Our reward function consists of two terms: 1) a penalty based on the Euclideandistance of the peg from the hole, and 2) a bonus of 10 for successful insertion. We also add aregularization term based on the norm of the policy parameters. The task is considered solved if thefinal reward Rfexceeds solved threshold RS= 8.SCALE skill K1.Unlike the other skills in Kpegthat were discovered by SCALE, skill K1hasan empty set of relevant variables. This is surprising as it is difficult to solve this task reliablywithout taking the help of one of the walls, in which case the wall should show up as a relevantvariable. However, we observed that K1actually localizes against 2 walls instead of just 1. Hence,when SCALE intervenes on any one of the two walls, the skill is still able to complete the assemblyby taking advantage of the other wall. In other words, our assumption that the context space isdisentangled does not hold in this case which leads to this erroneous relevant variable set. However,the precondition would limit where this skill would be applied, as skills K2−5are generally moreperformant.SCALE skill selection. For scale-lin, skills K2(48.44%) and K5(51.56%) were chosen nearlyequally. Conversely, the skill selection was more distributed for the nonlinear case: K2: 46.48%,K3: 35.16%, K4: 3.91%, K5: 14.45%. 
Table 11: Task evaluation results for using the skill library K_peg for peg insertion for a variety of policy functions and training data ablations. This table expands upon Tab. 4. Ctrl. is the approach's control (skills or one monolithic policy). Fn. Cl. is the approach's function class. P. Fn. is the policy function. Task Solve % is the rate of task solves over the entire context space using the approach. Methods within ±2% (the stochasticity of the simulator) of the best approach are bold. |A| is the number of input variables used by the approach's policy. Data is the amount of training data used by the approach. The abbreviation "mp" stands for monopolicy.

Approach                        Ctrl.     Fn. Cl.  P. Fn.         Task Solve %   |A|        Data
scale-lin (ours)                5 skills  Linear   B. ridge reg.  96.48% (247)   0/1/1/1/1  168
monopolicy-lin-sk               1 policy  Linear   B. ridge reg.  67.19% (172)   8          168
monopolicy-lin-all              1 policy  Linear   B. ridge reg.  62.50% (160)   8          210
crest-monopolicy-lin-subs       1 policy  Linear   B. ridge reg.  66.80% (171)   1          168
crest-monopolicy-lin-all        1 policy  Linear   B. ridge reg.  62.89% (161)   1          210
scale-lin-ols (ours)            5 skills  Linear   OLS lin. reg.  96.88% (248)   0/1/1/1/1  168
monopolicy-lin-ols-sk           1 policy  Linear   OLS lin. reg.  50.78% (130)   8          168
monopolicy-lin-ols-all          1 policy  Linear   OLS lin. reg.  67.19% (172)   8          210
crest-monopolicy-lin-ols-subs   1 policy  Linear   OLS lin. reg.  63.67% (163)   1          168
crest-monopolicy-lin-ols-all    1 policy  Linear   OLS lin. reg.  60.55% (155)   1          210
scale-nonlin (ours)             5 skills  Nonlin.  MLP            88.67% (227)   0/1/1/1/1  168
monopolicy-nonlin-sk            1 policy  Nonlin.  MLP            18.36% (47)    8          168
monopolicy-nonlin-all           1 policy  Nonlin.  MLP            12.89% (33)    8          210
crest-monopolicy-nonlin-subs    1 policy  Nonlin.  MLP            56.64% (145)   1          168
crest-monopolicy-nonlin-all     1 policy  Nonlin.  MLP            55.47% (142)   1          210
scale-nonlin-svr-rbf (ours)     5 skills  Nonlin.  SVR (RBF)      94.53% (242)   0/1/1/1/1  168
monopolicy-nonlin-svr-rbf-sk    1 policy  Nonlin.  SVR (RBF)      53.52% (137)   8          168
monopolicy-nonlin-svr-rbf-all   1 policy  Nonlin.  SVR (RBF)      58.20% (149)   8          210
crest-mp-nonlin-svr-rbf-subs    1 policy  Nonlin.  SVR (RBF)      57.81% (148)   1          168
crest-mp-nonlin-svr-rbf-all     1 policy  Nonlin.  SVR (RBF)      60.94% (156)   1          210

Policy and training data ablations. As with the block stacking domain, we conducted experiments with several policy functions and training data ablations. Table 11 details the experimental results, which expand upon Tab. 4. In the linear function class, two policy functions were investigated: Bayesian ridge regression (B. ridge reg.) and ordinary least squares linear regression (OLS lin. reg.). An intercept term was used for both approaches, and the training data were unnormalized. In the nonlinear function class, experiments were conducted with a multilayer perceptron (MLP, 16x16x16 architecture using ReLU activations) and support vector regression with a radial basis function (RBF) kernel (SVR (RBF)). For the nonlinear policy functions, the training data were normalized. Methods with the "-all" suffix use the entire batch dataset. For the full-dimensional monopolicy approaches, the "-sk" suffix indicates that the same training data as SCALE was used (168 samples). The "-subs" suffix for the CREST baselines denotes that the batch dataset was randomly downselected to the same number of samples used by SCALE (168 samples).
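The four policy-function classes in this ablation map naturally onto standard regressors. Below is a minimal sketch, assuming scikit-learn and a separate regressor per output dimension; only the architecture, intercept, and normalization choices come from the text, and everything else (factory name, pipeline structure) is illustrative.

```python
from sklearn.linear_model import BayesianRidge, LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def make_policy(kind):
    if kind == "b_ridge":   # linear, unnormalized inputs, with intercept
        return BayesianRidge(fit_intercept=True)
    if kind == "ols":       # linear, unnormalized inputs, with intercept
        return LinearRegression(fit_intercept=True)
    if kind == "mlp":       # nonlinear, normalized inputs, 16x16x16 ReLU
        return make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(16, 16, 16),
                                          activation="relu"))
    if kind == "svr_rbf":   # nonlinear, normalized inputs, RBF kernel
        return make_pipeline(StandardScaler(), SVR(kernel="rbf"))
    raise ValueError(kind)
```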
Overall, we observe that the SCALE skills are highly performant across function class and policy function type. Moreover, SCALE significantly outperforms both the full-dimensional monopolicy approaches and the CREST baselines. Indeed, SCALE exceeds the performance of the baselines by around 30% for each policy function type. The success of SCALE is attributed to capturing the four modes in the data (localizing against each of the four walls), found by exploiting the underlying causal structure. The baselines, which are agnostic to such structure, do not leverage this property and are therefore limited. Unlike in the block stacking domain, we see that increased training data size does not necessarily yield an increase in performance for the baseline approaches.

L Sensorless Peg-in-Hole Insertion: Domain Shift Experiment

We evaluate the generalization capability of SCALE by evaluating it under a domain shift. All tasks are generated by uniformly sampling the relative position of the center of each wall with respect to the hole from a given range. The ranges used to generate the training and test tasks are specified in Tab. 12. We transfer all the policies zero-shot to the test distribution. However, we do re-learn the preconditions of the scale-lin policies for the test distribution.

The evaluation results are summarized in Tab. 13. All approaches witness a sharp drop in performance. This is expected, as (a) the test tasks are not guaranteed to be feasible, and (b) the ranges used to generate the test tasks are more than double those used in training. However, our multi-skill approach scale-lin performs much better than the baselines. This highlights a key benefit of learning multiple skills: a skill may perform well on the training distribution, but it can be rendered invalid by an unforeseen domain shift. Having a repertoire of different skills allows the robot to still complete the task by switching to a different skill, making our multi-skill approach more robust than single-skill approaches.

Table 12: Training and test distributions of the domain shift experiment in the sensorless peg-in-hole domain. The relative position of the center of each of the 4 walls is uniformly sampled from the given (min, max) range. The ranges used to generate test tasks are more than double the ranges used to generate training tasks. All values are in meters.

          Train                              Test
          x-min   x-max   y-min   y-max     x-min   x-max   y-min   y-max
Wall 1    0.01    0.05    -0.02   0.02      -0.04   0.10    -0.07   0.07
Wall 2    -0.02   0.02    -0.05   -0.01     -0.07   0.07    -0.10   0.07
Wall 3    -0.02   0.02    0.01    0.05      -0.07   0.07    -0.04   0.10
Wall 4    -0.05   -0.01   -0.02   0.02      -0.10   -0.04   -0.07   0.07

Table 13: Task evaluation results under domain shift for sensorless peg-in-hole insertion. We evaluate only linear policies, as nonlinear policies perform worse in this domain. Table columns are as described in Tab. 4.

Approach                  Ctrl.     Fn. Cl.  Task Solve %  |A|
scale-lin (ours)          5 skills  Linear   64.84%        0/1/1/1/1
monopolicy-lin-all        1 policy  Linear   44.92%        8
crest-monopolicy-lin-all  1 policy  Linear   39.83%        1

M A Primer on Causality

For readers who are unfamiliar with causality, this appendix serves as a gentle "on-ramp" for understanding SCALE.

What's a data generating process? A data generating process (DGP) is a dynamical process that generates data in a physical system. The process is usually described by variables that characterize the system. Consider the following examples: turning a light switch on a lamp to illuminate the lightbulb; inserting a car key into an ignition and turning the starter to start a vehicle; rain showers causing rainfall.
These examples can be considered data generating processes if system variables were instrumented, such as instrumenting a rain gauge to measure rainfall.

What's a Structural Causal Model? A Structural Causal Model (SCM) [25, 26] is a representation of a data generating process. Usually, the SCM consists of the variables of a system, a graph (usually directed with no cycles) that describes how the variables depend on each other, and functions that describe how each variable is determined by its causes. These functions are also called structural equations or functional equations, and each function has its own noise variable. Noise variables (also referred to as exogenous variables) are generally jointly independent.

What's an example of an SCM, and how can it be used? Consider the following example of SCM C_1:

• X := N_X
• Y := 2X + N_Y

Here, X and Y are variables of our SCM, and N_X and N_Y are the noise terms. This SCM can also be characterized by its underlying graph, where X → Y because X is a cause of Y. For this example, consider that N_X and N_Y are (independently) sampled from the uniform distribution from -10 to +10. Then, if N_X = 2 and N_Y = -3, then by the mechanics of the SCM, X = 2 and therefore Y = 1.

We now introduce the concept of an intervention, where we set the value of a variable to a particular value (usually regardless of its causes or noise variables), holding all other variables equal. We can formalize this using the do operator [25]. Thus, an intervention do(Y = 5) means that no matter what values N_X, X, or N_Y take, Y = 5. In the previous example, under this intervention, if N_X = 2, then X = 2, but Y = 5 (and not 1). This type of intervention is called "hard" since it induces a structural change; other intervention types are possible, such as "soft" interventions where the functional equation of a variable changes (but not its parents).

What's the difference between a DGP and an SCM? In the case where the SCM captures the DGP exactly, there is no difference. However, oftentimes we wish to learn the data generating process, and the SCM encodes the knowledge of the DGP that is currently known. In these cases, the SCM is an approximation of the underlying DGP in the physical world.

In SCALE, what's the Data Generating Region, and how does it differ from a DGP? The Data Generating Region (DGR) introduced by SCALE provides locality to the data generating process. Consider a physical system where SCM C_1 co-exists with the following new SCM, C_2:

• X := 3N_X
• Z := -X + N_Z

However, it is also noticed that according to a fourth variable A, when A < 0, C_1 applies, whereas when A > 0, C_2 applies. The condition under which these causal models apply is equivalent to how the DGR specifies where particular skills are defined in the context space. Note that X, Y, and Z are not needed to define where the models apply (only A). A learning algorithm could use all four variables to specify where the models apply, but a minimal, compressed representation only requires one (A). Moreover, Z and A are not needed to specify the mechanics of C_1 (similarly, Y and A for C_2). This is similar to how SCALE learns which variables of the context space to use for modeling the skill policy. Even though a learner could potentially use all four variables it knows about, irrelevant variables are not needed in a minimal representation.
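To make the running example concrete, here is a minimal simulation sketch of SCM C_1 and a hard intervention, assuming the uniform noise on [-10, 10] described above; the function name and sampling setup are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm_c1(do_y=None):
    """Sample once from SCM C1 (X := N_X, Y := 2X + N_Y).

    If `do_y` is given, the hard intervention do(Y = do_y) is applied:
    Y is set directly, ignoring its structural equation and noise.
    """
    n_x = rng.uniform(-10, 10)   # exogenous noise N_X
    n_y = rng.uniform(-10, 10)   # exogenous noise N_Y
    x = n_x                      # X := N_X
    y = 2 * x + n_y if do_y is None else do_y  # Y := 2X + N_Y, unless intervened
    return x, y

# With N_X = 2 and N_Y = -3, the SCM gives X = 2 and Y = 1;
# under do(Y = 5), X is unchanged but Y = 5 regardless of the noise.
```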
I'm interested in learning more about causality. Where should I start? There are many important and useful textbooks in this area. We use Pearlian causality and SCMs as the basis of our formalism, so we recommend that the reader review Causality (Pearl 2009, 2nd edition) [25], in particular chapters 1–3. Then, we recommend Elements of Causal Inference (Peters, Janzing, and Schölkopf, 2017) [26], in particular chapters 1, 3, and 6. |
DYPOvNot5F | Diff-LfD: Contact-aware Model-based Learning from Visual Demonstration for Robotic Manipulation via Differentiable Physics-based Simulation and Rendering

Xinghao Zhu1, Jinghan Ke2, Zhixuan Xu3, Zhixin Sun4, Bizhe Bai5, Jun Lv6, Qingtao Liu3, Yuwei Zeng7, Qi Ye3, Cewu Lu6, Masayoshi Tomizuka1, Lin Shao7
1UC Berkeley, 2USTC, 3Zhejiang University, 4Nanjing University, 5University of Queensland, 6Shanghai Jiaotong University, 7National University of Singapore
Correspondence to zhuxh@berkeley.edu and linshao@nus.edu.sg
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: Learning from Demonstration (LfD) is an efficient technique for robots to acquire new skills through expert observation, significantly mitigating the need for laborious manual reward function design. This paper introduces a novel framework for model-based LfD in the context of robotic manipulation. Our proposed pipeline is underpinned by two primary components: self-supervised pose and shape estimation and contact sequence generation. The former utilizes differentiable rendering to estimate object poses and shapes from demonstration videos, while the latter iteratively optimizes contact points and forces using differentiable simulation, consequently effectuating object transformations. Empirical evidence demonstrates the efficacy of our LfD pipeline in acquiring manipulation actions from human demonstrations. Complementary to this, ablation studies focusing on object tracking and contact sequence inference underscore the robustness and efficiency of our approach in generating long-horizon manipulation actions, even amidst environmental noise. Validation of our results extends to real-world deployment of the proposed pipeline. Supplementary materials and videos are available on our webpage: https://sites.google.com/view/diff-lfd.

Keywords: Learning from Visual Demonstration, Model-based Robotic Manipulation, Differentiable Physics-based Simulation and Rendering

1 Introduction

Learning from Demonstration (LfD) empowers robots to acquire policies from expert demonstrations, such as those available on YouTube [1], which can reduce the human effort involved in robotic skill learning [2, 3]. This paper delves into the development of a model-based LfD pipeline that employs raw RGB videos as inputs. While model-based learning approaches have been acknowledged for their potential for superior sample-efficiency and generalization compared to model-free approaches [4–6], model-based LfD remains under-explored. Several major challenges hinder the wide application of model-based LfD in the physical world.

One challenge is how to automatically and efficiently develop a model that scales to high-dimensional input such as raw images or videos [7]. To tackle this, we introduce a self-supervised modeling pipeline that leverages recent advancements in differentiable rendering and signed distance functions. This pipeline estimates both the geometric shape of the object and its associated 6D poses, forming an explicit representation. A second challenge lies in enabling robots to effectively utilize physical models to generate efficient policies. This is particularly critical for robots operating in real-world contact-rich manipulation tasks, where the physical interaction between the robot and its environment is a key factor [8, 9]. To address this, we develop a hierarchical LfD framework that integrates low-level modules for contact-point localization and contact-force optimization with a high-level module for contact sequence planning.
These modules work in concert to plan manipulative actions. To ensure robust and real-time deployment, we further incorporate a neural policy designed to imitate the outcomes of the planning algorithms. This enables the robot to execute complex tasks with high reliability and efficiency.

We have evaluated our pipeline on two datasets, including the sth-sth dataset [10], containing basic manipulation actions on various objects, and a small recorded video dataset showing a human performing dexterous in-hand manipulation with primitive objects. The results, derived from rigorous simulation and real-world experiments, bear testament to the effectiveness of our proposed pipeline. Our key contributions can be summarized as follows: 1) we introduce a novel framework for model-based learning from visual demonstrations; 2) we provide a self-supervised approach for pose estimation and shape reconstruction utilizing differentiable rendering; 3) we develop a hierarchical policy that combines low-level contact-point localization and contact-force optimization based on differentiable simulation with high-level contact sequence planning, plus neural imitation learning for efficient and robust real-world execution; 4) we conduct comprehensive experimental validation of our algorithms in both simulated and real-world environments to demonstrate their efficacy and robustness.

2 Diff-LfD Framework

Overview. Given a demonstrated RGB video consisting of N frames denoted as V = {I_t}_{t=1}^N, we preprocess the video to segment and identify the most relevant objects with masks {M_t}_{t=1}^N, exploiting SAM [11]. The local frame of the object is randomly defined at the first frame. Our Diff-LfD calculates the object's relative pose transformation in the demonstration and jointly estimates the object's mesh O and the associated 6D poses {P_t}_{t=1}^N at each frame. If the robot is provided with a similar but different object from the one recorded in the video, we align the pose of the provided manipulated object with the reconstructed object. Next, our pipeline infers the wrench (a combination of external forces and torques) required to complete the pose transformation across two consecutive time steps and generates feasible robot actions to accomplish the pose transformation. This planning includes both low-level contact-point localization and contact-force optimization and high-level contact sequence planning to chain the whole (long-horizon) manipulation sequence. The manipulation actions are then utilized to train a neural network for robust real-world execution and generalization. We provide an explanation of the pipeline, wrench, and object alignment in the Appendix.

2.1 Pose and Shape Estimation with Differentiable SDF

This subsection introduces the pipeline for pose estimation and shape reconstruction from raw videos. We adopt the differentiable SDF (Diff-SDF) [12] to represent the object geometry, which has a large representation capability to model diverse objects with various topology structures. Moreover, Diff-SDF enables smooth image-based optimization due to its inherent convexity [12]. The explicit surface mesh O can be extracted from the SDF using the marching cubes method [13].
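As a concrete reference for this extraction step, here is a minimal sketch assuming scikit-image's marching-cubes implementation and an SDF sampled on a regular grid; the grid resolution, bounds, and function name are illustrative assumptions.

```python
import numpy as np
from skimage import measure

def sdf_to_mesh(sdf_fn, resolution=128, bound=1.0):
    """Extract the zero level set of a signed distance function as a mesh.

    `sdf_fn` maps an (M, 3) array of points to M signed distances; the
    object surface is the 0-isosurface recovered by marching cubes.
    """
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    values = sdf_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    verts, faces, normals, _ = measure.marching_cubes(values, level=0.0)
    # Map voxel indices back to world coordinates.
    verts = -bound + verts * (2 * bound / (resolution - 1))
    return verts, faces, normals
```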
Ideally, given an initialization of an SDF parameterized by φ and its associated 6D poses at each frame {P_t}_{t=1}^N, the differentiable renderer R produces a sequence of images {I_t^{(φ, P_t)}}_{t=1}^N = {R(φ, P_t)}_{t=1}^N. Diff-SDF optimizes the SDF parameters to reconstruct the object shape by reducing the reconstruction loss:

L_R = \sum_{t=1}^{N} \| I_t^{(\phi, P_t)} - I_t \|    (1)

However, this approach encounters difficulties when applied to real-world videos for the following reasons: unknown camera poses and lack of views for unseen regions. The SDF optimization presumes that the camera poses of I_t are known in advance, which is not valid for real-world videos where camera poses are not provided. To estimate the camera poses {P_t^{-1}}_{t=1}^N, we employ differentiable rendering to hierarchically produce an explicit surface mesh Ô with texture denoted as T̂, and jointly estimate the object poses {P̂_t}_{t=1}^N over multiple images. Details are provided in the Appendix.

Figure 1: The proposed model-based learning from demonstration (LfD) pipeline can be divided into two primary components. The top part focuses on object shape reconstruction and pose estimation, employing differentiable mesh rendering and a signed distance function (SDF) (Section 2.1). The bottom part illustrates the process of contact-aware hierarchical manipulation planning, involving contact point localization and differentiable wrench optimization (Section 2.2).

We denote the optimized mesh, textures, and poses from differentiable rendering as Ô*, T̂*, {P̂_t^*}, respectively, and their associated rendered images Î_t^* = R(Ô*, T̂*, {P̂_t^*}):

\hat{O}^*, \hat{T}^*, \{\hat{P}_t^*\} = \arg\min_{\hat{O}, \hat{T}, \{\hat{P}_t\}} \sum_{t=1}^{N} \| \hat{I}_t - I_t \| \quad \text{where } \hat{I}_t = R(\hat{O}, \hat{T}, \{\hat{P}_t\})    (2)

The quality of the mesh Ô* is usually not satisfying. We then optimize the Diff-SDF to get an optimized SDF φ* by setting the camera poses to the inverses of the optimized poses, {P̂_t^{*-1}}, to reduce the projection loss:

\phi^* = \arg\min_{\phi} \sum_{t=1}^{N} \| I_t^{(\phi, \hat{P}_t^*)} - \hat{I}_t^* \| \quad \text{where } \hat{I}_t^* = R(\hat{O}^*, \hat{T}^*, \{\hat{P}_t^*\})    (3)

After the Diff-SDF optimization, the resulting surface mesh Ô** is extracted from the SDF φ*. The mesh Ô** is then leveraged to optimize the object poses {P̂_t^{**}} as below. The process in Eqs. 2–4 iterates until we reach a loss below a given threshold or the maximum iteration number.

\{\hat{P}_t^{**}\}, \hat{T}^{**} = \arg\min_{\{\hat{P}_t\}, \hat{T}} \sum_{t=1}^{N} \| \hat{I}_t - I_t \| \quad \text{where } \hat{I}_t = R(\hat{O}^{**}, \hat{T}, \{\hat{P}_t\})    (4)

Although each video contains multiple frames, there are still cases that lack sufficient views, resulting in poorly reconstructed unseen regions of the object. To address the incomplete views, we adopt a diffusion model [14] to infer the unseen areas. Our model takes the first real image I_1 with a known camera pose and synthesizes images from different viewpoints around the object. We then combine these synthetic views with the others for a complete object-shape reconstruction.
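The iteration over Eqs. (2)–(4) can be summarized as an alternating optimization loop. Below is a minimal sketch in which every subroutine bundled in the hypothetical `steps` object stands in for one differentiable-rendering stage; none of these names are APIs from the paper.

```python
def reconstruct_shape_and_poses(frames, steps, max_iters=10, tol=1e-3):
    """Alternating optimization over Eqs. (2)-(4); `steps` is hypothetical."""
    mesh, texture, poses = steps.init(frames)
    for _ in range(max_iters):
        # Eq. (2): jointly fit an explicit textured mesh and per-frame poses
        # by minimizing the image reconstruction loss.
        mesh, texture, poses = steps.fit_mesh_and_poses(frames, mesh, texture, poses)
        # Eq. (3): refit the SDF against renders of the optimized mesh,
        # treating the estimated poses as fixed camera poses.
        sdf = steps.fit_sdf(frames, mesh, texture, poses)
        mesh = steps.marching_cubes(sdf)   # extract the explicit surface
        # Eq. (4): refine the poses (and texture) against the new mesh.
        poses, texture, loss = steps.refit_poses(frames, mesh, texture, poses)
        if loss < tol:                     # stop once the loss is small enough
            break
    return sdf, mesh, poses
```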
2.2 Contact-Aware Manipulation Policy

Building on the estimated object pose and shape, this subsection delves into the process of manipulating an object between two consecutive poses P_t and P_{t+1}. If the robot is provided with a similar but different object from the one recorded in the video, we align the pose of the provided manipulated object with the reconstructed object, with details in the Appendix.

Our framework employs a hierarchical structure consisting of low-level modules for contact-point localization and contact-force optimization, as well as high-level contact sequence planning. The low-level modules serve dual purposes: contact-point localization allows the robot to establish new contacts while keeping the object stationary, whereas contact-force optimization enables the robot to manipulate the object toward its target and maintain stable contact. These low-level actions are then orchestrated by the high-level contact sequence planning module to form a cohesive sequence of actions. To facilitate efficient and robust real-time deployment, we also incorporate a neural policy designed to imitate the planned trajectories.

Contact point localization. Contact point localization enables the robot to change contact points on the object, which encompasses two critical steps: the generation of the transition target and the execution of the transition. The transition target is calculated analytically with the desired object transformation wrench W. The wrench W is located at the object's center and represents the wrench necessary to facilitate the transformation from P_t to P_{t+1}. It is determined using a Proportional-Derivative (PD) controller: W = k_p (P_{t+1} - P_t) - k_d \dot{P}_t + g, where k_p and k_d are the proportional and derivative gains, and g signifies the gravitational and external forces acting upon the object. Following this, we use an enumeration process to identify all plausible contact combinations that can generate the desired wrench W. Initially, all potential contacts are assessed to single out those capable of producing the desired object wrench W through contact points {p_i} for i ∈ [1..n]. The number of contact points n is pre-determined based on the manipulation task. Further filtering is applied to ascertain contacts that meet kinematic and stationary constraints: the inverse kinematics are solved to verify kinematic feasibility, and only one contact point can move at a given time while the remaining contact points hold the object immobile. Although multiple contact points could theoretically move while keeping the object stationary, we found that planning is considerably more complex due to the enlarged search space, and the objects are prone to unintended movement due to execution errors. To execute the transition, we use the Rapidly-exploring Random Tree (RRT) motion planner to generate a feasible trajectory for the moving contact, along with a gravity compensation wrench on the remaining contacts to hold the object. More details of the contact point localization are provided in the Appendix.

Contact wrench optimization. Once the contact points p = {p_i} are determined, the robot exerts contact wrenches W_p = {W_{p_i}} at the contact points to manipulate the object toward its target. The objective and loss functions to optimize the contact wrenches are defined in Eq. 5:

\min_{W_p} L(W_p) = \lambda_P \| P' \ominus P_{t+1} \| + \lambda_v \| \dot{P}' \| + \lambda_W \| W_p \| \quad \text{subject to } P' = F(P, W_p)    (5)

where F represents the contact dynamics, P' is the 6D object pose after applying the wrench W_p from the initial object pose P, P_{t+1} is the target object pose, and ⊖ represents subtraction for 6D poses. \dot{P}' represents the object's velocity and is added to damp the object's speed, making the manipulation more stable [15]. λ_P, λ_v, and λ_W are hyperparameters standing for loss weights.

We propose using gradient-based methods to optimize the objective function, Eq. 5, with differentiable simulation in Nimble Physics [16] to approximate the forward dynamics F. We use s = (P, p) as the concatenated state of the system in the remainder of this paper. The gradient of the objective can be computed as ∇ = ∂L/∂W_p from the simulation.
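Putting the two low-level pieces together, here is a minimal sketch of the PD target wrench and the Eq. (5) objective. It assumes a hypothetical differentiable simulator step `sim_step(pose, w_p) -> (new_pose, new_vel)` standing in for F, a hypothetical `pose_diff` implementing the 6D subtraction ⊖, and illustrative gains and loss weights; none of these values come from the paper.

```python
import numpy as np

def pd_target_wrench(pose_err, vel, gravity_wrench, kp=50.0, kd=5.0):
    # PD rule from the text: W = kp*(P_{t+1} - P_t) - kd*Pdot_t + g.
    # kp/kd are illustrative placeholders, not values from the paper.
    return kp * np.asarray(pose_err) - kd * np.asarray(vel) + np.asarray(gravity_wrench)

def wrench_loss(w_p, pose, target_pose, sim_step, pose_diff,
                lam_p=1.0, lam_v=0.1, lam_w=1e-3):
    """Eq. (5): pose error + velocity damping + wrench magnitude."""
    new_pose, new_vel = sim_step(pose, w_p)            # P' = F(P, W_p)
    return (lam_p * np.linalg.norm(pose_diff(new_pose, target_pose))
            + lam_v * np.linalg.norm(new_vel)          # damp the object's speed
            + lam_w * np.linalg.norm(w_p))             # prefer small wrenches
```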
However, gradients near the contact are often nonlinear, sensitive, and discontinuous, posing challenges for vanilla gradient descent optimization methods. To address this issue, this paper draws inspiration from [17–20] and proposes computing the gradient expectation at each point under Gaussian noise, as shown in Eq. 6. The contact wrench is then updated using a step size α along the gradient direction. In this paper, we use the analytical contact wrench computed during the contact point localization as the initial solution for the optimization process.

\nabla = \mathbb{E}_{n_s, n_W \sim \mathcal{N}} \left[ \frac{\partial L(F(s + n_s, W_p + n_W), W_p + n_W)}{\partial W_p} \right]    (6)

Algorithm 1 Global Planning for Manipulation Sequences
1: Input: s_0 = (P_t, p_t), target object pose P_{t+1}
2: Output: R = {s}
3: Q ← {s_0}, R ← {∅}                          ▷ Init.
4: while Q is not empty do
5:   s ← SelectNode(Q)
6:   if IsSuccess(s, P_{t+1}) then
7:     return R                                ▷ Exit if success
8:   end if
9:   s' ← OptWrench(s, P_{t+1})                ▷ Opt. wrench
10:  if OptIsSuccess(s, s') then
11:    Q ← Q ∪ s'; R ← R ∪ s'
12:  else
13:    S ← ContactLoc(s, P_{t+1})              ▷ Loc. contacts
14:    for s' ∈ S do
15:      Q ← Q ∪ s', R ← R ∪ s'
16:    end for
17:  end if
18: end while
19: return R

High-level planning. Our global contact sequence planning, as detailed in Algorithm 1, employs hierarchical planning to identify viable manipulation sequences, utilizing the previously introduced contact point localization (ContactLoc) and contact wrench optimization (OptWrench). While the exertion of contact wrenches allows the robot to perform manipulation tasks involving nearby target poses, switching between multiple contacts is necessary when dealing with distant targets due to kinematic limitations.

Every node s in the planning tree encapsulates the object pose P and contact p. The tree begins with a start node that represents the initial object pose and robot contact, with the goal of reaching the target object pose P_{t+1}. At each iteration, a node s is chosen and expanded using ContactLoc or OptWrench, following A* search [21]. If the object has been successfully manipulated through the exertion of an optimized contact wrench (OptIsSuccess), the resulting node s', containing the manipulated object pose and contact points, is expanded. Conversely, if the exertion fails, contact localization is performed by identifying a new set of contacts and expanding them in the tree. This planning algorithm continues until either the target object pose is reached (IsSuccess) or all nodes within the tree have been explored. The search procedure we propose focuses on optimizing contact wrenches first and resorts to locating new contacts only if the exertion fails. Although this approach narrows down the search space, it may also prune potentially valid and optimal paths. For instance, transitioning contacts before reaching the kinematic limit might result in a shorter trajectory with fewer contact switches. To address this, we introduce a random chance for each node to transition contacts, regardless of the wrench optimization outcome. This design promotes exploration within the planning process, enabling a more comprehensive discovery of the entire search space.
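Returning to the low-level wrench optimizer, the noise-averaged gradient of Eq. (6) can be estimated by straightforward Monte Carlo sampling. Below is a minimal sketch, where `grad_fn` is a hypothetical callable returning ∂L/∂W_p from the differentiable simulator, and the noise scales and sample count are illustrative.

```python
import numpy as np

def smoothed_gradient(grad_fn, s, w_p, sigma_s=1e-3, sigma_w=1e-2, n_samples=16):
    """Monte Carlo estimate of Eq. (6): average dL/dW_p under Gaussian
    perturbations of the state s and contact wrench W_p, which smooths
    the discontinuous contact gradients."""
    s, w_p = np.asarray(s), np.asarray(w_p)
    grads = []
    for _ in range(n_samples):
        n_s = np.random.normal(0.0, sigma_s, size=s.shape)
        n_w = np.random.normal(0.0, sigma_w, size=w_p.shape)
        grads.append(grad_fn(s + n_s, w_p + n_w))
    return np.mean(grads, axis=0)

# Update with step size alpha along the descent direction:
# w_p = w_p - alpha * smoothed_gradient(grad_fn, s, w_p)
```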
Sim2Real: closed-loop policy with domain randomization. Despite its efficacy in generating viable manipulation trajectories, the above-described planning algorithm is computationally demanding, as it requires online enumeration of contacts and wrench optimization, rendering it unsuitable for real-time applications. To surmount this challenge, we utilize deep learning to approximate the manipulation policy. We leverage a fully connected network to learn the robot control commands derived from the high-level planning algorithm. This network ingests the object pose and joint angles as inputs and outputs robot joint torques. These torques are obtained by mapping the contact wrenches into joint torques via the Jacobian. Our training dataset is generated by solving the planning problem under conditions of noisy initial and target positions and perturbed system dynamics. This process results in a set of state-torque training pairs. We further augment each sample by introducing noise into the states and optimizing the joint torques to adhere to the planned trajectory. It is noteworthy that for domain randomization, we optimize the contact wrench to reach the next state along the trajectory rather than solving the original planning problem with a distant target, which makes the data augmentation process more efficient. To further enhance performance, we fine-tune the network within a Markov Decision Process (MDP) framework using the REINFORCE algorithm [22]. During the fine-tuning process, the state and action spaces maintain the same setup as described earlier, while the reward function is defined as r(s, τ) = -λ_P ||s ⊖ P_{t+1}|| - λ_v ||\dot{s}|| - λ_W ||τ||, where P_{t+1} is the target object pose and \dot{s} denotes the object velocity [15]. λ_P, λ_v, and λ_W are hyperparameters.

Figure 2: Experimental results from sth-sth (1st & 2nd rows) and in-hand object manipulation (3rd & 4th rows).

Figure 3: Baseline comparisons on the LfD framework. Each cell represents the success rate of the manipulation.

               Pull Right  Pull Left  Push Right  Push Left
Baseline [23]  0.976       0.992      0.994       0.946
Ours           1.000       1.000      1.000       1.000

Figure 4: Baseline comparisons on the contact-aware manipulation policy. The first element in each cell is the mean/variance of the computation time (s); the second is the difference between the target and final object rotation (°).

          RRT           CITO          PGDM            iLQR          Ours
Ball      122±20, 7.2°  52±8, 16.7°   2.14±0.4, 2.4°  57±10, 11.2°  62±12, 2.6°
Cube      136±16, 9.0°  60±7, 18.5°   2.16±0.3, 4.1°  70±19, 13.3°  78±16, 3.8°
Capsule   127±24, 8.4°  63±4, 15.2°   2.18±0.3, 9.3°  80±6, 12.2°   81±8, 7.7°

3 Experiments

This section offers both quantitative and qualitative assessments of our proposed methodology. Our experiments are designed to address the following research questions: 1) How does our Diff-LfD framework compare to baselines that also rely on visual demonstrations? 2) What is the efficacy of our contact-aware manipulation algorithm in generating long-horizon trajectories? 3) Is our approach feasible for deployment in real-world scenarios? 4) How accurate is our self-supervised object reconstruction and tracking? 5) What is the utility of the views synthesized by the diffusion model? 6) What impact do gradient-based optimization, global planning, and random contact transitions have on performance? 7) How robust are the generated trajectories and the closed-loop policy? We conducted evaluations in two distinct experimental settings: basic manipulation tasks involving primitive objects and more complex in-hand object manipulation tasks. Experimental setups and ablation studies addressing questions 4–7 are elaborated in the Appendix.
Baseline comparisons on the LfD framework. We compare our model-based approach with the method introduced in [23]. Petrík et al. [23] present an optimization-based method to estimate a coarse 3D state representation, using a cylinder for the hand and a cuboid for the manipulated object(s). Such a coarse approximation limits the representation capability and the quality of the state estimation. We utilize our object reconstruction and tracking to estimate the object trajectory and use contact planning to find a path. We select videos of 4 classes from the sth-sth dataset [10]: "Pull Right" with 164 videos, "Pull Left" with 130 videos, "Push Right" with 89 videos, and "Push Left" with 253 videos. We report the results in Fig. 3. Our approach successfully finished all the classes and slightly outperformed the method introduced in [23]. One explanation is that these four types of videos are simple for our proposed pipeline to imitate. Thus, we also apply our method to in-hand manipulation tasks from raw videos to test the limits of our proposed framework. Fig. 2 shows the manipulation trajectories in two environments generated by our method.

Baseline comparisons on shape reconstruction and pose estimation. In contrast to other learning-based approaches for shape reconstruction and pose estimation, such as Neural Radiance Fields (NeRF), our perception module operates under a distinct task setting. Specifically, our input consists of a single-object RGB video featuring objects that undergo both rotation and translation. Most NeRF-based methods, on the other hand, rely on multiple static object poses with known camera positions. Some NeRF implementations utilize COLMAP [24] to initialize camera poses. However, this approach is less effective in our setting, where the background remains largely unchanged and the frame count is limited. These factors hinder COLMAP's ability to accurately estimate object poses, leading to unstable object surface reconstructions from NeRF. Further experimental comparisons with Nope-NeRF [25], which also employs COLMAP for initialization, are available on the webpage. Our findings indicate that Nope-NeRF fails to converge in more than half of the test cases (5 out of 9), resulting in empty reconstructions. The remaining cases yielded incorrect pose estimations and reconstructions when compared to our method.

Baseline comparisons on the contact-aware manipulation policy. In this study, we evaluate our contact-aware trajectory planning algorithm against four established baselines within the context of in-hand object rotation tasks. The baselines are as follows: 1) The Rapidly-exploring Random Tree (RRT) planner, as outlined in [17], employs random sampling within a configuration space defined by both robot joint positions and object poses to identify feasible trajectories. 2) Contact-Implicit Trajectory Optimization (CITO) [26] first establishes a predefined trajectory for the object, then identifies optimal contact points along this path before calculating the requisite control inputs for trajectory tracking. 3) Pre-Grasp Informed Dexterous Manipulation (PGDM) [27] utilizes reinforcement learning to train manipulation agents, incorporating pre-computed grasp data to achieve the desired manipulation trajectory. 4) The Iterative Linear Quadratic Regulator (iLQR) [17] employs local approximations of the dynamical system to iteratively solve for optimal manipulation strategies through quadratic planning. For the purposes of this experiment, our algorithm operates without the closed-loop policy detailed in Sec. 2.2. We apply the baselines to three in-hand manipulation tasks involving the ball, the cube, and the capsule.
We adopt two evaluation metrics: the average planning time and the difference between the target rotation and the final object rotation. Results are reported in Fig. 4.

While both PGDM and iLQR boast the quickest inference times, it is crucial to highlight that PGDM requires approximately 5 hours of training for each task, and iLQR suffers from a higher tracking error compared to our method. The RRT approach uniformly expands its search tree, thereby increasing the probability of encountering unstable contacts and consequently requiring the most time to complete the task. In contrast, our algorithm and CITO focus on a more constrained search space where stable grasping is feasible, thereby simplifying the search complexity. Furthermore, our empirical results indicate that the final error rates of all baseline methods were consistently higher than those of our algorithm. Specifically, the RRT approach lacks a guarantee of optimal trajectory sampling, CITO overlooks physical dynamics during the planning phase, and iLQR struggles with optimization over nonlinear loss contours. These limitations render the baseline methods susceptible to failure due to dynamic uncertainties and execution errors.

Figure 5: Allegro Hand performs in-hand object manipulations.

Real-world experiments. We conducted real-world experiments for in-hand object manipulation. The experimental setups are illustrated in Fig. 5. We trained the closed-loop policy as discussed in Sec. 2.2 to imitate a human rotating a cube and deployed it as the robot controller. Experimental videos are available on the webpage. This network receives the current joint angles of the robot and the current object poses as input, and outputs the joint torques. We performed the in-hand manipulation task with the Allegro Hand for different primitive objects and initial poses. During supervised learning, convergence of the policy is achieved in approximately 9.3 minutes, while fine-tuning takes an average of 50.1 minutes. The differences between the target and final object rotations are Cylinder (3.8°), Ball (2.4°), Lemon (6.3°), and Avocado (5.9°), which further underscores the ability of our closed-loop policy to generalize across similar but distinct geometries.

Additional experiments covering topics such as object reconstruction and tracking, the efficacy of the diffusion model, gradient-based optimization techniques, random contact transitions, and robustness analyses are provided in the Appendix.

4 Related Work

Learning from visual demonstration. We focus our review on approaches that utilize visual data or adopt a model-based approach. For a broader review, we refer readers to [28]. One line of work in LfD [29–35] learns the cost and reward function from visual demonstrations. To extract knowledge from images/videos, various works [36–42] adopt representation-learning approaches to distill low-dimensional latent state and action representations. Developing explicit representations for LfD has received little attention due to limited representation capability. Petrík et al. [23] adopt coarse 3D cylinders and cuboids to represent the hand and manipulated objects. To tackle the representation capability issue, we propose a self-supervised modeling pipeline that estimates the fine-level geometric shape of the object and its associated 6D pose sequences from raw videos. Moreover, our approach infers the contact forces underlying these poses, contributing to model-based learning algorithms.

Differentiable simulation and rendering in robotic manipulation.
Recently, great progress has been made in the fields of differentiable physics-based simulation and differentiable rendering [43–58]; for a broader review, please refer to [59, 60]. These differentiable tools have been applied to robotic manipulation tasks [20, 61–69]. We use differentiable physics simulation to optimize the contact forces for in-hand manipulation tasks and propose an iterative pose estimation and shape reconstruction pipeline from raw RGB videos via differentiable rendering [70] and differentiable signed distance functions [12].

Model-based manipulation. The use of contact dynamics often leads to non-convex optimization problems, causing difficulties in finding local optima due to the discontinuity introduced by contact switching [19, 71, 72]. Contact-implicit trajectory optimization (CITO) [73–76] addresses this issue by planning manipulation actions without a pre-specified contact schedule. Chen et al. [26] further consider finger gaiting primitives in trajectory planning but assume reachable states and pre-defined object trajectories, leading to potential failures due to ignorance of dynamical restrictions and control errors. Pang et al. [17] use a convex quasi-dynamics model with a rapidly exploring random tree (RRT) to directly search for feasible manipulation actions at the dynamical level, although the manipulated object lies on a surface and does not require consideration of gravity. Our work plans manipulation actions directly at the dynamical level to address system noise and utilizes differentiable physics simulation for contact optimization and contact localization for efficient search.

5 Limitations

For object shape reconstruction and pose estimation, we assume that the RGB videos are segmented and that the majority of the mass is concentrated at the geometric center. Our pipeline currently works only with rigid bodies, not with articulated rigid bodies or deformable objects. Although we leverage the diffusion model to mitigate occlusion and reduce shape reconstruction uncertainty, the quality varies for real RGB videos when testing our pipeline in the wild. For contact-aware manipulation, we focus on tasks that require the object to move along a demonstrated trajectory. Generalizing the algorithm to other tasks with sparse rewards is left for future work. Our approach relies on differentiable physics-based simulation to generate the contact wrench, with domain randomization to reduce the sim2real gap. Complicated physical interactions might fail to be captured by the differentiable physics simulation. We are interested in adding residual/learning layers to augment the differentiable physics simulation to align with the real world in future work.

6 Conclusion

This paper investigates the use of model-based learning from demonstrations for robotic manipulation tasks, contributing several significant aspects to the field. First, we introduce a new framework for learning from human visual demonstrations in a self-supervised manner, which has the potential to generate robot skills at a large scale. Second, we utilize differentiable rendering to track object poses in a self-supervised manner. Third, we design a high-level planning framework that employs differentiable simulations to generate long-horizon contact actions. This includes inferring and transitioning contact points, optimizing contact forces, and exerting them. The manipulation trajectories are then approximated by a neural network.
Finally, we conduct experiments to evaluate the effectiveness of our approach from multiple angles. Our results demonstrate the robustness and efficiency of our proposed method in learning from human demonstrations, outperforming existing approaches by a large margin.

Acknowledgments

Xinghao Zhu is supported by the UCB-FANUC Fellowship. This work was in part supported by a startup grant from the National University of Singapore.

References

[1] X. Zhu, R. Tian, C. Xu, M. Huo, W. Zhan, M. Tomizuka, and M. Ding. Fanuc manipulation: A dataset for learning-based manipulation with Fanuc Mate 200iD robot. https://sites.google.com/berkeley.edu/fanuc-manipulation, 2023.
[2] L. Sun, H. Zhang, W. Xu, and M. Tomizuka. Efficient multi-task and transfer reinforcement learning with parameter-compositional framework. IEEE Robotics and Automation Letters, 8(8):4569–4576, 2023.
[3] L. Sun, H. Zhang, W. Xu, and M. Tomizuka. PaCo: Parameter-compositional multi-task reinforcement learning. In NeurIPS, 2022.
[4] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, J. Peters, et al. An algorithmic perspective on imitation learning. Foundations and Trends in Robotics, 7(1-2):1–179, 2018.
[5] S. Jin, X. Zhu, C. Wang, and M. Tomizuka. Contact pose identification for peg-in-hole assembly under uncertainties. In 2021 American Control Conference (ACC), pages 48–53. IEEE, 2021.
[6] C. Wang, Y. Zhang, X. Zhang, Z. Wu, X. Zhu, S. Jin, T. Tang, and M. Tomizuka. Offline-online learning of deformation model for cable manipulation with graph neural networks. IEEE Robotics and Automation Letters, 7(2):5544–5551, 2022.
[7] M. Huo, M. Ding, C. Xu, T. Tian, X. Zhu, Y. Mu, L. Sun, M. Tomizuka, and W. Zhan. Human-oriented representation learning for robotic manipulation. arXiv preprint arXiv:2310.03023, 2023.
[8] X. Zhang, S. Jin, C. Wang, X. Zhu, and M. Tomizuka. Learning insertion primitives with discrete-continuous hybrid action space for robotic assembly tasks. In 2022 International Conference on Robotics and Automation (ICRA), pages 9881–9887. IEEE, 2022.
[9] X. Zhang, C. Wang, L. Sun, Z. Wu, X. Zhu, and M. Tomizuka. Efficient sim-to-real transfer of contact-rich manipulation skills with online admittance residual learning. In 7th Annual Conference on Robot Learning, 2023.
[10] R. Goyal, S. E. Kahou, V. Michalski, J. Materzynska, S. Westphal, H. Kim, V. Haenel, I. Fründ, P. Yianilos, M. Mueller-Freitag, F. Hoppe, C. Thurau, I. Bax, and R. Memisevic. The "something something" video database for learning and evaluating visual common sense. CoRR, abs/1706.04261, 2017. URL https://20bn.com/datasets/something-something.
[11] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollár, and R. Girshick. Segment anything. arXiv:2304.02643, 2023.
[12] D. Vicini, S. Speierer, and W. Jakob. Differentiable signed distance function rendering. Transactions on Graphics (Proceedings of SIGGRAPH), 41(4):125:1–125:18, July 2022. doi:10.1145/3528223.3530139.
[13] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In SIGGRAPH '87, pages 163–169, New York, NY, USA, 1987. Association for Computing Machinery. ISBN 0897912276. doi:10.1145/37401.37422. URL https://doi.org/10.1145/37401.37422.
[14] R. Liu, R. Wu, B. V. Hoorick, P. Tokmakov, S. Zakharov, and C. Vondrick. Zero-1-to-3: Zero-shot one image to 3D object. arXiv:2303.11328, 2023.
[15] H. Qi, A. Kumar, R. Calandra, Y. Ma, and J. Malik.
In-hand object rotation via rapid motor adaptation. In Conference on Robot Learning (CoRL), 2022.
[16] Nimble. Nimble Physics documentation. https://nimblephysics.org/docs.
[17] T. Pang, H. J. T. Suh, L. Yang, and R. Tedrake. Global planning for contact-rich manipulation via local smoothing of quasi-dynamic contact models, 2022.
[18] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In G. Gordon, D. Dunson, and M. Dudík, editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of Proceedings of Machine Learning Research, pages 627–635, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR. URL https://proceedings.mlr.press/v15/ross11a.html.
[19] R. Antonova, J. Yang, K. M. Jatavallabhula, and J. Bohg. Rethinking optimization with differentiable simulation from a global perspective. In 6th Annual Conference on Robot Learning, 2022.
[20] X. Zhu, W. Lian, B. Yuan, C. D. Freeman, and M. Tomizuka. Allowing safe contact in robotic goal-reaching: Planning and tracking in operational and null spaces. IEEE International Conference on Robotics and Automation (ICRA), 2023.
[21] P. Hart, N. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107, 1968. doi:10.1109/tssc.1968.300136. URL https://doi.org/10.1109/tssc.1968.300136.
[22] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3–4):229–256, May 1992. ISSN 0885-6125. doi:10.1007/BF00992696.
[23] V. Petrík, M. Tapaswi, I. Laptev, and J. Sivic. Learning object manipulation skills via approximate state estimation from real videos. In Conference on Robot Learning, pages 296–312. PMLR, 2021.
[24] J. L. Schönberger and J.-M. Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[25] W. Bian, Z. Wang, K. Li, J.-W. Bian, and V. A. Prisacariu. Nope-NeRF: Optimising neural radiance field with no pose prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4160–4169, 2023.
[26] C. Chen, P. Culbertson, M. Lepert, M. Schwager, and J. Bohg. Trajectotree: Trajectory optimization meets tree search for planning multi-contact dexterous manipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8262–8268, 2021. doi:10.1109/IROS51168.2021.9636346.
[27] S. Dasari, A. Gupta, and V. Kumar. Learning dexterous manipulation from exemplar object trajectories and pre-grasps. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 3889–3896. IEEE, 2023.
[28] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, 2009.
[29] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inverse reinforcement learning from visual demonstrations. In Conference on Robot Learning, pages 1930–1942. PMLR, 2021.
[30] A. Singh, L. Yang, K. Hartikainen, C. Finn, and S. Levine. End-to-end robotic reinforcement learning without reward engineering. arXiv preprint arXiv:1904.07854, 2019.
[31] L. Smith, N. Dhawan, M. Zhang, P. Abbeel, and S. Levine. AVID: Learning multi-stage tasks via pixel-level translation of human videos, 2020.
[32] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier.
Model-based inverse reinforcement learning from visual demonstration. In Conference on Robot Learning (CoRL), 2020.
[33] L. Shao, T. Migimatsu, Q. Zhang, K. Yang, and J. Bohg. Concept2Robot: Learning manipulation concepts from instructions and human demonstrations. The International Journal of Robotics Research, 40(12-14):1419–1434, 2021.
[34] Y. Qin, Y.-H. Wu, S. Liu, H. Jiang, R. Yang, Y. Fu, and X. Wang. DexMV: Imitation learning for dexterous manipulation from human videos. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIX, pages 570–587. Springer, 2022.
[35] K. Shaw, S. Bahl, and D. Pathak. VideoDex: Learning dexterity from internet videos. In Conference on Robot Learning, pages 654–665. PMLR, 2023.
[36] X. Zhu, L. Sun, Y. Fan, and M. Tomizuka. 6-DoF contrastive grasp proposal network. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6371–6377. IEEE, 2021.
[37] M. Yang and O. Nachum. Representation matters: Offline pretraining for sequential decision making. In International Conference on Machine Learning, pages 11784–11794. PMLR, 2021.
[38] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
[39] X. Zhu, Y. Zhou, Y. Fan, L. Sun, J. Chen, and M. Tomizuka. Learn to grasp with less supervision: A data-efficient maximum likelihood grasp sampling loss. In 2022 International Conference on Robotics and Automation (ICRA), pages 721–727. IEEE, 2022.
[40] D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pages 2555–2565. PMLR, 2019.
[41] D. Hafner, T. Lillicrap, M. Norouzi, and J. Ba. Mastering Atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020.
[42] X. Zhu, S. Jain, M. Tomizuka, and J. Van Baar. Learning to synthesize volumetric meshes from vision-based tactile imprints. In 2022 International Conference on Robotics and Automation (ICRA), pages 4833–4839. IEEE, 2022.
[43] F. de Avila Belbute-Peres, K. Smith, K. Allen, J. Tenenbaum, and J. Z. Kolter. End-to-end differentiable physics for learning and control. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/842424a1d0595b76ec4fa03c46e8d755-Paper.pdf.
[44] J. Degrave, M. Hermans, J. Dambre, et al. A differentiable physics engine for deep learning in robotics. Frontiers in Neurorobotics, page 6, 2019.
[45] Y. Hu, T.-M. Li, L. Anderson, J. Ragan-Kelley, and F. Durand. Taichi: A language for high-performance computation on spatially sparse data structures. ACM Transactions on Graphics (TOG), 38(6):201, 2019.
[46] K. M. Jatavallabhula, M. Macklin, F. Golemo, V. Voleti, L. Petrini, M. Weiss, B. Considine, J. Parent-Lévesque, K. Xie, K. Erleben, et al. gradSim: Differentiable simulation for system identification and visuomotor control. arXiv preprint arXiv:2104.02646, 2021.
[47] M. Geilinger, D. Hahn, J. Zehnder, M. Bächer, B. Thomaszewski, and S. Coros. ADD: Analytically differentiable dynamics for multi-body systems with frictional contact. ACM Transactions on Graphics (TOG), 39(6):1–15, 2020.
[48] T. Du, K. Wu, P. Ma, S. Wah, A. Spielberg, D. Rus, and W. Matusik. DiffPD: Differentiable projective dynamics. ACM Trans. Graph.
, 41(2), November 2021. ISSN 0730-0301. doi:10.1145/3490168. URL https://doi.org/10.1145/3490168.
[49] J. Liang, M. Lin, and V. Koltun. Differentiable cloth simulation for inverse problems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/28f0b864598a1291557bed248a998d4e-Paper.pdf.
[50] Y.-L. Qiao, J. Liang, V. Koltun, and M. C. Lin. Scalable differentiable physics for learning and control. arXiv preprint arXiv:2007.02168, 2020.
[51] Y. Li, T. Du, K. Wu, J. Xu, and W. Matusik. DiffCloth: Differentiable cloth simulation with dry frictional contact. ACM Transactions on Graphics (TOG), 42(1):1–20, 2022.
[52] K. Werling, D. Omens, J. Lee, I. Exarchos, and C. K. Liu. Fast and feature-complete differentiable physics for articulated rigid bodies with contact. In Proceedings of Robotics: Science and Systems (RSS), July 2021.
[53] S. Ha, S. Coros, A. Alspach, J. Kim, and K. Yamane. Joint optimization of robot design and motion parameters using the implicit function theorem. In S. Srinivasa, N. Ayanian, N. Amato, and S. Kuindersma, editors, Robotics: Science and Systems, 2017. MIT Press Journals. doi:10.15607/rss.2017.xiii.003.
[54] Y.-L. Qiao, J. Liang, V. Koltun, and M. C. Lin. Efficient differentiable simulation of articulated bodies. In International Conference on Machine Learning, pages 8661–8671. PMLR, 2021.
[55] K. Um, R. Brand, Y. R. Fei, P. Holl, and N. Thuerey. Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE-solvers. Advances in Neural Information Processing Systems, 33:6111–6122, 2020.
[56] N. Wandel, M. Weinmann, and R. Klein. Learning incompressible fluid dynamics from scratch: Towards fast, differentiable fluid models that generalize. arXiv preprint arXiv:2006.08762, 2020.
[57] P. Holl, V. Koltun, and N. Thuerey. Learning to control PDEs with differentiable physics. arXiv preprint arXiv:2001.07457, 2020.
[58] T. Takahashi, J. Liang, Y.-L. Qiao, and M. C. Lin. Differentiable fluids with solid coupling for learning and control. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7):6138–6146, May 2021. doi:10.1609/aaai.v35i7.16764. URL https://ojs.aaai.org/index.php/AAAI/article/view/16764.
[59] H. Kato, D. Beker, M. Morariu, T. Ando, T. Matsuoka, W. Kehl, and A. Gaidon. Differentiable rendering: A survey. arXiv preprint arXiv:2006.12057, 2020.
[60] S. Zhao, W. Jakob, and T.-M. Li. Physics-based differentiable rendering: From theory to implementation. In ACM SIGGRAPH 2020 Courses, pages 1–30, 2020.
[61] J. Lv, Q. Yu, L. Shao, W. Liu, W. Xu, and C. Lu. SAGCI-System: Towards sample-efficient, generalizable, compositional, and incremental robot learning. In 2022 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2022.
[62] X. Lin, Z. Huang, Y. Li, J. B. Tenenbaum, D. Held, and C. Gan. DiffSkill: Skill abstraction from differentiable physics for deformable object manipulations with tools. 2022.
[63] D. Turpin, L. Wang, E. Heiden, Y.-C. Chen, M. Macklin, S. Tsogkas, S. Dickinson, and A. Garg. Grasp'D: Differentiable contact-rich grasp synthesis for multi-fingered hands.
In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VI, pages 201–221. Springer, 2022.
[64] X. Lin, C. Qi, Y. Zhang, Z. Huang, K. Fragkiadaki, Y. Li, C. Gan, and D. Held. Planning with spatial-temporal abstraction from point clouds for deformable object manipulation. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=tyxyBj2w4vw.
[65] K. M. Jatavallabhula, M. Macklin, F. Golemo, V. Voleti, L. Petrini, M. Weiss, B. Considine, J. Parent-Levesque, K. Xie, K. Erleben, L. Paull, F. Shkurti, D. Nowrouzezahrai, and S. Fidler. gradSim: Differentiable simulation for system identification and visuomotor control. International Conference on Learning Representations (ICLR), 2021. URL https://openreview.net/forum?id=c_E8kFWfhp0.
[66] P. Sundaresan, R. Antonova, and J. Bohg. DiffCloud: Real-to-sim from point clouds with differentiable simulation and rendering of deformable objects. arXiv preprint arXiv:2204.03139, 2022.
[67] J. Zhang, Z. Wan, and J. Liao. Adaptive joint optimization for 3D reconstruction with differentiable rendering. IEEE Transactions on Visualization and Computer Graphics, 2022.
[68] J. Huang, J. Thies, A. Dai, A. Kundu, C. Jiang, L. J. Guibas, M. Nießner, T. Funkhouser, et al. Adversarial texture optimization from RGB-D scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1559–1568, 2020.
[69] J. Lv, Y. Feng, C. Zhang, S. Zhao, L. Shao, and C. Lu. SAM-RL: Sensing-aware model-based reinforcement learning via differentiable physics-based simulation and rendering. arXiv preprint arXiv:2210.15185, 2022.
[70] T.-M. Li, M. Aittala, F. Durand, and J. Lehtinen. Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 37(6):222:1–222:11, 2018.
[71] H. J. Suh, M. Simchowitz, K. Zhang, and R. Tedrake. Do differentiable simulators give better policy gradients? In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 20668–20696. PMLR, 17–23 Jul 2022.
[72] Y. Fan, X. Zhu, and M. Tomizuka. Optimization model for planning precision grasps with multi-fingered hands. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1548–1554. IEEE, 2019.
[73] I. Mordatch, Z. Popović, and E. Todorov. Contact-invariant optimization for hand manipulation. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '12, pages 137–144. Eurographics Association, 2012. ISBN 9783905674378.
[74] T. Marcucci, M. Gabiccini, and A. Artoni. A two-stage trajectory optimization strategy for articulated bodies with unscheduled contact sequences. IEEE Robotics and Automation Letters, 2(1):104–111, 2017. doi:10.1109/LRA.2016.2547024.
[75] X. Cheng, E. Huang, Y. Hou, and M. T. Mason. Contact mode guided sampling-based planning for quasistatic dexterous manipulation in 2D. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6520–6526, 2021. doi:10.1109/ICRA48506.2021.9560766.
[76] C. Wang, H.-C. Lin, S. Jin, X. Zhu, L. Sun, and M. Tomizuka. BPOMP: A bilevel path optimization formulation for motion planning. In 2022 American Control Conference (ACC), pages 1891–1897. IEEE, 2022. |
0I3su3mkuL | Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions

Yevgen Chebotar, Quan Vuong, Alex Irpan, Karol Hausman, Fei Xia, Yao Lu, Aviral Kumar, Tianhe Yu, Alexander Herzog, Karl Pertsch, Keerthana Gopalakrishnan, Julian Ibarz, Ofir Nachum, Sumedh Sontakke, Grecia Salazar, Huong T Tran, Jodilyn Peralta, Clayton Tan, Deeksha Manjunath, Jaspiar Singh, Brianna Zitkovich, Tomas Jackson, Kanishka Rao, Chelsea Finn, Sergey Levine
Google DeepMind
Equal contribution. Corresponding emails: chebotar@google.com, quanhovuong@google.com.

Abstract: In this work, we present a scalable reinforcement learning method for training multi-task policies from large offline datasets that can leverage both human demonstrations and autonomously collected data. Our method uses a Transformer to provide a scalable representation for Q-functions trained via offline temporal difference backups. We therefore refer to the method as Q-Transformer. By discretizing each action dimension and representing the Q-value of each action dimension as separate tokens, we can apply effective high-capacity sequence modeling techniques for Q-learning. We present several design decisions that enable good performance with offline RL training, and show that Q-Transformer outperforms prior offline RL algorithms and imitation learning techniques on a large diverse real-world robotic manipulation task suite. The project's website and videos can be found at qtransformer.github.io

1 Introduction

Figure 1: Q-Transformer enables training high-capacity sequential architectures on mixed quality data. Our policies are able to improve upon human demonstrations and execute a variety of manipulation tasks in the real world.

Robotic learning methods that incorporate large and diverse datasets in combination with high-capacity expressive models, such as Transformers [1, 2, 3, 4, 5, 6], have the potential to acquire generalizable and broadly applicable policies that perform well on a wide variety of tasks [1, 2]. For example, these policies can follow natural language instructions [4, 7], perform multi-stage behaviors [8, 9], and generalize broadly across environments, objects, and even robot morphologies [10, 3]. However, many of the recently proposed high-capacity models in the robotic learning literature are trained with supervised learning methods. As such, the performance of the resulting policy is limited by the degree to which human demonstrators can provide high-quality demonstration data. This is limiting for two reasons. First, we would like robotic systems that are more proficient than human teleoperators, exploiting the full potential of the hardware to perform tasks quickly, fluently, and reliably. Second, we would like robotic systems that get better with autonomously gathered experience, rather than relying entirely on high-quality demonstrations. Reinforcement learning in principle provides both of these capabilities. A number of promising recent advances demonstrate the successes of large-scale robotic RL in varied settings, such as robotic grasping and stacking [11, 12], learning heterogeneous tasks with human-specified rewards [13], learning multi-task policies [14, 15], learning goal-conditioned policies [16, 17, 18, 19], and robotic navigation [20, 21, 22, 23, 24].
However, training high-capacity models such as Transformers using RL algorithms has proven more difficult to instantiate effectively at large scale. In this paper, we aim to combine large-scale robotic learning from diverse real-world datasets with modern high-capacity Transformer-based policy architectures.

While in principle simply replacing existing architectures (e.g., ResNets [15] or smaller convolutional neural networks [11, 14]) with a Transformer is conceptually straightforward, devising a methodology that effectively makes use of such architectures is considerably more challenging. High-capacity models only make sense when we train on large and diverse datasets: small, narrow datasets simply do not require this much capacity and do not benefit from it. While prior works used simulation to create such datasets [2, 25, 26], the most representative data comes from the real world [12, 11, 14]. Therefore, we focus on reinforcement learning methods that can use Transformers and incorporate large, previously collected datasets via offline RL. Offline RL methods train on prior data, aiming to derive the most effective possible policy from a given dataset. Of course, this dataset can be augmented with additional autonomously gathered data, but the training is separated from data collection, providing an appealing workflow for large-scale robotics applications [27].

Another issue in applying Transformer models to RL is to design RL systems that can effectively train such models. Effective offline RL methods generally employ Q-function estimation via temporal difference updates [28]. Since Transformers model discrete token sequences, we convert the Q-function estimation problem into a discrete token sequence modeling problem, and devise a suitable loss function for each token in the sequence. Naïvely discretizing the action space leads to exponential blowup in action cardinality, so we employ a per-dimension discretization scheme, where each dimension of the action space is treated as a separate time step for RL. Different bins in the discretization correspond to distinct actions. The per-dimension discretization scheme allows us to use simple discrete-action Q-learning methods with a conservative regularizer to handle distributional shift [29, 30]. We propose a specific regularizer that minimizes the values of every action that was not taken in the dataset and show that our method can learn from both narrow demonstration-like data and broader data with exploration noise. Finally, we utilize a hybrid update that combines Monte Carlo and n-step returns with temporal difference backups [31], and show that doing so improves the performance of our Transformer-based offline RL method on large-scale robotic learning problems.

In summary, our main contribution is the Q-Transformer, a Transformer-based architecture for robotic offline reinforcement learning that makes use of per-dimension tokenization of Q-values and can readily be applied to large and diverse robotic datasets, including real-world data. We summarize the components of Q-Transformer in Figure 1. Our experimental evaluation validates the Q-Transformer by learning large-scale text-conditioned multi-task policies, both in simulation for rigorous comparisons and in large-scale real-world experiments for realistic validation.
Our real-world experiments utilize a dataset with 38,000 successful demonstrations and 20,000 failed autonomously collected episodes on more than 700 tasks, gathered with a fleet of 13 robots. Q-Transformer outperforms previously proposed architectures for large-scale robotic RL [15, 14], as well as previously proposed Transformer-based models such as the Decision Transformer [32, 33].

2 Related Work

Offline RL has been extensively studied in recent works [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. Conservative Q-learning (CQL) [29] learns policies constrained to a conservative lower bound of the value function. Our goal is not to develop a new algorithmic principle for offline RL, but to devise an offline RL system that can integrate with high-capacity Transformers and scale to real-world multi-task robotic learning. We thus develop a version of CQL particularly effective for training large Transformer-based Q-functions on mixed quality data. While some works have noted that imitation learning outperforms offline RL on demonstration data [49], other works showed offline RL techniques to be effective with demonstrations both in theory and in practice [50, 15]. Nonetheless, a setting that combines "narrow" demonstration data with "broad" sub-optimal (e.g., autonomously collected) data is known to be particularly difficult [51, 52, 53], though it is quite natural in many robotic learning settings where we might want to augment a core set of demonstrations with relatively inexpensive low-quality autonomously collected data. We believe that the effectiveness of our method in this setting is of particular interest to practitioners.

Transformer-based architectures [54] have been explored in recent robotics research, both to learn generalizable task spaces [55, 56, 57, 58, 8, 59] and to learn multi-task or even multi-domain sequential policies directly [2, 1, 6, 3]. Although most of these works considered Transformers in a supervised learning setting, e.g., learning from demonstrations [4, 5], there are works on employing Transformers for RL and conditional imitation learning [32, 20, 60, 33]. In our experiments, we compare to Decision Transformer (DT) in particular [32], which extends conditional imitation learning with reward conditioning [61, 62] to use sequence models, and structurally resembles imitation learning methods that have been used successfully for robotic control. Although DT incorporates elements of RL (namely, reward functions), it does not provide a mechanism to improve over the demonstrated behavior or recombine parts of the dataset to synthesize more optimal behaviors, and indeed is known to have theoretical limitations [63].
On the other hand, such imitation-based recipes are popular perhaps due to the difficulty of integrating Transformer architectures with more powerful temporal difference methods (e.g., Q-learning). We show that several simple but important design decisions are needed to make this work, and our method significantly outperforms non-TD methods such as DT, as well as imitation learning, on our large-scale multi-task robotic control evaluation. Extending Decision Transformer, Yamagata et al. [64] proposed to use a Q-function in combination with a Transformer-based policy, but the Q-function itself did not use a Transformer-based architecture. Our Q-function could in principle be combined with this method, but our focus is specifically on directly training Transformers to represent Q-values.

To develop a Transformer-based Q-learning method, we discretize each action space dimension, with each dimension acting as a distinct time step. Autoregressive generation of discrete actions has been explored by Metz et al. [65], who propose a hierarchical decomposition of an MDP and then utilize an LSTM [66] for autoregressive discretization. Our discretization scheme is similar but simpler, in that we do not use any hierarchical decomposition but simply treat each dimension as a time step. However, since our goal is to perform offline RL at scale with real-world image-based tasks (vs. the smaller state-space tasks learned via online RL by Metz et al. [65]), we present a number of additional design decisions to impose a conservative regularizer, enabling training our Transformer-based offline Q-learning method at scale, providing a complete robotic learning system.

3 Background

In RL, we learn policies that maximize the expected total reward in a Markov decision process (MDP) with states $s$, actions $a$, discount factor $\gamma \in (0, 1]$, transition function $T(s'|s,a)$ and a reward function $R(s,a)$. Actions $a$ have dimensionality $d_A$. Value-based RL approaches learn a Q-function $Q(s,a)$ representing the total discounted return $\sum_t \gamma^t R(s_t, a_t)$, with policy $\pi(a|s) = \arg\max_a Q(s,a)$. The Q-function can be learned by iteratively applying the Bellman operator [67]:
$$\mathcal{B}^* Q(s_t, a_t) = R(s_t, a_t) + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}),$$
approximated via function approximation and sampling.

Figure 3: Q-Transformer network architecture, as applied to our multi-task language-conditioned robotic control setting. The encoding of the observations is concatenated with embeddings of the previously predicted action dimensions and processed by Transformer layers. We apply a sigmoid to the Transformer output to produce Q-values (normalized to lie in the range $[0,1]$) for each of the action value bins. Finally, one-hot action vectors are constructed by taking the argmax over all bins and are fed back to the network to predict the Q-values of the next action dimensions. The language instruction is encoded with the Universal Sentence Encoder [68] and then fed to a FiLM EfficientNet [69, 70] network together with the robot camera images.

The offline RL setting assumes access to an offline dataset of transitions or episodes, produced by some unknown behavior policy $\pi_\beta(a|s)$, but does not assume the ability to perform additional online interaction during training. This is appealing for real-world robotic learning, where on-policy data collection is time-consuming.
Learning from offline datasets requires addressing distributional shift, since in general the action that maximizes $Q(s_{t+1}, a_{t+1})$ might lie outside of the data distribution. One approach to mitigate this is to add a conservative penalty [29, 52] that pushes down the Q-values $Q(s,a)$ for any action $a$ outside of the dataset, thus ensuring that the maximum value action is in-distribution.

In this work, we consider tasks with sparse rewards, where a binary reward $R \in \{0, 1\}$ (indicating success or failure) is assigned at the last time step of episodes. Although our method is not specific to this setting, such reward structure is common in robotic manipulation tasks that either succeed or fail on each episode, and can be particularly challenging for RL due to the lack of reward shaping.

4 Q-Transformer

In this section, we introduce Q-Transformer, an architecture for offline Q-learning with Transformer models, which is based on three main ingredients. First, we describe how we apply discretization and autoregression to enable TD-learning with Transformer architectures. Next, we introduce a particular conservative Q-function regularizer that enables learning from offline datasets. Lastly, we show how Monte Carlo and n-step returns can be used to improve learning efficiency.

4.1 Autoregressive Discrete Q-Learning

Using Transformers with Q-learning presents two challenges: (1) we must tokenize the inputs to effectively apply attention mechanisms, which requires discretizing the action space; (2) we must perform maximization of Q-values over discretized actions while avoiding the curse of dimensionality. Addressing these issues within the standard Q-learning framework requires new modeling decisions.

The intuition behind our autoregressive Q-learning update is to treat each action dimension as essentially a separate time step. That way, we can discretize individual dimensions (1D quantities), rather than the entire action space, avoiding the curse of dimensionality. This can be viewed as a simplified version of the scheme proposed in [65], though we apply this to high-capacity Transformer models, extend it to the offline RL setting, and scale it up to real-world robotic learning.

Let $\tau = (s_1, a_1, \ldots, s_T, a_T)$ be a trajectory of robotic experience of length $T$ from an offline dataset $\mathcal{D}$. For a given time step $t$ and the corresponding action $a_t$ in the trajectory, we define a per-dimension view of the action $a_t$. Let $a_t^{1:i}$ denote the vector of action dimensions from the first dimension $a_t^1$ until the $i$-th dimension $a_t^i$, where $i$ can range from $1$ to the total number of action dimensions, which we denote as $d_A$. Then, for a time window $w$ of state history, we define the Q-value of the action $a_t^i$ in the $i$-th dimension using an autoregressive Q-function conditioned on states from this time window $s_{t-w:t}$ and previous action dimensions for the current time step $a_t^{1:i-1}$. To train the Q-function, we define a per-dimension Bellman update. For all dimensions $i \in \{1, \ldots, d_A\}$:
$$Q(s_{t-w:t}, a_t^{1:i-1}, a_t^i) \leftarrow \begin{cases} \max_{a_t^{i+1}} Q(s_{t-w:t}, a_t^{1:i}, a_t^{i+1}) & \text{if } i \in \{1, \ldots, d_A - 1\} \\ R(s_t, a_t) + \gamma \max_{a_{t+1}^1} Q(s_{t-w+1:t+1}, a_{t+1}^1) & \text{if } i = d_A \end{cases} \quad (1)$$
The reward is only applied on the last dimension (second line in the equation), as we do not receive any reward before executing the whole action. In addition, we only discount Q-values between the time steps and keep the discount at $1.0$ for all but the last dimension within each time step, to ensure the same discounting as in the original MDP.
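To make the per-dimension backup concrete, the following NumPy sketch assembles the Eq. 1 targets for a single transition. It is an illustration, not the paper's implementation: the array shapes, the function name, and the assumption that the target network's Q-values over the relevant bins have already been computed are all ours.

```python
import numpy as np

def per_dim_q_targets(q_within, q_next_first, reward, gamma):
    """Assemble Eq. 1 targets for one transition.

    q_within: (d_A - 1, N) array; row i holds the target network's
        Q-values over the N bins of action dimension i+2 at the same
        time step, conditioned on the dataset action prefix.
    q_next_first: (N,) Q-values over the first action dimension at t+1.
    reward: scalar sparse reward R(s_t, a_t).
    gamma: discount factor, applied only across time steps.
    """
    d_A = q_within.shape[0] + 1
    targets = np.empty(d_A)
    # Dimensions 1..d_A-1: maximize over the next action dimension,
    # with no reward and no discounting within the time step.
    targets[:-1] = q_within.max(axis=1)
    # Last dimension: reward plus discounted max over the first
    # dimension of the next time step.
    targets[-1] = reward + gamma * q_next_first.max()
    return targets

# Toy usage: 8 action dimensions, 256 bins each.
rng = np.random.default_rng(0)
t = per_dim_q_targets(rng.random((7, 256)), rng.random(256),
                      reward=1.0, gamma=0.98)
print(t.shape)  # (8,)
```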
Figure 2 illustrates this process, where each yellow box represents the Q-target computation with the additional conservatism and Monte Carlo returns described in the next subsections. It should be noted that by treating each action dimension as a time step for the Bellman update, we do not change the general optimization properties of Q-learning algorithms, and the principle of Bellman optimality still holds for a given MDP, as we maximize over an action dimension given the optimality of all action dimensions in the future. We show that this approach provides a theoretically consistent way to optimize the original MDP in Appendix A, with a proof of convergence in the tabular setting in Appendix B.

4.2 Conservative Q-Learning with Transformers

Having defined a Bellman backup for running Q-learning with Transformers, we now develop a technique that enables learning from offline data, including human demonstrations and autonomously collected data. This typically requires addressing over-estimation due to distributional shift, when the Q-function for the target value is queried at an action that differs from the one on which it was trained. Conservative Q-learning (CQL) [29] minimizes the Q-function on out-of-distribution actions, which can result in Q-values that are significantly smaller than the minimal possible cumulative reward that can be attained in any trajectory. When dealing with sparse rewards $R \in \{0, 1\}$, results in [27] show that the Q-function regularized with a standard conservative objective can take on negative values, even though instantaneous rewards are all non-negative. This section presents a modified version of conservative Q-learning that addresses this issue in our problem setting.

The key insight behind our design is that, rather than minimizing the Q-values on actions not in the data, we can instead regularize these Q-values to be close to the minimal attainable cumulative reward. Concretely, denoting the minimal possible reward on the task as $R_{\min}$, and the time horizon of the task as $T$, our approach regularizes the Q-values on actions not covered by the dataset towards $R_{\min} \cdot T$, which in our problem setting is equal to $0$ (i.e., $R_{\min} = 0$). For simplicity of notation, we omit the action dimension indices in presenting the resulting objective, but remark that the training objective below is applied to Bellman backups on all action dimensions as described in the previous section. Let $\pi_\beta$ be the behavior policy that induced a given dataset $\mathcal{D}$, and let $\tilde{\pi}_\beta(a|s) = \frac{1}{Z(s)}(1.0 - \pi_\beta(a|s))$ be the distribution over all actions which have a very low density under $\pi_\beta(a|s)$. Our objective to train the Q-function is:
$$J = \underbrace{\frac{1}{2}\, \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_\beta(a|s)}\!\left[\left(Q(s,a) - \mathcal{B}^* Q^k(s,a)\right)^2\right]}_{(i):\ \text{TD error}} + \underbrace{\frac{\alpha}{2}\, \mathbb{E}_{s \sim \mathcal{D},\, a \sim \tilde{\pi}_\beta(a|s)}\!\left[\left(Q(s,a) - 0\right)^2\right]}_{(ii):\ \text{conservative regularization } \mathcal{L}_C} \quad (2)$$
where the first term $(i)$ trains the Q-function by minimizing the temporal difference error objective as defined in Eq. 1, and the second term $(ii)$ regularizes the Q-values towards the minimal possible Q-value of $0$ in expectation under the distribution of actions induced by $\tilde{\pi}_\beta$, which we denote as a conservative regularization term $\mathcal{L}_C$. Term $(ii)$ is also weighted by a multiplier $\alpha$, which modulates the strength of this conservative regularization.
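A minimal sketch of what Eq. 2 looks like for a single action dimension, under the Dirac/uniform approximation of $\pi_\beta$ and $\tilde{\pi}_\beta$ that the paper describes in Appendix D.2; the function name and argument layout are our assumptions:

```python
import numpy as np

def conservative_q_loss(q_pred, dataset_bin, target, alpha=1.0):
    """One-dimension instance of Eq. 2.

    q_pred: (N,) predicted Q-values over the N bins of this action dim.
    dataset_bin: index of the bin the dataset action falls into.
    target: scalar Bellman target for the dataset action (from Eq. 1).
    alpha: weight of the conservative regularizer.
    """
    # Term (i): TD error on the in-distribution (dataset) action.
    td_error = 0.5 * (q_pred[dataset_bin] - target) ** 2
    # Term (ii): push Q-values of all unseen bins toward 0, the minimal
    # attainable return for sparse rewards in [0, 1].
    unseen = np.delete(q_pred, dataset_bin)
    regularizer = 0.5 * np.mean(unseen ** 2)
    return td_error + alpha * regularizer
```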
We discuss the choice of $\alpha$ in our implementation in Appendix D.2 and analyze the behavior of the conservatism term in Appendix C, providing a simple characterization of how this regularizer modifies the learned Q-function in tabular settings.

4.3 Improving Learning Efficiency with Monte Carlo and n-step Returns

When the dataset contains some good trajectories (e.g., demonstrations) and some suboptimal trajectories (e.g., autonomously collected trials), utilizing Monte Carlo return-to-go estimates to accelerate Q-learning can lead to significant performance improvements, as the Monte Carlo estimates along the better trajectories lead to much faster value propagation. This has also been observed in prior work [31]. Based on this observation, we propose a simple improvement to Q-Transformer that we found to be quite effective in practice. The Monte Carlo return is defined by the cumulative reward within the offline trajectory $\tau$: $\mathrm{MC}_{t:T} = \sum_{j=t}^{T} \gamma^{j-t} R(s_j, a_j)$. This matches the Q-value of the behavior policy $\pi_\beta$, and since the optimal $Q^*(s,a)$ is larger than the Q-value of any other policy, we have $Q^*(s_t, a_t) \geq \mathrm{MC}_{t:T}$. Since the Monte Carlo return is a lower bound of the optimal Q-function, we can augment the Bellman update to take the maximum between the MC return and the current Q-value, $\max(\mathrm{MC}_{t:T}, Q(s_t, a_t))$, without changing what the Bellman update will converge to.

Although this does not change convergence, including this maximization speeds up learning (see Section 5.3). We present a hypothesis for why this occurs. In practice, Q-values for final time steps $(s_T, a_T)$ are learned first and then propagated backwards in future gradient steps. It can take multiple gradient steps for the Q-values to propagate all the way to $(s_1, a_1)$. The $\max(\mathrm{MC}, Q)$ allows us to apply useful gradients to $Q(s_1, a_1)$ at the start of training, before the Q-values have propagated.

In our experiments, we also notice that additionally employing n-step returns [71, 72] over action dimensions can significantly help with the learning speed. We pick $n$ such that the final Q-value of the last dimension of the next time step is used as the Q-target. This is because we get a new state and reward only after inferring and executing the whole action, as opposed to parts of it, meaning that intermediate rewards remain 0 all the way until the last action dimension. While this introduces bias into the Bellman backups, as is always the case with off-policy learning with n-step returns, we find in our ablation study in Section 5.3 that the detrimental effects of this bias are small, while the speedup in training is significant. This is consistent with previously reported results [72]. More details about our Transformer sequence model architecture (depicted in Figure 3), conservative Q-learning implementation, and the robot system can be found in Appendix D.
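The Monte Carlo lower-bound clipping can be written in a few lines; the sketch below, with assumed array layouts, shows the $\max(\mathrm{MC}_{t:T}, Q)$ augmentation applied to a Bellman target:

```python
import numpy as np

def mc_augmented_target(rewards, t, q_target, gamma):
    """Clip a Bellman target from below by the Monte Carlo
    return-to-go MC_{t:T} of the same offline trajectory.

    rewards: (T,) per-step rewards of the episode.
    t: current time step index.
    q_target: scalar Bellman target computed as in Eq. 1.
    """
    discounts = gamma ** np.arange(len(rewards) - t)
    mc_return = float(np.sum(discounts * rewards[t:]))
    # MC_{t:T} lower-bounds Q*(s_t, a_t), so taking the max preserves
    # the fixed point while speeding up value propagation.
    return max(mc_return, q_target)
```

For the sparse rewards considered here, the return-to-go is simply $\gamma^{T-t}$ on successful episodes and 0 on failed ones, so the clipping only activates along successful trajectories.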
(4) Can Q-Transformer be applied to large-scale real-world robotic manipulation problems?

5.1 Real-world language-conditioned manipulation evaluation

Training dataset. The offline data used in our experiments was collected with a fleet of 13 robots, and consists of a subset of the demonstration data described by Brohan et al. [1], combined with lower quality autonomously collected data. The demonstrations were collected via human teleoperation for over 700 distinct tasks, each with a separate language description. We use a maximum of 100 demonstrations per task, for a total of about 38,000 demonstrations. All of these demonstrations succeed on their respective tasks and receive a reward of 1.0. The rest of the dataset was collected by running the robots autonomously, executing policies learned via behavioral cloning.

To ensure a fair comparison between Q-Transformer and imitation learning methods, we discard all successful episodes in the autonomously collected data when we train our method, to ensure that by including the autonomous data the Q-Transformer does not get to observe more successful trials than the imitation learning baselines. This leaves us with about 20,000 additional autonomously collected failed episodes, each with a reward of 0.0, for a dataset size of about 58,000 episodes. The episodes are on average 35 time steps in length. Examples of the tasks are shown in Figure 4.

Task category            # of tasks   Q-T   DT    IQL   RT-1
drawer pick and place    18           64%   49%   11%   17%
open and close drawer    7            33%   11%   11%   0%
move object near target  47           71%   40%   60%   58%
Average success rate                  56%   33%   27%   25%

Figure 4: Left: Real-world manipulation tasks. Right: Real-world performance comparison. RT-1 [1] is imitation learning on demonstrations. Q-Transformer (Q-T), Decision Transformer (DT) [32], and Implicit Q-learning (IQL) [40] learn from both demonstrations and autonomous data.

Performance evaluation. To evaluate how well Q-Transformer can perform when learning from real-world offline datasets while effectively incorporating autonomously collected failed episodes, we evaluate Q-Transformer on 72 unique manipulation tasks spanning a variety of different skills, such as "drawer pick and place", "open and close drawer", and "move object near target", consisting of 18, 7, and 47 unique task instructions respectively to specify different object combinations and drawers. As such, the average success rate in Figure 4 is the average over 72 tasks.

Since each task in the training set only has a maximum of 100 demonstrations, we observe from Figure 4 that an imitation learning algorithm like RT-1 [1], which also uses a similar Transformer architecture, struggles to obtain good performance when learning from only the limited pool of successful robot demonstrations. Existing offline RL methods, such as IQL [40] and a Transformer-based method such as Decision Transformer [32], can learn from both successful demonstrations and failed episodes, and show better performance compared to RT-1, though by a relatively small margin. Q-Transformer has the highest success rate and outperforms both the behavior cloning baseline (RT-1) and the offline RL baselines (Decision Transformer, IQL), exceeding the average performance of the best-performing prior method by about 70%. This demonstrates that Q-Transformer can effectively improve upon human demonstrations using autonomously collected sub-optimal data.

Appendix G also shows that Q-Transformer can be successfully applied in combination with a recently proposed language task planner [8] to perform both affordance estimation and robot action execution. Q-Transformer outperforms prior methods for planning and executing long-horizon tasks.

5.2 Benchmarking in simulation

Figure 5: Performance comparison on a simulated picking task (success rate over training steps for QT-Opt CQL, AW-Opt, IQL, Q-Transformer (ours), Decision Transformer, and RT-1 BC).

In this section, we evaluate Q-Transformer on a challenging simulated offline RL task that requires incorporating sub-optimal data to solve the task.
In particular, we use a visual simulated picking task depicted in Figure 5, where we have a small amount of position-controlled human demonstrations (8% of the data). The demonstrations are replayed with noise to generate more trajectories (92% of the data). Figure 5 shows a comparison to several offline algorithms, such as QT-Opt with CQL [11, 29], IQL [40], AW-Opt [73], and Decision Transformer [32], along with RT-1 using behavioral cloning [1] on demonstrations only. As we can see, algorithms that can effectively perform TD-learning to combine optimal and sub-optimal data (such as Q-Transformer and QT-Opt) perform better than others. BC with RT-1 is not able to take advantage of sub-optimal data. Decision Transformer is trained on both demonstrations and sub-optimal data, but is not able to leverage the noisy data for policy improvement and does not end up performing as well as our method. Although IQL and AW-Opt perform TD-learning, the actor remains too close to the data and cannot fully leverage the sub-optimal data. Q-Transformer is able to both bootstrap the policy from demonstrations and also quickly improve through propagating information with TD-learning. We also analyze the statistical significance of the results by training with multiple random seeds in Appendix F.

5.3 Ablations

We perform a series of ablations of our method's design choices in simulation, with results presented in Figure 6 (left). First, we demonstrate that our choice of conservatism for Q-Transformer performs better than the standard CQL regularizer, which corresponds to a softmax layer on top of the Q-function outputs with a cross-entropy loss between the dataset action and the output of this softmax [29]. This regularizer plays a similar role to the one we propose, decreasing the Q-values for out-of-distribution actions and staying closer to the behavior policy.

As we see in Figure 6 (left), performance with softmax conservatism drops to around the fraction of demonstration episodes (8%). This suggests a collapse to the behavior policy, as the conservatism penalty becomes too good at constraining to the behavior policy distribution. Due to the nature of the softmax, pushing Q-values down for unobserved actions also pushes Q-values up for the observed actions, and we theorize this makes it difficult to keep Q-values low for sub-optimal in-distribution actions that fail to achieve high reward. Next, we show that using conservatism is important. When removing conservatism entirely, we observe that performance collapses. Actions that are rare in the dataset will have overestimated Q-values, since they are not trained by the offline Q-learning procedure. The resulting overestimated values will propagate and collapse the entire Q-function, as described in prior work [38]. Finally, we ablate the Monte Carlo returns and again observe performance collapse. This demonstrates that adding information about the sampled future returns significantly helps in bootstrapping the training of large architectures such as Transformers.
Top Right: Then-step return ver-sion of our method reaches similar performance to the standard version with 4 times fewer steps,indicating that the added bias from n-step returns is small compared to the gain in training speed.Usingn-step return also leads to better performance on tasks that have longer horizon, e.g. moveobject near target .Bottom Right: Success rates on real world task categories with a larger dataset.We also ablate the choice of n-step returns from the Section 4.3 on real robots and observe that usingn-step returns leads to a significantly faster training speed as measured by the number of gradientsteps and wall clock time compared to using 1-step returns, with a minimal loss in performance, asshown in Figure 6 (top right).5.4 Massively scaling up Q-TransformerThe experiments in the previous section used a large dataset that included successful demonstrationsand failed autonomous trials, comparable in size to some of the largest prior experiments that utilizeddemonstration data [74, 15, 58]. We also carry out a preliminary experiment with a much largerdataset to investigate the performance of Q-Transformer as we scale up the dataset size.This experiment includes all of the data collected with 13 robots and comprises of the demonstrationsused by RT-1 [1] and successful autonomous episodes, corresponding to about 115,000 successfultrials, and an additional 185,000 failed autonomous episodes, for a total dataset size of about 300,000trials. Model architecture and hyperparameters were kept exactly the same, as the computational costof the experiment made further hyperparameter tuning prohibitive (in fact, we only train the modelsonce). Note that with this number of successful demonstrations, even standard imitation learningwith the RT-1 architecture already performs very well, attaining 82% success rate. However, asshown in Figure 6 (bottom right), Q-Transformer was able to improve even on this very high number.This experiment demonstrates that Q-Transformer can continue to scale to extremely large datasetsizes, and continues to outperform both imitation learning with RT-1 and Decision Transformer.6 Limitations and DiscussionIn this paper, we introduced the Q-Transformer, an architecture for offline reinforcement learningwith high-capacity Transformer models that is suitable for large-scale multi-task robotic RL. Ourframework does have several limitations. First, we focus on sparse binary reward tasks correspond-ing to success or failure for each trial. While this setup is reasonable for a broad range of episodicrobotic manipulation problems, it is not universal, and we expect that Q-Transformer could be ex-tended to more general settings as well in the future.Second, the per-dimension action discretization scheme that we employ may become more cumber-some in higher dimensions (e.g., controlling a humanoid robot), as the sequence length and inferencetime for our model increases with action dimensionality. Although n-step returns mitigate this toa degree, the length of the sequences still increases with action dimensionality. For such higher-dimensional action space, adaptive discretization methods might also be employed, for example bytraining a discrete autoencoder model and reducing representation dimensionality. Uniform actiondiscretization can also pose problems for manipulation tasks that require a large range of motiongranularities, e.g. both coarse and fine movements. 
In this case, adaptive discretization based on the distribution of actions could be used for representing both types of motions.

Finally, in this work we concentrated on the offline RL setting. However, extending Q-Transformer to online finetuning is an exciting direction for future work that would enable even more effective autonomous improvement of complex robotic policies.

References

[1] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[2] Y. Jiang, A. Gupta, Z. Zhang, G. Wang, Y. Dou, Y. Chen, L. Fei-Fei, A. Anandkumar, Y. Zhu, and L. Fan. Vima: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094, 2022.
[3] A. Gupta, L. Fan, S. Ganguli, and L. Fei-Fei. Metamorph: Learning universal controllers with transformers. arXiv preprint arXiv:2203.11931, 2022.
[4] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. arXiv preprint arXiv:2209.05451, 2022.
[5] N. M. M. Shafiullah, Z. J. Cui, A. Altanzaya, and L. Pinto. Behavior transformers: Cloning k modes with one stone. arXiv preprint arXiv:2206.11251, 2022.
[6] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
[7] M. Shridhar, L. Manuelli, and D. Fox. Cliport: What and where pathways for robotic manipulation. In Proceedings of the 5th Conference on Robot Learning (CoRL), 2021.
[8] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[9] D.-A. Huang, Y.-W. Chao, C. Paxton, X. Deng, L. Fei-Fei, J. C. Niebles, A. Garg, and D. Fox. Motion reasoning for goal-based imitation learning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 4878–4884. IEEE, 2020.
[10] D. Shah, A. Sridhar, A. Bhorkar, N. Hirose, and S. Levine. Gnm: A general navigation model to drive any robot. arXiv preprint arXiv:2210.03370, 2022.
[11] D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, et al. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, pages 651–673. PMLR, 2018.
[12] A. X. Lee, C. M. Devin, Y. Zhou, T. Lampe, K. Bousmalis, J. T. Springenberg, A. Byravan, A. Abdolmaleki, N. Gileadi, D. Khosid, et al. Beyond pick-and-place: Tackling robotic stacking of diverse shapes. In 5th Annual Conference on Robot Learning, 2021.
[13] S. Cabi, S. G. Colmenarejo, A. Novikov, K. Konyushkova, S. Reed, R. Jeong, K. Zolna, Y. Aytar, D. Budden, M. Vecerik, et al. Scaling data-driven robotics with reward sketching and batch reinforcement learning. arXiv preprint arXiv:1909.12200, 2019.
[14] D. Kalashnikov, J. Varley, Y. Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, and K. Hausman. MT-opt: Continuous multi-task robotic reinforcement learning at scale. arXiv, 2021.
[15] A. Kumar, A. Singh, F. Ebert, Y. Yang, C. Finn, and S. Levine. Pre-training for robots: Offline RL enables learning new tasks from a handful of trials. arXiv preprint arXiv:2210.05178, 2022.
[16] L. P. Kaelbling. Learning to achieve goals. In R.
Bajcsy, editor, IJCAI, pages 1094–1099. Morgan Kaufmann, 1993. ISBN 1-55860-300-X.
[17] Y. Chebotar, K. Hausman, Y. Lu, T. Xiao, D. Kalashnikov, J. Varley, A. Irpan, B. Eysenbach, R. Julian, C. Finn, et al. Actionable models: Unsupervised offline reinforcement learning of robotic skills. arXiv preprint arXiv:2104.07749, 2021.
[18] T. Schaul, D. Horgan, K. Gregor, and D. Silver. Universal value function approximators. In F. Bach and D. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1312–1320, Lille, France, 07–09 Jul 2015. PMLR.
[19] M. Andrychowicz, D. Crow, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, P. Abbeel, and W. Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pages 5048–5058, 2017.
[20] K. Fang, A. Toshev, L. Fei-Fei, and S. Savarese. Scene memory transformer for embodied agents in long-horizon tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 538–547, 2019.
[21] P. Anderson, A. X. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, and A. R. Zamir. On evaluation of embodied navigation agents. CoRR, abs/1807.06757, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1807.html#abs-1807-06757.
[22] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sünderhauf, I. D. Reid, S. Gould, and A. van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. CoRR, abs/1711.07280, 2017. URL http://arxiv.org/abs/1711.07280.
[23] P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. J. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, D. Kumaran, and R. Hadsell. Learning to navigate in complex environments. CoRR, abs/1611.03673, 2016. URL http://dblp.uni-trier.de/db/journals/corr/corr1611.html#MirowskiPVSBBDG16.
[24] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning. In ICRA, pages 3357–3364. IEEE, 2017. ISBN 978-1-5090-4633-1. URL http://dblp.uni-trier.de/db/conf/icra/icra2017.html#ZhuMKLGFF17.
[25] T. Yu, D. Quillen, Z. He, R. Julian, K. Hausman, C. Finn, and S. Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094–1100. PMLR, 2020.
[26] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4rl: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
[27] A. Kumar, A. Singh, S. Tian, C. Finn, and S. Levine. A workflow for offline model-free robotic reinforcement learning. arXiv preprint arXiv:2109.10813, 2021.
[28] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, 1st edition, 1998. ISBN 0262193981.
[29] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020.
[30] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
[31] A. Wilcox, A. Balakrishna, J. Dedieu, W. Benslimane, D. S. Brown, and K. Goldberg. Monte carlo augmented actor-critic for sparse reward deep reinforcement learning from suboptimal demonstrations. In A. H. Oh, A.
Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=FLzTj4ia8BN.
[32] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling. Advances in Neural Information Processing Systems, 34:15084–15097, 2021.
[33] K.-H. Lee, O. Nachum, M. Yang, L. Lee, D. Freeman, W. Xu, S. Guadarrama, I. Fischer, E. Jang, H. Michalewski, et al. Multi-game decision transformers. arXiv preprint arXiv:2205.15241, 2022.
[34] N. Jaques, A. Ghandeharioun, J. H. Shen, C. Ferguson, A. Lapedriza, N. Jones, S. Gu, and R. Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
[35] Y. Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[36] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.
[37] N. Y. Siegel, J. T. Springenberg, F. Berkenkamp, A. Abdolmaleki, M. Neunert, T. Lampe, R. Hafner, and M. Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
[38] A. Kumar, J. Fu, M. Soh, G. Tucker, and S. Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pages 11761–11771, 2019.
[39] I. Kostrikov, J. Tompson, R. Fergus, and O. Nachum. Offline reinforcement learning with fisher divergence critic regularization. arXiv preprint arXiv:2103.08050, 2021.
[40] I. Kostrikov, A. Nair, and S. Levine. Offline reinforcement learning with implicit q-learning. arXiv preprint arXiv:2110.06169, 2021.
[41] Z. Wang, A. Novikov, K. Żołna, J. T. Springenberg, S. Reed, B. Shahriari, N. Siegel, J. Merel, C. Gulcehre, N. Heess, et al. Critic regularized regression. arXiv preprint arXiv:2006.15134, 2020.
[42] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning. arXiv preprint arXiv:2106.06860, 2021.
[43] X. Chen, Z. Zhou, Z. Wang, C. Wang, Y. Wu, and K. Ross. Bail: Best-action imitation learning for batch deep reinforcement learning, 2019. URL https://arxiv.org/abs/1910.12179.
[44] H. Furuta, Y. Matsuo, and S. S. Gu. Generalized decision transformer for offline hindsight information matching. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=CAjxVodl_v.
[45] Y. Jang, J. Lee, and K.-E. Kim. GPT-critic: Offline reinforcement learning for end-to-end task-oriented dialogue systems. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=qaxhBG1UUaS.
[46] L. Meng, M. Wen, Y. Yang, C. Le, X. Li, H. Zhang, Y. Wen, W. Zhang, J. Wang, and B. Xu. Offline pre-trained multi-agent decision transformer, 2022. URL https://openreview.net/forum?id=W08IqLMlMer.
[47] P. Daoudi, M. Barlier, L. D. Santos, and A. Virmaux. Density estimation for conservative q-learning, 2022. URL https://openreview.net/forum?id=liV-Re74fK.
[48] L. Liu, Z. Tang, L. Li, and D. Luo. Robust imitation learning from corrupted demonstrations, 2022. URL https://openreview.net/forum?id=UECzHrGio7i.
[49] A. Mandlekar, D. Xu, J. Wong, S. Nasiriany, C. Wang, R. Kulkarni, F.-F. Li, S. Savarese, Y. Zhu, and R. Martín-Martín.
What matters in learning from offline human demonstrations for robot manipulation. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=JrsfBJtDFdI.
[50] A. Kumar, J. Hong, A. Singh, and S. Levine. When should we prefer offline reinforcement learning over behavioral cloning? arXiv preprint arXiv:2204.05618, 2022.
[51] A. Singh, A. Kumar, Q. Vuong, Y. Chebotar, and S. Levine. Offline RL with realistic datasets: Heteroskedasticity and support constraints. arXiv preprint arXiv:2211.01052, 2022.
[52] Q. Vuong, A. Kumar, S. Levine, and Y. Chebotar. DASCO: Dual-generator adversarial support constrained offline reinforcement learning. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=jBTQGGy9qA-.
[53] J. Li, X. Zhan, H. Xu, X. Zhu, J. Liu, and Y.-Q. Zhang. Distance-sensitive offline reinforcement learning. arXiv preprint arXiv:2205.11027, 2022.
[54] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[55] Y. Zhang and J. Chai. Hierarchical task learning from language instructions with unified transformers and self-monitoring. arXiv preprint arXiv:2106.03427, 2021.
[56] A. Pashevich, C. Schmid, and C. Sun. Episodic transformer for vision-and-language navigation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15942–15952, 2021.
[57] A. Silva, N. Moorman, W. Silva, Z. Zaidi, N. Gopalan, and M. Gombolay. Lancon-learn: Learning with language to enable generalization in multi-task manipulation. IEEE Robotics and Automation Letters, 7(2):1635–1642, 2021.
[58] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. Bc-z: Zero-shot task generalization with robotic imitation learning. In Conference on Robot Learning, pages 991–1002. PMLR, 2021.
[59] S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning, pages 1303–1315. PMLR, 2022.
[60] M. Janner, Q. Li, and S. Levine. Reinforcement learning as one big sequence modeling problem. In ICML 2021 Workshop on Unsupervised Reinforcement Learning, 2021.
[61] A. Kumar, X. B. Peng, and S. Levine. Reward-conditioned policies. arXiv preprint arXiv:1912.13465, 2019.
[62] R. K. Srivastava, P. Shyam, F. Mutz, W. Jaśkowski, and J. Schmidhuber. Training agents using upside-down reinforcement learning. arXiv preprint arXiv:1912.02877, 2019.
[63] D. Brandfonbrener, A. Bietti, J. Buckman, R. Laroche, and J. Bruna. When does return-conditioned supervised learning work for offline reinforcement learning? arXiv preprint arXiv:2206.01079, 2022.
[64] T. Yamagata, A. Khalil, and R. Santos-Rodriguez. Q-learning decision transformer: Leveraging dynamic programming for conditional sequence modelling in offline RL, 2023.
[65] L. Metz, J. Ibarz, N. Jaitly, and J. Davidson. Discrete sequential prediction of continuous actions for deep RL. CoRR, abs/1705.05035, 2017. URL http://dblp.uni-trier.de/db/journals/corr/corr1705.html#MetzIJD17.
[66] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[67] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. Second edition, 2018.
[68] D. Cer, Y. Yang, S.-y. Kong, N. Hua, N. Limtiaco, R. S. John, N. Constant, M. Guajardo-Cespedes, S. Yuan, C.
Tar, et al. Universal sentence encoder. arXiv preprint arXiv:1803.11175, 2018.
[69] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
[70] M. Tan and Q. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019.
[71] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9–44, 1988.
[72] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[73] Y. Lu, K. Hausman, Y. Chebotar, M. Yan, E. Jang, A. Herzog, T. Xiao, A. Irpan, M. Khansari, D. Kalashnikov, and S. Levine. Aw-opt: Learning robotic skills with imitation and reinforcement at scale. In 5th Annual Conference on Robot Learning, 2021.
[74] S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In Conference on Robot Learning, pages 2071–2084. PMLR, 2021.
[75] C. J. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
[76] T. Xiao, E. Jang, D. Kalashnikov, S. Levine, J. Ibarz, K. Hausman, and A. Herzog. Thinking while moving: Deep reinforcement learning with concurrent control. arXiv preprint arXiv:2004.06089, 2020.
[77] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. Palm: Scaling language modeling with pathways, 2022. URL https://arxiv.org/abs/2204.02311.

A Proof of MDP optimization consistency

To show that transforming the MDP into a per-action-dimension form still ensures optimization of the original MDP, we show that optimizing the Q-function for each action dimension is equivalent to optimizing the Q-function for the full action.

If we consider the full action $a^{1:d_A}$ and that we switch to the state $s'$ at the next time step, the Q-function for optimizing over the full action MDP would be:
$$\max_{a^{1:d_A}} Q(s, a^{1:d_A}) = \max_{a^{1:d_A}} \left[ R(s, a^{1:d_A}) + \gamma \max_{a'^{1:d_A}} Q(s', a'^{1:d_A}) \right] = R(s, a^{1:d_A}) + \gamma \max_{a'^{1:d_A}} Q(s', a'^{1:d_A}), \quad (3)$$
where $R(s, a^{1:d_A})$ is the reward we get after executing the full action.

The optimization over each action dimension using our Bellman update is:
$$\begin{aligned}
\max_{a^i} Q(s, a^{1:i-1}, a^i) &= \max_{a^i} \mathcal{B}^* Q(s, a^{1:i-1}, a^i) \\
&= \max_{a^i} \max_{a^{i+1}} Q(s, a^{1:i-1}, a^i, a^{i+1}) \\
&= \max_{a^i} \max_{a^{i+1}} \mathcal{B}^* Q(s, a^{1:i-1}, a^i, a^{i+1}) \\
&= \max_{a^i} \max_{a^{i+1}} \max_{a^{i+2}} Q(s, a^{1:i-1}, a^i, a^{i+1}, a^{i+2}) \\
&\;\;\vdots \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^1} Q(s', a^1) \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^1} \mathcal{B}^* Q(s', a^1) \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^1} \max_{a^2} Q(s', a^1, a^2) \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^1, a^2} \mathcal{B}^* Q(s', a^1, a^2) \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^1, a^2} \max_{a^3} Q(s', a^1, a^2, a^3) \\
&\;\;\vdots \\
&= R(s, a^{1:d_A}) + \gamma \max_{a^{1:d_A}} Q(s', a^{1:d_A}),
\end{aligned}$$
which optimizes the original full action MDP as in Eq. 3.
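The key identity behind this telescoping argument, that maximizing one dimension at a time recovers the joint maximum, can be checked numerically with a toy tabular Q-function (a sketch with illustrative shapes, not part of the paper):

```python
import numpy as np

# Toy check: with two action dimensions, maximizing sequentially over
# a^1 then a^2 recovers the joint max used by the full-action Bellman
# backup in Eq. 3.
rng = np.random.default_rng(1)
q = rng.random((5, 5))           # Q(s, a^1, a^2) with 5 bins per dimension
joint = q.max()                  # max over the full action a^{1:2}
sequential = q.max(axis=1).max() # max_{a^1} [ max_{a^2} Q(s, a^1, a^2) ]
assert np.isclose(joint, sequential)
```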
B Proof of convergence

Convergence of Q-learning has been shown in the past [67, 75]. Below we demonstrate that the per-action-dimension Q-function converges as well, by providing a proof almost identical to the standard Q-learning convergence proof, but extended to account for the per-action-dimension maximization.

Let $d_A$ be the dimensionality of the action space, and let $\bar{a}$ indicate a possible sequence of action dimensions, whose dimension is not necessarily equal to the dimension of the action space. That is:
$$\bar{a} \in \{a^{1:i},\ \forall i \leq d_A\}$$
To prove convergence, we can demonstrate that the Bellman operator applied to the per-action-dimension Q-function is a contraction, i.e.:
$$\|\mathcal{B}^* Q_1(s, \bar{a}) - \mathcal{B}^* Q_2(s, \bar{a})\|_\infty \leq c\, \|Q_1(s, \bar{a}) - Q_2(s, \bar{a})\|_\infty,$$
where
$$\mathcal{B}^* Q(s, \bar{a}) = \begin{cases} R(s, \bar{a}) + \max_{a'} Q(s, \bar{a}, a') & \text{if the dimension of } \bar{a} \text{ is less than } d_A \\ R(s, \bar{a}) + \gamma \max_{a'} \mathbb{E}_{s'}[Q(s', a')] & \text{if the dimension of } \bar{a} \text{ is equal to } d_A \end{cases}$$
$a'$ is the next action dimension following the sequence $\bar{a}$, $s'$ is the next state of the MDP, $\gamma$ is the discounting factor, and $0 \leq c \leq 1$.

Proof: We can show that this is the case as follows:

Case 1: For an action sequence whose dimension is less than the dimension of the action space.
$$\begin{aligned}
\mathcal{B}^* Q_1(s, \bar{a}) - \mathcal{B}^* Q_2(s, \bar{a}) &= R(s, \bar{a}) + \max_{a'} Q_1(s, \bar{a}, a') - R(s, \bar{a}) - \max_{a'} Q_2(s, \bar{a}, a') \\
&= \max_{a'} \left[ Q_1(s, \bar{a}, a') - Q_2(s, \bar{a}, a') \right] \\
&\leq \sup_{s, \bar{a}} \left[ Q_1(s, \bar{a}) - Q_2(s, \bar{a}) \right] \\
\Longrightarrow \|\mathcal{B}^* Q_1(s, \bar{a}) - \mathcal{B}^* Q_2(s, \bar{a})\|_\infty &\leq \|Q_1(s, \bar{a}) - Q_2(s, \bar{a})\|_\infty
\end{aligned}$$
where $\sup_{s, \bar{a}}$ is the supremum over all action sequences, with $0 \leq \gamma \leq 1$ and $\|f\|_\infty = \sup_x [f(x)]$.

Case 2: For an action sequence whose dimension is equal to the dimension of the action space.
$$\begin{aligned}
\mathcal{B}^* Q_1(s, \bar{a}) - \mathcal{B}^* Q_2(s, \bar{a}) &= R(s, \bar{a}) + \gamma \max_{a'} \mathbb{E}_{s'}[Q_1(s', a')] - R(s, \bar{a}) - \gamma \max_{a'} \mathbb{E}_{s'}[Q_2(s', a')] \\
&= \gamma \max_{a'} \mathbb{E}_{s'}\left[ Q_1(s', a') - Q_2(s', a') \right] \\
&\leq \gamma \sup_{s, \bar{a}} \left[ Q_1(s, \bar{a}) - Q_2(s, \bar{a}) \right] \\
\Longrightarrow \|\mathcal{B}^* Q_1(s, \bar{a}) - \mathcal{B}^* Q_2(s, \bar{a})\|_\infty &\leq \gamma\, \|Q_1(s, \bar{a}) - Q_2(s, \bar{a})\|_\infty
\end{aligned}$$

C Analysis of the conservatism term

With the goal of understanding the behavior of our training procedure, we theoretically analyze the solution obtained by Eq. 2 for the simpler cases when $Q$ is represented as a table, and when the objective in Eq. 2 can be minimized exactly. We derive the minimizer of the objective in Eq. 2 by differentiating $J$ with respect to $Q$:
$$\forall s, a, k:\quad \frac{dJ}{dQ(s,a)} = \pi_\beta(a|s)\left(Q(s,a) - \mathcal{B}^* Q^k(s,a)\right) + \alpha\, \tilde{\pi}_\beta(a|s)\, Q(s,a) = 0$$
$$Q(s,a)\left(\pi_\beta(a|s) + \alpha\, \tilde{\pi}_\beta(a|s)\right) = \pi_\beta(a|s)\, \mathcal{B}^* Q^k(s,a)$$
$$Q^{k+1}(s,a) = \underbrace{\frac{\pi_\beta(a|s)}{\pi_\beta(a|s) + \alpha\, \tilde{\pi}_\beta(a|s)}}_{:=\, m(s,a)}\, \mathcal{B}^* Q^k(s,a) \quad (4)$$
Eq. 4 implies that training with the objective in Eq. 2 performs a weighted Bellman backup: unlike the standard Bellman backup, training with Eq. 2 multiplies large Q-value targets by a weight $m(s,a)$. This weight $m(s,a)$ takes values between 0 and 1, with larger values close to 1 for in-distribution actions where $(s,a) \in \mathcal{D}$, and very small values close to 0 for out-of-distribution actions $a$ at any state $s$ (i.e., actions where $\pi_\beta(a|s)$ is small). Thus, the Bellman backup induced via Eq. 4 should effectively prevent over-estimation of Q-values for unseen actions.

D Q-Transformer Architecture & System

In this section, we describe the architecture of Q-Transformer as well as the important implementation and system details that make it an effective Q-learning algorithm for real robots.

D.1 Transformer sequence model architecture

Our neural network architecture is shown in Figure 3. The architecture is derived from the RT-1 design [1], adapted to accommodate the Q-Transformer framework, and consists of a Transformer backbone that reads in images via a convolutional encoder followed by tokenization. Since we apply Q-Transformer to a multi-task robotic manipulation problem where each task is specified by a natural language instruction, we first embed the natural language instruction into an embedding vector via the Universal Sentence Encoder [68]. The embedding vector and images from the robot camera are then converted into a sequence of input tokens via a FiLM EfficientNet [69, 70]. In the standard RT-1 architecture [1], the robot action space is discretized and the Transformer sequence model outputs the logits for the discrete action bins per dimension and per time step. In this work, we extend the network architecture to use Q-learning by applying a sigmoid activation to the output values for each action, and interpreting the resulting output after the sigmoid as Q-values. This representation is particularly suitable for tasks with sparse per-episode rewards $R \in [0,1]$, since the Q-values may be interpreted as probabilities of task success and should always lie in the range $[0,1]$. Note that unlike the standard softmax, this interpretation of Q-values does not prescribe normalizing across actions (i.e., each action output can take on any value in $[0,1]$).
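Combining the sigmoid Q-head with the feed-back of one-hot action vectors (Figure 3), greedy action selection at inference time can be sketched as follows. The Transformer and observation encoding are abstracted into a stand-in callable; all names here are illustrative assumptions rather than the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_action(q_logits_fn, d_A=8, N=256):
    """Greedy autoregressive action decoding, one dimension at a time.

    q_logits_fn: hypothetical callable mapping the list of already
        chosen one-hot action dimensions to (N,) logits for the next
        dimension (standing in for the Transformer plus observation
        and language encodings).
    """
    chosen = []
    for _ in range(d_A):
        q_values = sigmoid(q_logits_fn(chosen))  # Q-values in [0, 1]
        bin_idx = int(np.argmax(q_values))
        one_hot = np.zeros(N)
        one_hot[bin_idx] = 1.0
        chosen.append(one_hot)  # fed back for the next dimension
    return np.stack(chosen)     # (d_A, N) one-hot action

# Toy usage with a random stand-in for the network.
rng = np.random.default_rng(2)
action = decode_action(lambda prefix: rng.standard_normal(256))
print(action.shape)  # (8, 256)
```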
Since our robotic system, described in Section D.3, has 8-dimensional actions, we end up with 8 dimensions per time step and discretize each one into $N = 256$ value bins. Our reward function is a sparse reward that assigns value 1.0 at the last step of an episode if the episode is successful and 0.0 otherwise. We use a discount rate $\gamma = 0.98$. As is common in deep RL, we use a target network to estimate target Q-values $Q^k$, using an exponential moving average of the Q-network weights to update the target network. The averaging constant is set to 0.01.

D.2 Conservative Q-learning implementation

The conservatism penalty in Section 4.2 requires estimating expectations under $\pi_\beta(a|s)$ and $\tilde{\pi}_\beta(a|s) \propto (1 - \pi_\beta(a|s))$, with the latter being especially non-trivial to estimate. We employ a simple and crude approximation that we found to work well in practice, replacing $\pi_\beta(a|s)$ with the empirical distribution corresponding, for each sampled state-action tuple $(s_j, a_j) \in \mathcal{D}$, to a Dirac delta centered on $a_j$, such that $\pi_\beta(a|s_j) = \delta(a = a_j)$. This results in a simple expression for $\tilde{\pi}_\beta(a|s_j)$ corresponding to the uniform distribution over all other actions, such that $\tilde{\pi}_\beta(a|s_j) \propto \delta(a \neq a_j)$. After discretizing the actions, there are $N - 1$ bins per dimension to exhaustively iterate over when computing the conservatism term in Eq. 2, which is the same as taking the average over targets for all unseen action values. In our experiments, we find that simply setting the conservatism weight to $\alpha = 1.0$ worked best, without additional tuning.

D.3 Robot system overview

The robot that we use in this work is a mobile manipulator with a 7-DOF arm and a 2-jaw parallel gripper, attached to a mobile base with a head-mounted RGB camera, illustrated in Figure 1. The RGB camera provides a 640x512 RGB image, which is downsampled to 320x256 before being consumed by the Q-Transformer. See Figure 4 for images from the robot camera view. The learned policy is set up to control the arm and the gripper of the robot. Our action space consists of 8 dimensions: 3D position, 3D orientation, gripper closure command, and an additional dimension indicating whether the episode should terminate, which the policy must trigger to receive a positive reward upon successful task completion. Position and orientation are relative to the current pose, while the gripper command is the absolute closedness fraction, ranging from fully open to fully closed. Orientation is represented via axis-angles, and all actions except whether to terminate are continuous actions discretized over their full action range in 256 bins. The termination action is binary, but we pad it to be the same size as the other action dimensions to avoid any issues with unequal weights.
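The uniform per-dimension binning described above amounts to a simple map between continuous action values and bin indices. A minimal sketch, assuming per-dimension action bounds are known (the bounds below are illustrative, not the robot's real limits):

```python
import numpy as np

def discretize_action(action, low, high, N=256):
    """Map a continuous action vector to per-dimension bin indices.

    action, low, high: (d_A,) arrays; low/high are the per-dimension
    action ranges.
    """
    frac = (action - low) / (high - low)
    return np.clip((frac * N).astype(int), 0, N - 1)

def undiscretize_action(bins, low, high, N=256):
    """Return the center of each selected bin as the continuous action."""
    return low + (bins + 0.5) / N * (high - low)

# Toy usage for an 8-D action space normalized to [-1, 1].
low, high = -np.ones(8), np.ones(8)
bins = discretize_action(np.zeros(8), low, high)
print(bins)  # all mid-range bins (128)
```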
The policy operates at 3 Hz, with actions executed asynchronously [76].

Algorithm 1: Temporal difference error and loss computation for one action dimension $i$ at timestep $t$, $a^i_t$.

Input: Sequence of states in a time window of size $w$, $s_{t-w:t}$.
Input: Language embedding of the task instruction, $l$.
Input: The state at timestep $t+1$, $s_{t+1}$.
Input: Dataset action up to dimension $i$, $\{{}^{\mathcal{D}}a^j_t\}_{j=0}^{i}$.
Output: The loss to optimize Q-Transformer.

$Q_{\text{targ}} \leftarrow$ compute the maximum Q-value over the next action dimension using Eq. 1
// Compute the maximum between the Q-target and the Monte Carlo return.
$Q_{\text{targ}} \leftarrow \max(\text{MC},\, Q_{\text{targ}})$
// Compute the temporal difference error.
$\text{TDError} = \frac{1}{2}\left(\text{Q-Transformer}(l, s_{t-w:t}, \{a^j\}_{j=1}^{i}) - Q_{\text{targ}}\right)^2$
// Compute the conservative regularizer. The sum is over all action bins not equal to the tokenized dataset action; $N$ is the number of discretization bins.
$\text{Reg} = \frac{1}{2(N-1)} \sum_{a \neq {}^{\mathcal{D}}a^i_t} \text{Q-Transformer}(l, s_{t-w:t}, \{a^j\}_{j=1}^{i-1} \cup \{a\})^2$
// Compute the loss function.
$L = \text{TDError} + \alpha\,\text{Reg}$
Return $L$ as the loss function to optimize Q-Transformer with.

E Pseudo-code

Algorithm 1 shows the loss computation for training each action dimension of the Q-Transformer. We first use Eq. 1 to compute the maximum Q-values over the next action dimensions. Then we compute the Q-target for the given dataset action by using the Bellman update with an additional maximization over the Monte-Carlo return and the predicted maximum Q-value at the next time step. The TD-error is then computed using the mean squared error. Finally, we set a target of 0 for all discretized action bins except the dataset action and add the averaged mean squared error over these dimensions to the TD-error, which results in the total loss $L$.
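For concreteness, the loss in Algorithm 1 can be rendered in a few lines of Python. The following is a minimal numpy sketch, assuming `q_values` holds the network's sigmoid outputs over the $N$ bins of dimension $i$; the sampled inputs are illustrative, not values from a real training run.

```python
import numpy as np

def q_transformer_loss(q_values, dataset_bin, q_target, mc_return, alpha=1.0):
    """TD error on the dataset action bin plus the conservative regularizer."""
    n = q_values.shape[0]                      # N discretization bins
    target = max(mc_return, q_target)          # max(MC, Q_targ) from Algorithm 1
    td_error = 0.5 * (q_values[dataset_bin] - target) ** 2
    mask = np.ones(n, dtype=bool)
    mask[dataset_bin] = False                  # every bin except the dataset action
    reg = 0.5 / (n - 1) * np.sum(q_values[mask] ** 2)  # regress unseen bins to 0
    return td_error + alpha * reg

q = np.random.uniform(0.0, 1.0, size=256)      # sigmoid outputs, one per bin
print(q_transformer_loss(q, dataset_bin=42, q_target=0.7, mc_return=0.9))
```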
F Running training for multiple random seeds

Figure 7: Mean and variance of Q-Transformer and RT-1 performance in simulation when running the training for 5 different random seeds.

In addition to performing a large number of evaluations, we also analyze the statistical significance of our learning results by running our training of Q-Transformer and RT-1 on multiple seeds in simulation. In particular, we run the training for 5 random seeds in Figure 7. As we can see, Q-Transformer retains its improved performance across the distribution of random seeds.

G Q-Transformer value function with a language planner experiments

Figure 8: Qualitative comparisons of Q-values from QT-Opt (sim-to-real) and Q-Transformer. Q-Transformer outputs sharper Q-values for objects close to the robot, which can be grasped faster and more easily than the objects farther away.

Recently, the SayCan algorithm [8] was proposed as a way to combine large language models (LLMs) with learned policies and value functions to solve long-horizon tasks. In this framework, the value function for each available skill is used to determine the “affordance” of the current state for that skill, and a large language model then selects from among the available affordances to take a step towards performing some temporally extended task. For example, if the robot is commanded to bring all the items on a table, the LLM might propose a variety of semantically meaningful items, and select from among them based on the item grasping skill that currently has a high value (corresponding to items that the robot thinks it can grasp). SayCan uses QT-Opt in combination with sim-to-real transfer to train Q-functions for these affordances. In the following set of experiments, we demonstrate that Q-Transformer outperforms QT-Opt for affordance estimation without using any sim-to-real transfer, entirely using the real-world dataset that we employ in the preceding experiments.

Table 1: Affordance estimation comparison: precision, recall and F1 score when using Q-values to determine if a task is feasible. Q-Transformer (Q-T) with multi-task relabeling consistently produces better affordance estimates.

Model                  Precision  Recall  F1
QT-Opt (sim-to-real)   0.61       0.68    0.64
Q-T w/ relabel         0.76       0.89    0.82
Q-T w/o relabel        0.58       0.93    0.71

We first benchmark Q-Transformer on the problem of correctly estimating task affordances from the RT-1 dataset [1]. In addition to the standard training on demonstrations and autonomous data, we introduce training with relabeling, which we found particularly useful for affordance estimation. During relabeling, we sample a random alternate task for a given episode. We relabel the task name of the episode to the newly sampled task, and set the reward to 0.0. This ensures that the boundaries between tasks are more clearly learned during training. Table 1 shows a comparison of the performance of our model with and without relabeling, as well as the sim-to-real QT-Opt model used in SayCan [8]. Both of our models outperform the QT-Opt model on F1 score, with the relabeled model outperforming it by a large margin. This demonstrates that our Q-function can be effectively used for affordance estimation, even without training with sim-to-real transfer. Visualization of the Q-values produced by our Q-function can be found in Figure 8.
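A minimal sketch of this relabeling step, assuming episodes are stored as dictionaries with a task string and a scalar reward; the data format and task names here are illustrative, not the paper's actual storage format.

```python
import random

TASKS = ["pick coke can", "open top drawer", "close top drawer"]  # illustrative

def relabel(episode, tasks=TASKS):
    """Swap in a random alternate task and zero the reward, so the Q-function
    learns that the observed behavior does not accomplish the new task."""
    alternates = [t for t in tasks if t != episode["task"]]
    relabeled = dict(episode)
    relabeled["task"] = random.choice(alternates)
    relabeled["reward"] = 0.0
    return relabeled

episode = {"task": "pick coke can", "reward": 1.0}
print(relabel(episode))
```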
Table 2: Performance on SayCan-style long-horizon tasks: SayCan queries $Q(s,a)$ in planning to pick a language instruction, then runs a policy to execute the plan. Q-Transformer outperforms RT-1 with QT-Opt in both planning and execution.

Method (Affordance)    Method (Execution)   Planning (%)  Execution (%)
Q-T w/ relabel         Q-T                  93            93
QT-Opt (sim-to-real)   RT-1                 87            67

We then use Q-Transformer in a long-horizon SayCan-style evaluation, replacing both the sim-to-real QT-Opt model for affordance estimation, and the RT-1 policy for low-level robotic control. During this evaluation, a PaLM language model [77] is used to propose task candidates given a user query. Q-values are then used to pick the task candidate with the highest affordance score, which is then executed on the robot using the execution policy. The Q-Transformer used for affordance estimation is trained with relabeling. The Q-Transformer used for low-level control is trained without relabeling, since we found relabeling episodes at the task level did not improve execution performance. SayCan with Q-Transformer is better at both planning the sequence of tasks and executing those plans, as illustrated in Table 2.

H Real robotic manipulation tasks used in our evaluation

We include the complete list of evaluation tasks in our real robot experiments below.

Drawer pick and place: pick 7up can from top drawer and place on counter, place 7up can into top drawer, pick brown chip bag from top drawer and place on counter, place brown chip bag into top drawer, pick orange can from top drawer and place on counter, place orange can into top drawer, pick coke can from middle drawer and place on counter, place coke can into middle drawer, pick orange from middle drawer and place on counter, place orange into middle drawer, pick green rice chip bag from middle drawer and place on counter, place green rice chip bag into middle drawer, pick blue plastic bottle from bottom drawer and place on counter, place blue plastic bottle into bottom drawer, pick water bottle from bottom drawer and place on counter, place water bottle into bottom drawer, pick rxbar blueberry from bottom drawer and place on counter, place rxbar blueberry into bottom drawer.

Open and close drawer: open top drawer, close top drawer, open middle drawer, close middle drawer, open bottom drawer, close bottom drawer.

Move object near target: move 7up can near apple, move 7up can near blue chip bag, move apple near blue chip bag, move apple near 7up can, move blue chip bag near 7up can, move blue chip bag near apple, move blue plastic bottle near pepsi can, move blue plastic bottle near orange, move pepsi can near orange, move pepsi can near blue plastic bottle, move orange near blue plastic bottle, move orange near pepsi can, move redbull can near rxbar blueberry, move redbull can near water bottle, move rxbar blueberry near water bottle, move rxbar blueberry near redbull can, move water bottle near redbull can, move water bottle near rxbar blueberry, move brown chip bag near coke can, move brown chip bag near green can, move coke can near green can, move coke can near brown chip bag, move green can near brown chip bag, move green can near coke can, move green jalapeno chip bag near green rice chip bag, move green jalapeno chip bag near orange can, move green rice chip bag near orange can, move green rice chip bag near green jalapeno chip bag, move orange can near green jalapeno chip bag, move orange can near green rice chip bag, move redbull can near sponge, move sponge near water bottle, move sponge near redbull can, move water bottle near sponge, move 7up can near blue plastic bottle, move 7up can near green can, move blue plastic bottle near green can, move blue plastic bottle near 7up can, move green can near 7up can, move green can near blue plastic bottle, move apple near brown chip bag, move apple near green jalapeno chip bag, move brown chip bag near green jalapeno chip bag, move brown chip bag near apple, move green jalapeno chip bag near apple, move green jalapeno chip bag near brown chip bag.
w5ONmpgnfG | One-Shot Imitation Learning: A Pose Estimation Perspective

Pietro Vitiello*, The Robot Learning Lab, Imperial College London, pv2017@ic.ac.uk
Kamil Dreczkowski*, The Robot Learning Lab, Imperial College London, krd115@ic.ac.uk
Edward Johns, The Robot Learning Lab, Imperial College London, e.johns@imperial.ac.uk

Figure 1: We model one-shot imitation learning as trajectory transfer, where we use unseen object pose estimation to adapt an end-effector trajectory from a single demonstration, to a new scene where the object is in a novel pose. In this paper, we are going to study this formulation through a series of four investigations shown in the above boxes: the effect of calibration errors on task success rates (Section 4.1), the effect of pose estimation errors on task success rates (Sections 4.1 and 4.2), benchmarking on real-world tasks (Section 4.3), and spatial generalisation (Section 4.4).

Abstract: In this paper, we study imitation learning under the challenging setting of: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge. We show how, with these constraints, imitation learning can be formulated as a combination of trajectory transfer and unseen object pose estimation. To explore this idea, we provide an in-depth study on how state-of-the-art unseen object pose estimators perform for one-shot imitation learning on ten real-world tasks, and we take a deep dive into the effects that camera calibration, pose estimation error, and spatial generalisation have on task success rates. For videos, please visit www.robot-learning.uk/pose-estimation-perspective.

Keywords: One-Shot Imitation Learning, Unseen Object Pose Estimation, Robot Manipulation

1 Introduction

Imitation Learning (IL) can be a convenient and intuitive approach for teaching a robot how to perform a task. However, many of today’s methods for learning vision-based policies require tens to hundreds of demonstrations per task [1, 2, 3, 4, 5, 6]. Whilst combining with reinforcement learning [7, 8, 9] or pre-training on similar tasks [10, 11, 12] can help, in this paper we take a look at one-shot imitation learning, where we assume: (1) only a single demonstration, (2) no further data collection following the demonstration, and (3) no prior task or object knowledge.

*Joint First Author Contribution

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

With only a single demonstration and no prior knowledge about the task or the object(s) the robot is interacting with, the optimal imitation is one where the robot and object(s) are aligned in the same way as during the demonstration. For example, imitating a “scoop the egg” task (see Figure 1) could be achieved by aligning the spatula and the egg with the same sequence of relative poses as was provided during the demonstration.

But without any prior knowledge about the object(s), such as 3D object models, the reasoning required by the robot now distils down to an unseen object pose estimation problem: the robot must infer the relative pose between its current observation of the object(s) and its observation during the demonstration, in order to perform this trajectory transfer [13] (see Figure 2).
Unseen object pose estimation is already a challenging field within the computer vision community [14, 15, 16, 17], and these challenges are further compounded in a robotics setting.

From this standpoint, we are the first to study the utility of unseen object pose estimation for trajectory transfer in the context of one-shot IL, and we reveal new insights into the characteristics of such a formulation and how to mitigate its challenges. We begin our study by analysing how camera calibration and pose estimation errors affect the success rates of ten diverse real-world manipulation tasks, such as inserting a plug into a socket or placing a plate in a dishwasher rack.

Following this, we estimate the pose estimation errors of eight different unseen object pose estimators in simulation, including one based on NOPE [18], a state-of-the-art unseen object orientation estimation method, and one based on ASpanFormer [19], a state-of-the-art correspondence estimation method. We then benchmark trajectory transfer using these eight unseen object pose estimators against DOME [20], a state-of-the-art one-shot IL method, on the same ten real-world tasks as mentioned above. Our results not only show that the unseen object pose estimation formulation of one-shot IL is capable of outperforming DOME by 22% on average, but it is also applicable to a much wider range of tasks, including those for which a third-person perspective is necessary [21].

Finally, we evaluate the robustness of this formulation to changes in lighting conditions, and conclude our study by investigating how well it generalises spatially, as an object’s pose differs from its pose during the demonstration.

2 Related Work

Whilst there are many methods that study imitation learning with multiple demonstrations per task [3, 22, 23], in this section, we set our paper within the context of existing one-shot IL methods.

Trajectory Transfer. Trajectory transfer refers to adapting a demonstrated trajectory to a new test scene. Previous work has considered how to warp a trajectory from the geometry during the demonstration to the geometry at test time [13], focusing on non-rigid registration for manipulating deformable objects. However, when relying on only a single demonstration, they displayed very local generalisation to changes in object poses, suggesting the need for multiple demonstrations in order to achieve greater spatial generalisation [13, 24, 25]. In contrast, we are the first to study unseen object pose estimation for trajectory transfer, which enables spatial generalisation from only a single demonstration.

Methods that require further data collection. Since one demonstration often does not provide sufficient information to satisfactorily learn a task, some methods rely on further data collection. For instance, Coarse-to-Fine approaches [26, 27] train a visual servoing policy by collecting data around the object in a self-supervised manner. On the other hand, FISH [9] fine-tunes a base policy learned with IL using reinforcement learning and interactions with the environment. While these approaches have their strengths, the additional environment interactions require time and sometimes human supervision. In contrast, modelling one-shot IL as unseen object pose estimation avoids the need for real-world data collection, hence enabling scalable learning.

Methods that require prior knowledge. Another way of compensating for the lack of demonstration data is to leverage prior task knowledge.
For instance, many IL methods require access to object poses [28, 29, 30] or knowledge of the manipulated object categories [31], which is often impractical in everyday scenarios. Another approach that assumes prior knowledge for learning tasks from a single demonstration is meta-learning [10, 11, 12, 32, 33, 34, 35]. In this paradigm, a policy is pre-trained on a set of related tasks in order to infer actions for similar tasks from a single demonstration. However, the applicability of the learned policy is limited to tasks closely related to the meta-training dataset. Contrary to the meta-learning formulation of one-shot IL, we approach it as unseen object pose estimation, which assumes no prior knowledge and thus increases its generality.

Figure 2: Overview of our formulation for one-shot IL: (1) The robot receives a demonstration as an RGB-D image and an end-effector trajectory. (2) At deployment, the robot sees the object in a new pose and must adapt the demonstrated trajectory accordingly. (3) To do so, the robot uses unseen object pose estimation to estimate the object transformation between demonstration and deployment. (4) It then applies this transformation to the demonstrated trajectory. (5) Ultimately, this aligns the end-effector with the same object-centric poses as experienced during the demonstration.

Methods which do not require further training or prior knowledge. DOME [20] and FlowControl [36] are one-shot IL algorithms that assume no prior knowledge. The effectiveness of these methods hinges on their reliance on a wrist-mounted camera, limiting their applicability to tasks where hand-centric observability is sufficient [21] and in which hand-held objects do not occlude the wrist-mounted camera. In contrast, the one-shot IL formulation explored in this paper applies to a much wider spectrum of tasks, including those for which a third-person perspective is necessary [21], such as the dishwasher task considered in our experiments (Section 4).

3 One-shot Imitation Learning for Robotic Manipulation

In this work, we study one-shot IL under the challenging setting when there is: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge. This is an appealing setting to aim for, since it encourages the design of a general and efficient method. In this section, we explore this from the perspective of object pose estimation. First, we provide a formulation of IL. Then, we model one-shot IL for manipulation as a trajectory transfer problem. Finally, we introduce unseen object pose estimation, which underpins the trajectory transfer problem.

Trajectory transfer, illustrated in Figure 2, involves the robot adapting a demonstrated end-effector (EEF) trajectory for deployment. This is done by estimating the relative object pose using unseen object pose estimation from a pair of RGB-D images captured at the beginning of the demonstration and deployment phases, allowing for spatial generalisation of the demonstrated trajectory.

3.1 Imitation Learning Formulation

We observe demonstrations as trajectories $\tau = (\{x_t\}_{t=1}^{T}, s)$, where $x$ represents the state of the system, $T$ a finite horizon, and $s$ the context vector. The state $x$ encompasses various measurements relevant to the task and should include sufficient information for inferring optimal actions.
In the context of robotic manipulation, the state could be the EEF and/or object(s) pose(s). Similarly, the context vector $s$ captures various information regarding the conditions the demonstration was recorded under, and can assume different forms, ranging from an image of the task space captured before the demonstration to a task identifier in a multi-task setting.

Given a dataset of demonstrated trajectories $\mathcal{D} = \{\tau_i\}_{i=1}^{N}$, IL aims to learn a policy $\pi$ that satisfies the objective:

$$\pi = \arg\min_{\pi} D\left(q(x),\, p(x)\right), \qquad (1)$$

where $q(x)$ and $p(x)$ are the distributions of states induced by the demonstrator and policy respectively, and $D(q, p)$ is a distance measure between $q$ and $p$.

3.2 Modelling One-Shot Imitation Learning as Trajectory Transfer

In this section, we model one-shot IL as trajectory transfer (see Figure 2), which we define as the process at test time of moving the EEF, or a grasped object, to the same set of relative poses, with respect to a target object, that it had during the demonstration. Let $R$ define the frame of the robot and $E_t$ that of the EEF at time step $t$. A homogeneous transformation matrix $T_{AB}$ represents frame $B$ expressed in frame $A$. During demonstrations, the robot receives instructions through teleoperation or kinesthetic teaching, which are defined as:

$$\tau^{\text{Demo}} = \left(X^{\text{Demo}}_{R},\, I^{\text{Demo}}\right), \qquad (2)$$

and comprise an RGB-D image $I^{\text{Demo}}$ of the task space captured before the demonstration using an external camera, and a sequence of EEF poses from the demonstration, $X^{\text{Demo}}_{R} = \{T^{\text{Demo}}_{RE_t}\}_{t=1}^{T}$, expressed in the robot frame.

With only a single demonstration and no prior knowledge about the object the robot is interacting with, the optimal imitation can be considered to be where the robot and object(s) are aligned in the same way as during the demonstration, throughout the task. For example, imitating grasping a mug (see Figure 2) could be achieved by aligning the EEF in such a way that the relative pose between the EEF and mug is the same as during the demonstration. Note that this also holds for more complex trajectories beyond grasping, such as insertion or twisting manoeuvres. Moreover, if the task involves a grasped object, assuming that the latter is fixed and rigidly attached to the gripper, aligning the EEF will also align the grasped object.

Now, consider Equation 1 in the context of manipulation, where the optimal policy should result in replicating the demonstrated task state, $x = T_{OE}$, where $O$ is the object frame, at every timestep during deployment. This demonstrated sequence of EEF poses, expressed in the object frame, is defined as $X^{\text{Demo}}_{O} = \{T^{\text{Demo}}_{OE_t}\}_{t=1}^{T}$, where

$$T^{\text{Demo}}_{OE_t} = T^{\text{Demo}}_{OR}\, T^{\text{Demo}}_{RE_t} = \left(T^{\text{Demo}}_{RO}\right)^{-1} T^{\text{Demo}}_{RE_t}, \qquad (3)$$

and $T^{\text{Demo}}_{RO}$ is the unknown object pose during the demonstration. During deployment, for optimal imitation, $\pi$ should replicate $X^{\text{Demo}}_{O}$ given a novel unknown object pose $T^{\text{Test}}_{RO}$, i.e. we would like $T^{\text{Test}}_{OE_t} = T^{\text{Demo}}_{OE_t}$ during every timestep of the interaction. Hence, the sequence of EEF poses (expressed in the robot frame) that aligns with the demonstration can be defined as $X^{\text{Test}}_{R} = \{T^{\text{Test}}_{RE_t}\}_{t=1}^{T}$, where

$$T^{\text{Test}}_{RE_t} = T^{\text{Test}}_{RO}\, T^{\text{Demo}}_{OE_t}. \qquad (4)$$

Substituting Equation 3 into Equation 4 yields

$$T^{\text{Test}}_{RE_t} = T^{\text{Test}}_{RO}\, T^{\text{Demo}}_{OR}\, T^{\text{Demo}}_{RE_t} = {}^{R}\Delta T\; T^{\text{Demo}}_{RE_t}, \qquad (5)$$

where we define

$${}^{R}\Delta T = T^{\text{Test}}_{RO}\, T^{\text{Demo}}_{OR}, \qquad (6)$$

which represents the transformation of the object between the demonstration and deployment scenes, where we use the superscript $R$ to indicate that ${}^{R}\Delta T$ is expressed in the robot frame $R$.

This then leads to the crux of our investigation: the trajectory transfer problem, i.e.
computing $X^{\text{Test}}_{R}$ from $X^{\text{Demo}}_{R}$, distils down to the problem of estimating the relative object pose between the demonstration and deployment scenes, given a single image from each. Once this pose is estimated, controlling the EEF to follow this trajectory can simply make use of inverse kinematics. And given that we assume no prior object knowledge, the challenge becomes one of one-shot unseen object pose estimation, an active field in the computer vision community [14, 15, 16, 17, 18].

3.3 One-shot Unseen Object Pose Estimation for Trajectory Transfer

One-shot unseen object pose estimation is concerned with estimating the relative pose of a novel object visible in two images. Formally, let $C$ denote the frame of reference of a camera, and consider one image $I^{\text{Demo}}$, taken when a novel object was at a pose $T^{\text{Demo}}_{CO}$, and a second image $I^{\text{Test}}$, taken when that same object was at a pose $T^{\text{Test}}_{CO}$. One-shot unseen object pose estimation aims to estimate the relative transformation between the two object poses, ${}^{C}\Delta T$, that satisfies $T^{\text{Test}}_{CO} = {}^{C}\Delta T\, T^{\text{Demo}}_{CO}$, where we use the superscript $C$ to indicate that ${}^{C}\Delta T$ is expressed in the camera frame $C$. Rearranging this equation yields

$${}^{C}\Delta T = T^{\text{Test}}_{CO}\left(T^{\text{Demo}}_{CO}\right)^{-1} = T^{\text{Test}}_{CO}\, T^{\text{Demo}}_{OC}. \qquad (7)$$

Comparing Equations 6 and 7 reveals that ${}^{C}\Delta T$ and ${}^{R}\Delta T$ both represent the relative object pose, but are expressed in different frames of reference. In fact, after estimating ${}^{C}\Delta T$ using one-shot unseen object pose estimation, we can find ${}^{R}\Delta T$ from the following relationship derived in Appendix A:

$${}^{R}\Delta T = T_{RC}\;{}^{C}\Delta T\;\left(T_{RC}\right)^{-1} = T_{RC}\;{}^{C}\Delta T\; T_{CR}, \qquad (8)$$

where $T_{RC}$ is the pose of the camera in the robot frame. Hence, the trajectory transfer problem can now be solved by using one-shot unseen object pose estimation to calculate the value of ${}^{C}\Delta T$.
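Concretely, Equations 5 and 8 reduce trajectory transfer to a few homogeneous-matrix products. The following is a minimal numpy sketch, assuming all poses are given as 4×4 homogeneous matrices; the function and variable names, and the toy inputs, are ours, not the paper's implementation.

```python
import numpy as np

def transfer_trajectory(T_RC, delta_T_C, demo_traj_R):
    """Map demonstrated EEF poses to the test scene.

    T_RC:        camera pose in the robot frame (extrinsic calibration).
    delta_T_C:   relative object pose from unseen object pose estimation,
                 expressed in the camera frame (Equation 7).
    demo_traj_R: list of demonstrated EEF poses T^Demo_RE_t in the robot frame.
    """
    # Equation 8: express the relative object pose in the robot frame.
    delta_T_R = T_RC @ delta_T_C @ np.linalg.inv(T_RC)
    # Equation 5: left-multiply every demonstrated EEF pose.
    return [delta_T_R @ T for T in demo_traj_R]

# Toy example: identity calibration, object translated by 10 cm along x.
T_RC = np.eye(4)
delta_T_C = np.eye(4)
delta_T_C[0, 3] = 0.1
demo = [np.eye(4)]
print(transfer_trajectory(T_RC, delta_T_C, demo)[0])
```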
Examining Equation 8 reveals that there are two potential sources of error that could degrade the accuracy of ${}^{R}\Delta T$ and compromise performance during deployment. The first source of error is the error in extrinsic camera calibration, $T_{RC}$, and the second is the error in unseen object pose estimation itself, ${}^{C}\Delta T$, both of which we discuss and study in the following sections.

4 Experiments

Figure 3: The 10 real-world tasks we use for evaluation: put plate in dishwasher, insert cap into bottle, stack bowls, insert plug into socket, grasp mug, scoop an egg, insert bread into toaster, pour tea from kettle, grasp can, and place lid on pot.

We now introduce ten representative everyday robotics tasks that span a broad range of complexities. As depicted in Figure 3, these tasks are: placing one bowl into another (Bowls), inserting a plug into a socket (Plug), grasping a mug by the handle (Mug), scooping an egg from a pan (Egg), inserting bread into a toaster (Toaster), inserting a plate into a specific slot in a dish rack (Dishwasher), inserting a cap into a bottle (Cap), pouring a marble from a kettle into a mug (Tea), grasping a can (Can), and placing a lid onto a pot (Lid).

We begin this section by studying the effect of calibration and pose estimation errors on the success rate for each of these tasks (Section 4.1). We then consider eight unseen object pose estimation methods and estimate their pose estimation errors in simulation (Section 4.2). And finally, we benchmark these pose estimation methods when used for trajectory transfer on the discussed real-world tasks (Section 4.3), we study the robustness to changes in lighting (Section 4.3) and distractors (Appendix G.2), and we examine the spatial generalisation capabilities of trajectory transfer (Section 4.4). For videos of our experiments, please visit our website.

4.1 Sensitivity Analysis of Task Success Rates to Calibration and Pose Estimation Errors

Correlating task success rates with calibration and pose estimation errors in the real world is challenging. To establish a relationship between these errors and task success rates, we begin by providing a single demonstration via kinesthetic teaching from a last-inch setting. We then measure the correlation between the task success rate and the starting EEF position error prior to imitating the demonstration (see Appendix F.2). Finally, we map starting EEF position errors to either calibration or pose estimation errors using an empirically defined mapping (see Appendix B).

Specifically, for each considered task, we reset the object position, add a position error to the starting EEF pose, replay the demonstration, and note if the task execution is successful. This is repeated 10 times for each position noise magnitude, with noise magnitudes starting from 0 mm and increasing in 2 mm increments until the success rate is 0% over three consecutive noise magnitudes. This resulted in a total of approximately 1,500 real-world trajectories in order to establish the relationship between EEF position errors and task success rates for the considered tasks.

Then, to empirically map calibration errors or pose estimation errors to these starting EEF position errors, only one potential source of error was considered at a time. For example, when mapping translation errors in calibration to starting EEF position errors, we assumed that rotation errors in calibration as well as rotation and translation errors in pose estimation are all zero, which isolates the effect of translation errors in calibration on task success rates.

Figure 4: Correlation between error magnitudes in either calibration or pose estimation, and task success rates, assuming a distance of 80 cm between the camera and the task space.

The results for this experiment are shown in Figure 4, where each of the x-axes corresponds to a different type of error in calibration or pose estimation. The shape of the graph is identical for each error type, because there is a linear relationship between each of these errors and the error in starting EEF position. From these results, we can draw two main conclusions. Firstly, the task success rate for all tasks is more sensitive to errors in pose estimation than errors in camera calibration (for both translation and rotation errors), highlighting the importance of good pose estimation methods for this framework. Secondly, given typical performance of pose estimation and calibration methods, rotation errors are more problematic than position errors. For example, considering that rotation errors < 4° are more probable than position errors > 2 cm with today’s methods, we can see that a mere 4° error in calibration or 2° in pose estimation leads to an average success rate of 50%, whereas this same success rate would require a 7 cm error in calibration or 2 cm in pose estimation.

4.2 Pose Estimation Errors of One-Shot Unseen Object Pose Estimation Methods

We now consider eight different unseen object pose estimation methods and evaluate their pose estimation errors in simulation. To this end, we generate a simulated dataset consisting of 1100 image pairs of 55 different objects from Google Scanned Objects [37], using Blender [38] (see Appendix D.2).

Table 1: Simulation pose estimation errors and standard deviations for eight different methods.

Method        Translation Error [cm]   Rotation Error [deg]
Class.        5.9 ± 11.2               4.3 ± 10.1
ASpan. (FT)   6.0 ± 13.0               6.1 ± 13.4
Reg.          9.8 ± 15.3               9.4 ± 15.3
DINO          11.3 ± 18.2              11.6 ± 18.9
ASpan.        11.5 ± 17.4              11.7 ± 18.8
NOPE          18.8 ± 17.4              18.4 ± 17.2
ICP           14.3 ± 30.8              14.4 ± 32.4
GMFlow        28.4 ± 24.6              27.6 ± 24.5

The considered methods are:

1) ICP: We use the Open3D [39] implementation of point-to-point ICP [40]. ICP is given a total of 5 seconds to make each estimate, giving it enough time to try 50–150 different initialisations.
2) GMFlow: We use GMFlow [41] to estimate correspondences between two RGB images and solve for the relative object pose ${}^{C}\Delta T$ with Singular Value Decomposition (SVD) [42] using the depth image.
3) DINO: We use DINO [43] to extract descriptors for pixels in the two RGB images, and use the SciPy [44] implementation of the Hungarian algorithm to establish correspondences. Again, the relative pose ${}^{C}\Delta T$ is obtained using SVD.
4) ASpan.: We use the pre-trained ASpanFormer [19] to establish correspondences between two RGB images and estimate ${}^{C}\Delta T$ using SVD.
5) ASpan. (FT): The ASpan. baseline, with model weights fine-tuned on a custom object-centric dataset generated in Blender using ShapeNet and Google Scanned Objects [37].
6) NOPE: We use the pre-trained NOPE [18] model to estimate the relative object rotation from two images, and a heuristic that centres two partial point clouds to predict the relative translation.
7) Reg.: We train an object-agnostic PointNet++ to regress relative object orientations around the world’s vertical axis from two coloured point clouds, using data generated in simulation and domain randomisation. We solve for the relative translation using a heuristic that centres two partial point clouds.
8) Class.: This is equivalent to the Reg. baseline with the exception that PointNet++ is trained to classify the relative object orientation.

We also experimented with predicting the relative object translation from pairs of RGB-D images for the NOPE, Reg. and Class. baselines. However, we found that a simple heuristic that centres partial point clouds for translation prediction had a similar performance, and thus used this during inference. We refer the reader to Appendix C for a more detailed description of all of these methods.
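Several of the correspondence-based baselines above (GMFlow, DINO, ASpan.) share the same final step: recovering the rigid transform from matched 3D points via SVD [42]. Below is a minimal numpy sketch of that alignment step, with randomly generated correspondences standing in for real matches; the function name is ours.

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t, for (N, 3) point arrays."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Sanity check with a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_transform_from_correspondences(P, Q)
print(np.allclose(R, R_true), np.round(t, 3))
```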
The results for this experiment are shown in Table 1 (see Appendix E for an error definition and a discussion of these results). Although directly comparing these results to Figure 4 suggests that none of these baselines would be suitable for learning the considered tasks, in practice we found that translation and rotation errors in pose estimates are often coupled and partially cancel each other out, while Figure 4 only considers isolated errors. These observations are further reinforced by the strong performance we found with these baselines in our real-world experiments.

4.3 Real-World Evaluation

We now investigate if the trajectory transfer formulation of one-shot IL can learn real-world, everyday tasks of varying tolerances.

Implementation Details. For trajectory transfer, we use a given unseen object pose estimator to estimate ${}^{C}\Delta T$, and Equations 8 and 5 alongside inverse kinematics to align the robot with the first state of the demonstration. From this state, we align the full robot trajectory with the demonstrated trajectory, following Appendix F.2. In order to isolate the object of interest from the background, we segment it from the RGB-D image captured before the demonstration and deployment using a combination of OWL-ViT [45] and SAM [46]. Both segmented RGB-D images are subsequently downsampled to ensure compatibility with a given method. See Appendix F for further details.

Experimental Procedure. We conduct experiments using a 7-DoF Sawyer robot operating at 30 Hz. The robot is equipped with a head-mounted camera capturing 640-by-480 RGB-D images. The task space is defined as a 30 × 75 cm region on a table in front of the robot, which is further divided into 10 quadrants measuring 15 × 15 cm each. During the demonstration phase, all objects are positioned in approximately the same location near the middle of the task space (see Figure 5), and a single demonstration is provided for each task via kinesthetic teaching from a last-inch setting. In the testing phase, the object is randomly placed within each quadrant with a random orientation difference of up to 45° relative to the demonstration orientation. We test each method on a single object pose within each of the quadrants, resulting in 10 evaluations per method.

Table 2: Real-world success rates (%), from ten trials for each combination of method and task. TT (Trajectory Transfer) is used to distinguish all the previously discussed baselines from DOME [20].

Method                 Plug  Pot   Toaster  Dishwasher  Mug   Egg   Bottle  Tea   Bowls  Can   Mean
TT (GMFlow)            10    40    0        20          40    20    20      20    60     60    29
TT (ASpanFormer (FT))  0     10    10       50          60    50    50      30    30     50    34
TT (ASpanFormer)       0     10    0        20          50    50    50      40    60     70    35
TT (DINO)              0     20    10       30          50    60    40      40    80     70    40
TT (NOPE)              0     10    0        50          0     70    90      100   70     100   49
DOME                   0     10    80       0           100   70    40      90    70     100   56
TT (ICP)               10    70    80       40          60    80    100     100   100    100   74
TT (Class.)            20    10    90       70          100   90    100     100   80     100   76
TT (Reg.)              20    30    90       70          100   90    100     100   80     100   78
Mean                   6.7   23.3  40       46.7        62.2  64.4  65.6    68.9  70     83.3

Results. The results for this experiment are shown in Table 2, with tasks ordered by mean success rate across methods and methods ordered by mean success rate across tasks. These results also include a comparison against DOME [20], a state-of-the-art one-shot IL method. We observe that for DOME, the majority of failure cases are caused by its inaccurate segmentation of target objects. Its poor performance on the dishwasher task is attributed to the fact that the demonstration had to be started from further away, as DOME requires the object to be fully visible from a wrist-mounted camera. As a result, DOME was beaten on average by three of the baselines.

The Reg. and Class. baselines had the best performance on average, likely due to the fact that their training data was tailored to object manipulation (see Appendix D.1). ICP’s performance was affected by the partial nature of the point clouds, which sometimes caused it to converge to local optima. NOPE found itself out of its training distribution: being trained on images with the object in the centre, NOPE can confuse a relative translation for a rotation when an object is displaced from the image centre. DINO uses semantic descriptors, which cause keypoints to be locally similar, translating into matches that are coarse and not precise. ASpanFormer was trained on images of entire scenes with many objects, hence expecting scenes rich with features.
Therefore, predicting correspondences for a single segmented object causes this method to perform poorly. Meanwhile, we note that the fine-tuned ASpanFormer’s performance drops significantly more with the sim-to-real gap than that of the Reg. and Class. methods. Lastly, GMFlow was found to poorly estimate rotations, as the predicted flow tended to be smooth and consistent across pixels.

Robustness to Changes in Lighting Conditions. We now focus on trajectory transfer using regression, the best-performing method in our real-world experiments, and analyse its robustness to changes in lighting conditions. To this end, we rerun the real-world experiment for this method while additionally randomising the position, luminosity, and colour temperature of an external LED light source before each rollout. The results from this experiment indicate that trajectory transfer using regression remains strong, with an average decrease in performance of only 8% when the lighting conditions are randomised significantly between the demonstration and test scene. We attribute this strong performance to the fact that the dataset used to train this baseline randomises lighting conditions between the two input images as part of domain randomisation. For full details regarding this experiment and its results, we refer the reader to Appendix G.1.

4.4 Spatial Generalisation

Figure 5: The correlation between success rate and displacement from the place where the demonstration was given (the per-quadrant success rates shown in the figure are 58%, 65%, 64%, 60%, 50%, 44%, 44%, 44%, 47%, and 50%).

Another insight that emerged from the real-world experiments is the impact of the relative object pose between the demonstration and deployment on the average performance of trajectory transfer. When we aggregate the success rates across all baselines, tasks and poses within each of the quadrants, we notice a decline in the success rate of trajectory transfer as the object pose deviates from the demonstration pose. In Figure 5, we display a mug at the approximate location where all objects were placed during demonstrations (labelled as DEMO), as well as a mug at the centre of each of the quadrants. The opacity of the mugs located in the different quadrants is proportional to the average success rate for those quadrants, which is also displayed in white text. Note that whilst in this figure the orientation of the mug is fixed, the experiments did randomise the orientations.

The cause of this behaviour lies in the camera perspective. Specifically, even when kept at a fixed orientation, simply changing the position of an object will result in changes to its visual appearance. Moreover, contrary to the effect of errors in camera calibration (see Appendix B.1), the changes in the visual appearance lessen as the camera is placed further away from the task space. These insights might seem intuitive, but for this same reason, they could be easily overlooked by researchers in the field. As a result, for optimal spatial generalisation, we recommend providing demonstrations at the centre of the task space, as this minimises the variations in the object appearance when the object’s pose deviates from the demonstration pose.

5 Discussion and Limitations

By formulating one-shot IL using unseen object pose estimation, we are able to learn new tasks without prior knowledge, from a single demonstration and no further data collection.
We demonstrate this from a theoretical perspective and show its potential when applied to real-world tasks.

One limitation of this method is that we do not address generalisation to intra-class instances. Using semantic visual correspondences [47] is a promising future direction here. Another limitation is the reliance on camera calibration. However, our analysis of calibration errors and real-world experiments do indicate good performance given typical calibration errors.

Although the proposed method has demonstrated to be very versatile in the types of tasks it can learn, in our setup it required a static scene. This is because the robot arm often occludes the task space given the head-mounted camera on our Sawyer robot, making it impossible to continuously estimate the object pose during deployment. However, this is a limitation of the hardware setup and not a fundamental limitation of the method. By optimising the camera placement for minimum occlusions, trajectory transfer could be deployed in a closed loop and in dynamic scenes.

Finally, the current formulation is unsuitable for tasks that depend on the relative pose between two objects, where neither of them is rigidly attached to the EEF. For instance, pushing an object close to another cannot rely on the rigid transfer of the trajectory, because the latter needs to be adapted according to the relative pose of the two objects. However, such tasks are fundamentally ambiguous with only a single demonstration, and multiple demonstrations would be required.

Acknowledgments

We would like to thank all reviewers for their thorough and insightful feedback, which had a significant impact on our paper.

References

[1] S. Young, D. Gandhi, S. Tulsiani, A. Gupta, P. Abbeel, and L. Pinto. Visual imitation made easy. In J. Kober, F. Ramos, and C. Tomlin, editors, Proceedings of the 2020 Conference on Robot Learning, volume 155 of Proceedings of Machine Learning Research, pages 1992–2005. PMLR, 16–18 Nov 2021. URL https://proceedings.mlr.press/v155/young21a.html.

[2] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z: Zero-shot task generalization with robotic imitation learning. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=8kbp23tSGYv.

[3] A. Brohan et al. RT-1: Robotics Transformer for Real-World Control at Scale. In Proceedings of Robotics: Science and Systems, Daegu, Republic of Korea, July 2023. doi:10.15607/RSS.2023.XIX.025.

[4] T. Yu, T. Xiao, J. Tompson, A. Stone, S. Wang, A. Brohan, J. Singh, C. Tan, D. M, J. Peralta, K. Hausman, B. Ichter, and F. Xia. Scaling Robot Learning with Semantically Imagined Experience. In Proceedings of Robotics: Science and Systems, Daegu, Republic of Korea, July 2023. doi:10.15607/RSS.2023.XIX.027.

[5] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, S. Kirmani, B. Zitkovich, F. Xia, C. Finn, and K. Hausman. Open-world object manipulation using pre-trained vision-language models. In 7th Annual Conference on Robot Learning, 2023. URL https://openreview.net/forum?id=9al6taqfTzr.

[6] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. In K. E. Bekris, K. Hauser, S. L. Herbert, and J. Yu, editors, Robotics: Science and Systems XIX, Daegu, Republic of Korea, July 10-14, 2023, 2023. doi:10.15607/RSS.2023.XIX.016. URL https://doi.org/10.15607/RSS.2023.XIX.016.

[7] Y. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tunyasuvunakool, J. Kramár, R.
Had-sell, N. de Freitas, and N. Heess. Reinforcement and imitation learning for diverse visuomotorskills. In Proceedings of Robotics: Science and Systems , Pittsburgh, Pennsylvania, June 2018.doi:10.15607/RSS.2018.XIV .009.[8] N. Das, S. Bechtle, T. Davchev, D. Jayaraman, A. Rai, and F. Meier. Model-based inversereinforcement learning from visual demonstrations. In J. Kober, F. Ramos, and C. Tomlin,editors, Proceedings of the 2020 Conference on Robot Learning , volume 155 of Proceedingsof Machine Learning Research , pages 1930–1942. PMLR, 16–18 Nov 2021. URL https://proceedings.mlr.press/v155/das21a.html .[9] S. Haldar, J. Pari, A. Rai, and L. Pinto. Teach a Robot to FISH: Versatile Imitation from OneMinute of Demonstrations. In Proceedings of Robotics: Science and Systems , Daegu, Republicof Korea, July 2023. doi:10.15607/RSS.2023.XIX.009.[10] Y . Duan, M. Andrychowicz, B. Stadie, O. Jonathan Ho, J. Schneider, I. Sutskever,P. Abbeel, and W. Zaremba. One-shot imitation learning. In I. Guyon, U. V . Luxburg,S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Ad-vances in Neural Information Processing Systems , volume 30. Curran Associates, Inc.,2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/ba3866600c3540f67c1e9575e213be0a-Paper.pdf .[11] C. Finn, T. Yu, T. Zhang, P. Abbeel, and S. Levine. One-shot visual imitation learning via meta-learning. In S. Levine, V . Vanhoucke, and K. Goldberg, editors, Proceedings of the 1st AnnualConference on Robot Learning , volume 78 of Proceedings of Machine Learning Research ,pages 357–368. PMLR, 13–15 Nov 2017. URL https://proceedings.mlr.press/v78/finn17a.html .9[12] T. Yu, C. Finn, S. Dasari, A. Xie, T. Zhang, P. Abbeel, and S. Levine. One-shot imitationfrom observing humans via domain-adaptive meta-learning. In H. Kress-Gazit, S. S. Srinivasa,T. Howard, and N. Atanasov, editors, Robotics: Science and Systems XIV , Carnegie MellonUniversity, Pittsburgh, Pennsylvania, USA, June 26-30, 2018 , 2018. doi:10.15607/RSS.2018.XIV .002. URL http://www.roboticsproceedings.org/rss14/p02.html .[13] J. Schulman, J. Ho, C. Lee, and P. Abbeel. Learning from Demonstrations Through the Use ofNon-rigid Registration , pages 339–354. Springer International Publishing, Robotics Research:The 16th International Symposium ISRR, Cham, 2016. ISBN 978-3-319-28872-7. doi:10.1007/978-3-319-28872-7 20. URL https://doi.org/10.1007/978-3-319-28872-7_20.[14] G. Pitteri, A. Bugeau, S. Ilic, and V . Lepetit. 3D Object Detection and Pose Estimation ofUnseen Objects in Color Images with Local Surface Embeddings. In Asian Conference onComputer Vision , 2020.[15] M. Gou, H. Pan, H. Fang, Z. Liu, C. Lu, and P. Tan. Unseen Object 6D Pose Estimation: ABenchmark and Baselines. ArXiv , abs/2206.11808, 2022.[16] J. Wu, Y . Wang, and R. Xiong. Unseen object pose estimation via registration. In 2021IEEE International Conference on Real-time Computing and Robotics (RCAR) , pages 974–979, 2021. doi:10.1109/RCAR52367.2021.9517491.[17] K. Park, A. Mousavian, Y . Xiang, and D. Fox. LatentFusion: End-to-End Differentiable Re-construction and Rendering for Unseen Object Pose Estimation. In Proceedings of the IEEEConference on Computer Vision and Pattern Recognition , 2020.[18] V . N. Nguyen, T. Groueix, Y . Hu, M. Salzmann, and V . Lepetit. NOPE: Novel Object PoseEstimation from a Single Image. arXiv preprint arXiv:2303.13612 , 2023.[19] H. Chen, Z. Luo, L. Zhou, Y . Tian, M. Zhen, T. Fang, D. McKinnon, Y . Tsin, and L. 
Quan.Aspanformer: Detector-free image matching with adaptive span transformer. European Con-ference on Computer Vision (ECCV) , 2022.[20] E. Valassakis, G. Papagiannis, N. Di Palo, and E. Johns. Demonstrate Once, Imitate Imme-diately (DOME): Learning Visual Servoing for One-Shot Imitation Learning. In IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , 2022.[21] K. Hsu, M. J. Kim, R. Rafailov, J. Wu, and C. Finn. Vision-based manipulators need to alsosee from their hands. In International Conference on Learning Representations , 2022. URLhttps://openreview.net/forum?id=RJkAHKp7kNZ .[22] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-maron, M. Gim ́enez,Y . Sulsky, J. Kay, J. T. Springenberg, T. Eccles, J. Bruce, A. Razavi, A. Edwards, N. Heess,Y . Chen, R. Hadsell, O. Vinyals, M. Bordbar, and N. de Freitas. A generalist agent. Transac-tions on Machine Learning Research , 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=1ikK0kHjvj . Featured Certification, Outstanding Certification.[23] V . V osylius and E. Johns. Few-shot in-context imitation learning via implicit graph align-ment. In 7th Annual Conference on Robot Learning , 2023. URL https://openreview.net/forum?id=CnKf9TyYtf2 .[24] A. X. Lee, A. Gupta, H. Lu, S. Levine, and P. Abbeel. Learning from multiple demonstrationsusing trajectory-aware non-rigid registration with applications to deformable object manip-ulation. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) ,pages 5265–5272, 2015. URL https://api.semanticscholar.org/CorpusID:7736763 .[25] A. X. Lee, S. H. Huang, D. Hadfield-Menell, E. Tzeng, and P. Abbeel. Unifying scene registra-tion and trajectory optimization for learning from demonstrations with application to manipula-tion of deformable objects. 2014 IEEE/RSJ International Conference on Intelligent Robots andSystems , pages 4402–4407, 2014. URL https://api.semanticscholar.org/CorpusID:16120912 .10[26] E. Johns. Coarse-to-Fine Imitation Learning: Robot Manipulation from a Single Demonstra-tion. In IEEE International Conference on Robotics and Automation (ICRA) , 2021.[27] N. Di Palo and E. Johns. Learning Multi-Stage Tasks with One Demonstration via Self-Replay.InConference on Robot Learning (CoRL) , 2021.[28] Y . Huang, J. Silv ́erio, L. Rozo, and D. G. Caldwell. Generalized Task-Parameterized SkillLearning. In 2018 IEEE International Conference on Robotics and Automation (ICRA) , pages5667–5474, 2018. doi:10.1109/ICRA.2018.8461079.[29] J. Li, M. Cong, D. Liu, and Y . Du. Enhanced task parameterized dynamic movement primitivesby gmm to solve manipulation tasks. Robotic Intelligence and Automation , 43(2):85–95, 2023.[30] Y . Hu, M. Cui, J. Duan, W. Liu, D. Huang, A. Knoll, and G. Chen. Model predictive opti-mization for imitation learning from demonstrations. Robotics and Autonomous Systems , 163:104381, 2023.[31] B. Wen, W. Lian, K. Bekris, and S. Schaal. You Only Demonstrate Once: Category-LevelManipulation from Single Visual Demonstration. In Proceedings of Robotics: Science andSystems , New York City, NY , USA, June 2022. doi:10.15607/RSS.2022.XVIII.044.[32] A. Bonardi, S. James, and A. J. Davison. Learning one-shot imitation from humans withouthumans. IEEE Robotics and Automation Letters , 5(2):3533–3539, 2020. doi:10.1109/LRA.2020.2977835.[33] X. Yang, Y . Peng, W. Li, J. Z. Wen, and D. Zhou. Vision-based one-shot imitationlearning supplemented with target recognition via meta learning. 
In 2021 IEEE Interna-tional Conference on Mechatronics and Automation (ICMA) , pages 1008–1013, 2021. doi:10.1109/ICMA52036.2021.9512607.[34] Z. Mandi, F. Liu, K. Lee, and P. Abbeel. Towards more generalizable one-shot visual imitationlearning. In 2022 International Conference on Robotics and Automation (ICRA) , pages 2434–2444, 2022. doi:10.1109/ICRA46639.2022.9812450.[35] M. Sieb, Z. Xian, A. Huang, O. Kroemer, and K. Fragkiadaki. Graph-structured visual imita-tion. In L. P. Kaelbling, D. Kragic, and K. Sugiura, editors, Proceedings of the Conference onRobot Learning , volume 100 of Proceedings of Machine Learning Research , pages 979–989.PMLR, 30 Oct–01 Nov 2020. URL https://proceedings.mlr.press/v100/sieb20a.html .[36] M. Argus, L. Hermann, J. Long, and T. Brox. Flowcontrol: Optical flow based visual servoing.In2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) , pages7534–7541, 2020. doi:10.1109/IROS45743.2020.9340942.[37] L. Downs, A. Francis, N. Koenig, B. Kinman, R. Hickman, K. Reymann, T. B. McHugh,and V . Vanhoucke. Google Scanned Objects: A High-Quality Dataset of 3D ScannedHousehold Items. In 2022 International Conference on Robotics and Automation (ICRA) ,page 2553–2560. IEEE Press, 2022. doi:10.1109/ICRA46639.2022.9811809. URL https://doi.org/10.1109/ICRA46639.2022.9811809 .[38] B. O. Community. Blender - a 3D modelling and rendering package . Blender Foundation,Stichting Blender Foundation, Amsterdam, 2018. URL http://www.blender.org .[39] Q.-Y . Zhou, J. Park, and V . Koltun. Open3D: A modern library for 3D data processing.arXiv:1801.09847 , 2018.[40] P. Besl et al. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysisand Machine Intelligence , 14(2):239–256, 1992. doi:10.1109/34.121791.[41] H. Xu, J. Zhang, J. Cai, H. Rezatofighi, and D. Tao. Gmflow: Learning optical flow viaglobal matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 8121–8130, 2022.11[42] K. S. Arun, T. S. Huang, and S. D. Blostein. Least-squares fitting of two 3-d point sets. IEEETransactions on Pattern Analysis and Machine Intelligence , PAMI-9(5):698–700, 1987. doi:10.1109/TPAMI.1987.4767965.[43] S. Amir, Y . Gandelsman, S. Bagon, and T. Dekel. Deep ViT Features as Dense Visual Descrip-tors. ECCVW What is Motion For? , 2022.[44] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski,P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman,N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, ̇I. Polat, Y . Feng, E. W.Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero,C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. NatureMethods , 17:261–272, 2020. doi:10.1038/s41592-019-0686-2.[45] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Ma-hendran, A. Arnab, M. Dehghani, Z. Shen, X. Wang, X. Zhai, T. Kipf, and N. Houlsby. Simpleopen-vocabulary object detection with vision transformers. ECCV , 2022.[46] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead,A. C. Berg, W.-Y . Lo, P. Dollar, and R. Girshick. Segment anything. In Proceedings of theIEEE/CVF International Conference on Computer Vision (ICCV) , pages 4015–4026, October2023.[47] D. Hadjivelichkov, S. Zwane, M. Deisenroth, L. 
Agapito, and D. Kanoulas. One-Shot Transfer of Affordance Regions? AffCorrs! In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedings of The 6th Conference on Robot Learning (CoRL), volume 205 of Proceedings of Machine Learning Research, pages 550–560, 14–18 Dec 2023.

[48] R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J.-M. Frahm. USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):2022–2038, 2013. doi:10.1109/TPAMI.2012.257.

[49] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. In M. A. Fischler and O. Firschein, editors, Readings in Computer Vision, pages 726–740. Morgan Kaufmann, San Francisco (CA), 1987. ISBN 978-0-08-051581-6. doi:https://doi.org/10.1016/B978-0-08-051581-6.50070-2. URL https://www.sciencedirect.com/science/article/pii/B9780080515816500702.

[50] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.

[51] H. W. Kuhn. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly, 2(1–2):83–97, March 1955. doi:10.1002/nav.3800020109.

[52] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/d8bf84be3800d12f74d8b05e9b89836f-Paper.pdf.

[53] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 23–30, 2017. doi:10.1109/IROS.2017.8202133.

[54] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015.

[55] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Y. Bengio and Y. LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.

[56] Poly Haven, 2023. URL https://polyhaven.com/.

[57] E. Olson. AprilTag: A robust and flexible visual fiducial system. In 2011 IEEE International Conference on Robotics and Automation, pages 3400–3407, 2011. doi:10.1109/ICRA.2011.5979561.

Appendix A Changing the Coordinate Frame of $\Delta T$

In the main paper, we state that the equation below changes the frame of reference of $\Delta T$ from the camera's frame $C$ to the robot's frame $R$:

$${}^{R}\Delta T = T_{RC}\;{}^{C}\Delta T\; T_{CR} = T_{RC}\;{}^{C}\Delta T\;\left(T_{RC}\right)^{-1}, \qquad (9)$$

where $T_{RC}$ is the pose of the camera in the robot frame.
To derive this relationship, we begin by referring to the definition of ${}^{C}T$ presented in Equation 7 of the main paper:
$${}^{C}T = T^{\mathrm{Test}}_{CO}\, T^{\mathrm{Demo}}_{OC} = T^{\mathrm{Test}}_{CO}\, \big(T^{\mathrm{Demo}}_{CO}\big)^{-1}.$$
Rearranging this equation yields:
$$T^{\mathrm{Test}}_{CO} = {}^{C}T\, T^{\mathrm{Demo}}_{CO}. \quad (10)$$
Additionally, we know that
$$T^{\mathrm{Test}}_{RO} = T_{RC}\, T^{\mathrm{Test}}_{CO}. \quad (11)$$
By substituting Equation 10 into Equation 11, we obtain:
$$T^{\mathrm{Test}}_{RO} = T_{RC}\, T^{\mathrm{Test}}_{CO} = T_{RC}\, {}^{C}T\, T^{\mathrm{Demo}}_{CO}. \quad (12)$$
However,
$$T^{\mathrm{Demo}}_{CO} = T_{CR}\, T^{\mathrm{Demo}}_{RO} = (T_{RC})^{-1}\, T^{\mathrm{Demo}}_{RO}.$$
By substituting this relationship into Equation 12, we obtain
$$T^{\mathrm{Test}}_{RO} = T_{RC}\, {}^{C}T\, T_{CR}\, T^{\mathrm{Demo}}_{RO},$$
which can be rearranged to obtain
$$T_{RC}\, {}^{C}T\, T_{CR} = T^{\mathrm{Test}}_{RO}\, T^{\mathrm{Demo}}_{OR} = T^{\mathrm{Test}}_{RO}\, \big(T^{\mathrm{Demo}}_{RO}\big)^{-1}.$$
Since the right-hand side of this equation is consistent with the definition of ${}^{R}T$ presented in Equation 6 of the main paper, we conclude that
$${}^{R}T = T_{RC}\, {}^{C}T\, T_{CR}.$$

Appendix B Mapping End-effector Errors to Calibration and Pose Estimation Errors

In Section 4.1 of the main paper, we detail our experimental procedure to derive the correlation between task success rates and starting end-effector position errors prior to replaying a last-inch demonstration. We then mention that we map these end-effector position errors to the corresponding calibration or pose estimation errors. In this section, we describe our procedure for deriving the empirical mapping between end-effector position errors and the corresponding calibration (Appendix B.1) and pose estimation errors (Appendix B.2).

B.1 Errors in Camera Calibration

We derive the empirical relationship between end-effector position errors and camera calibration errors using an experiment in which we repeatedly (1) sample a starting end-effector pose for a demonstration, $T^{\mathrm{Demo}}_{RE}$, (2) sample a relative object pose between the demonstration and deployment, ${}^{C}T$, and (3) compute the accuracy with which we can calculate the corresponding starting end-effector pose for deployment, $T^{\mathrm{Test}}_{RE}$, using trajectory transfer, after injecting controlled amounts of noise into the camera calibration matrix $T_{RC}$.

Experimental Procedure. We begin by calibrating a head-mounted camera to a Sawyer robot in the real world, obtaining an estimate of the camera pose:
$$T_{RC} = [R_{RC} \mid t_{RC}],$$
where $R_{RC} \in SO(3)$ is a rotation matrix and $t_{RC} \in \mathbb{R}^3$ is a translation vector. We then sample a random end-effector pose $T^{\mathrm{Demo}}_{RE}$, which can be interpreted as the end-effector pose at the beginning of a demonstration. Further, we sample a random relative object movement expressed in the camera frame,
$${}^{C}T = [{}^{C}R \mid {}^{C}t],$$
where ${}^{C}R \in SO(3)$ is a rotation matrix obtained from a randomly sampled rotation vector with a rotation magnitude sampled from the interval $[0°, 45°]$, and ${}^{C}t \in \mathbb{R}^3$ is a randomly sampled translation vector with a magnitude sampled from the interval $[0, 0.4]$ m (see the sketch below).
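A minimal sketch of this sampling step (our own illustration, assuming SciPy [44]; the function name and defaults are hypothetical):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def sample_relative_object_pose(max_angle_deg=45.0, max_translation_m=0.4):
    """Sample ^C T: a rotation about a random axis with magnitude in [0, 45] degrees,
    and a translation along a random direction with magnitude in [0, 0.4] m."""
    axis = np.random.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(np.random.uniform(0.0, max_angle_deg))

    direction = np.random.normal(size=3)
    direction /= np.linalg.norm(direction)
    magnitude = np.random.uniform(0.0, max_translation_m)

    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(angle * axis).as_matrix()
    T[:3, 3] = magnitude * direction
    return T
```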
The interval $[0°, 45°]$ was chosen for consistency with our real-world experiments, while the maximum magnitude of the translation is equal to half of the longest side of our workspace, and corresponds to the maximum translation that is allowable between a demonstration and deployment when the demonstration is given with the object at the centre of the workspace.

After sampling the hypothetical end-effector pose $T^{\mathrm{Demo}}_{RE}$ and the relative object pose ${}^{C}T$, we use them together with the camera extrinsic matrix $T_{RC}$ to calculate the desired end-effector pose at test time,
$$T^{\mathrm{Test}}_{RE} = [R^{\mathrm{Test}}_{RE} \mid t^{\mathrm{Test}}_{RE}],$$
using trajectory transfer (see Equations 5 and 8 in the main paper).

Once we compute the desired end-effector pose via trajectory transfer, we either perturb the rotation matrix of the camera calibration matrix, resulting in
$$\hat{T}_{RC} = [\Delta R\, R_{RC} \mid t_{RC}],$$
where $\Delta R \in SO(3)$ is a rotation matrix obtained from a randomly sampled rotation vector with a predetermined rotation magnitude, or perturb the translation vector of the camera calibration matrix, resulting in
$$\hat{T}_{RC} = [R_{RC} \mid t_{RC} + \Delta t],$$
where $\Delta t \in \mathbb{R}^3$ is a randomly sampled vector with a predetermined magnitude.

After perturbing the camera pose, we estimate the desired end-effector pose,
$$\hat{T}^{\mathrm{Test}}_{RE} = [\hat{R}^{\mathrm{Test}}_{RE} \mid \hat{t}^{\mathrm{Test}}_{RE}],$$
using trajectory transfer and the noisy camera calibration matrix $\hat{T}_{RC}$. Finally, we calculate the error between the ground truth and estimated end-effector poses, $T^{\mathrm{Test}}_{RE}$ and $\hat{T}^{\mathrm{Test}}_{RE}$, using the following equations:
$$t_{\mathrm{error}} = \big\| t^{\mathrm{Test}}_{RE} - \hat{t}^{\mathrm{Test}}_{RE} \big\|_2, \qquad R_{\mathrm{error}} = \big\| \log\big( R^{\mathrm{Test}}_{RE}\, (\hat{R}^{\mathrm{Test}}_{RE})^{T} \big) \big\|_2.$$

We repeat this procedure for 1000 different randomly sampled relative object poses ${}^{C}T$, for hypothetical end-effector poses $T^{\mathrm{Demo}}_{RE}$ with end-effector-to-camera distances ranging from 0.2 m to 1.2 m, increasing in increments of 0.1 m.

We present the results from this experiment in Figure 6. The first two graphs focus on translation errors in camera calibration and their impact on trajectory transfer. The bottom two graphs examine rotation errors in camera calibration and their influence on trajectory transfer.

Interesting Findings. The top graph of Figure 6 reveals that translation errors in camera calibration lead to proportional errors in trajectory transfer. However, it is important to note that the magnitudes of the errors in the starting end-effector positions are smaller than the magnitudes of the calibration errors. Additionally, from the second graph we find that translation errors in camera calibration do not affect rotation errors in trajectory transfer, which aligns with our expectations.

Moving on to rotation errors in camera calibration (third graph in Figure 6), we notice that the translation error in trajectory transfer depends not only on the rotation error but also on the distance between the end-effector and the camera. This relationship is logical since rotations occur around the camera frame, and the resulting translation induced by an error in rotation is proportional to the distance from the frame of rotation. Hence, a camera should be placed as close as possible to the robot's workspace to attenuate the effects of rotation errors in camera calibration on trajectory transfer. Furthermore, we observe that rotation errors in camera calibration (last graph in Figure 6) correspond to proportional rotation errors in trajectory transfer, although the latter are consistently smaller in magnitude.

Figure 6: The relationship between the end-effector-to-camera distance, the magnitude of the error in camera calibration, and the error in trajectory transfer (i.e. the error in the calculated starting pose of the end-effector prior to replaying a demonstration during deployment).
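For reference, the two error metrics above can be computed in a few lines. This is our own sketch, assuming SciPy [44]; `Rotation.as_rotvec` gives the rotation vector whose norm equals the $SO(3)$ geodesic distance used here:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_errors(T_gt, T_est):
    """Translation and rotation error between two 4x4 end-effector poses.

    The rotation error is the norm of the rotation vector (logarithmic map)
    of R_gt @ R_est^T, in radians.
    """
    t_error = np.linalg.norm(T_gt[:3, 3] - T_est[:3, 3])
    R_rel = T_gt[:3, :3] @ T_est[:3, :3].T
    R_error = np.linalg.norm(Rotation.from_matrix(R_rel).as_rotvec())
    return t_error, R_error
```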
Mapping starting end-effector errors to calibration errors. To map starting end-effector position errors to translation errors in extrinsic calibration, we fit a first-order bivariate spline* to map end-effector-to-camera distances and end-effector position errors to corresponding camera calibration translation errors (topmost graph of Figure 6). Similarly, to map starting end-effector position errors to rotation errors in camera calibration, we fit a first-order bivariate spline to map end-effector-to-camera distances and end-effector position errors to corresponding camera calibration rotation errors (third graph of Figure 6).

*We have experimented with fitting higher-order bivariate splines. However, we have found the first-order spline to result in the lowest root mean squared error on a validation set.
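A minimal sketch of this fitting step (our own illustration, assuming SciPy [44]; the data files and array names are hypothetical stand-ins for the experiment logs):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Hypothetical experiment logs: one entry per trial.
distances = np.load("ee_to_camera_distances.npy")     # end-effector-to-camera distance [m]
ee_errors = np.load("ee_position_errors.npy")         # starting end-effector position error [m]
calib_errors = np.load("calib_translation_errors.npy")  # injected calibration error [m]

# First-order (kx = ky = 1) bivariate spline:
# (distance, end-effector error) -> calibration translation error.
spline = SmoothBivariateSpline(distances, ee_errors, calib_errors, kx=1, ky=1)

# Query: calibration error corresponding to a 5 mm end-effector error at 0.6 m.
print(spline.ev(0.6, 0.005))
```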
B.2 Errors in Pose Estimation

The empirical relationship between end-effector position errors and pose estimation errors is derived using a very similar experimental procedure to that outlined above. However, instead of injecting controlled amounts of noise into the camera calibration matrix $T_{RC}$, we now instead inject noise into the relative object pose ${}^{C}T$.

Experimental Procedure. Just like in the previous experiment, we begin by calibrating a head-mounted camera to a Sawyer robot in the real world, obtaining an estimate of the camera pose $T_{RC}$. We then sample a random starting end-effector pose for a demonstration, $T^{\mathrm{Demo}}_{RE}$, and a relative object movement expressed in the camera frame, ${}^{C}T$, using the same procedure as in the previous experiment. After sampling the starting end-effector pose $T^{\mathrm{Demo}}_{RE}$ and relative object pose ${}^{C}T$, we use them and the camera extrinsic matrix $T_{RC}$ to calculate the desired end-effector pose during deployment, $T^{\mathrm{Test}}_{RE}$, using trajectory transfer (see Equations 5 and 8 in the main paper).

Once we compute the desired end-effector pose via trajectory transfer, we either perturb the rotation matrix of the relative object pose ${}^{C}T$, resulting in
$${}^{C}\hat{T} = [\Delta R\, {}^{C}R \mid {}^{C}t],$$
where $\Delta R \in SO(3)$ is a rotation matrix obtained from a randomly sampled rotation vector with a predetermined rotation magnitude, or perturb the translation vector of the relative object pose, resulting in
$${}^{C}\hat{T} = [{}^{C}R \mid {}^{C}t + \Delta t],$$
where $\Delta t \in \mathbb{R}^3$ is a randomly sampled vector with a fixed magnitude.

We then estimate the desired end-effector pose,
$$\hat{T}^{\mathrm{Test}}_{RE} = [\hat{R}^{\mathrm{Test}}_{RE} \mid \hat{t}^{\mathrm{Test}}_{RE}],$$
using trajectory transfer and the noisy relative object pose ${}^{C}\hat{T}$, and calculate the error between the ground truth and estimated end-effector poses, $T^{\mathrm{Test}}_{RE}$ and $\hat{T}^{\mathrm{Test}}_{RE}$, using the same procedure as in the previous experiment. We repeat this for 1000 different randomly sampled relative object poses ${}^{C}T$, for hypothetical demonstrated end-effector poses $T^{\mathrm{Demo}}_{RE}$ with end-effector-to-camera distances ranging from 0.2 m to 1.2 m in increments of 0.1 m.

We present the results from this experiment in Figure 7. Just like in Figure 6, the first two graphs focus on translation errors in pose estimation and their impact on trajectory transfer, and the bottom two graphs focus on rotation errors in pose estimation and their influence on trajectory transfer.

Interesting Findings. The top graph of Figure 7 reveals that translation errors in pose estimates lead to equal errors in trajectory transfer. Additionally, from the second graph of Figure 7 we find that translation errors in pose estimates do not affect rotation errors in trajectory transfer, which aligns with our expectations.

Moving on to rotation errors in pose estimation (third graph of Figure 7), we notice that the translation error in trajectory transfer depends not only on the rotation error but also on the distance between the end-effector and the camera. This relationship is expected since rotations occur around the camera frame, and the resulting translation induced by an error in rotation is proportional to the distance from the frame of rotation. Furthermore, we observe that rotation errors in pose estimates equal rotation errors in trajectory transfer (bottom graph of Figure 7). Finally, we note that the errors in trajectory transfer induced by errors in pose estimation are far greater in magnitude than those induced by errors in calibration.

Figure 7: The relationship between the end-effector-to-camera distance, the magnitude of the error in pose estimation, and the error in trajectory transfer (i.e. the error in the calculated starting pose of the end-effector prior to replaying a demonstration during deployment).

Mapping starting end-effector errors to pose estimation errors. To map starting end-effector position errors to translation errors in pose estimation, we fit a first-order bivariate spline* to map end-effector-to-camera distances and end-effector position errors to corresponding pose estimation translation errors (topmost graph of Figure 7). Similarly, to map starting end-effector position errors to rotation errors in pose estimation, we fit a first-order bivariate spline to map end-effector-to-camera distances and end-effector position errors to corresponding pose estimation rotation errors (third graph of Figure 7).

*We attempted fitting higher-order splines but found the first-order spline to result in the lowest root mean squared error on a validation set.

Appendix C Unseen Object Pose Estimation Baselines

C.1 Iterative Closest Point

We use the Open3D [39] implementation of point-to-point ICP [40] to directly estimate ${}^{C}T$. We set the maximum distance between correspondences to 10 cm, the maximum number of ICP iterations to 10, and allow ICP to try as many random initialisations as possible within 5 seconds. This typically resulted in ICP trying approximately 100 random initialisations. We have tried increasing the maximum number of ICP iterations, but observed that it is better to try more random initialisations than to have more ICP iterations per initialisation.

To initialise ICP, we first sample a random rotation around the z-axis in the robot frame, and map this rotation to the camera frame using the camera extrinsic matrix. Sampling rotations in this way exploits the prior knowledge that objects are translated in 3DoF while being rotated only around the robot's z-axis between the demonstration and deployment, which is the case for the majority of manipulation tasks. To obtain the initialisation for translation, we first rotate the first partial point cloud using the sampled rotation, and then centre the two partial point clouds. Finally, we add Gaussian noise to the translation component with a standard deviation equal to 1 cm.
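A condensed sketch of this baseline using the Open3D [39] registration API (our own code, not the paper's implementation; in particular, selecting the best initialisation by fitness is our assumption):

```python
import time
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

def icp_with_random_z_inits(src, tgt, T_RC, budget_s=5.0):
    """Point-to-point ICP with random robot-frame z-axis initialisations.

    src, tgt: open3d.geometry.PointCloud partial views; T_RC: camera extrinsics.
    """
    best = None
    deadline = time.time() + budget_s
    while time.time() < deadline:
        # Random rotation about the robot z-axis, mapped into the camera frame.
        Rz = Rotation.from_euler("z", np.random.uniform(-np.pi, np.pi)).as_matrix()
        R_init = T_RC[:3, :3].T @ Rz @ T_RC[:3, :3]
        # Translation init: rotate the source, align the centroids, add 1 cm noise.
        src_pts = np.asarray(src.points)
        t_init = (np.asarray(tgt.points).mean(0) - (R_init @ src_pts.T).T.mean(0)
                  + np.random.normal(scale=0.01, size=3))
        T_init = np.eye(4)
        T_init[:3, :3], T_init[:3, 3] = R_init, t_init

        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_correspondence_distance=0.10, init=T_init,
            estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
            criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=10))
        if best is None or result.fitness > best.fitness:
            best = result
    return best.transformation
```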
C.2 Correspondence-Estimation-Based Methods

We explore four different methods for estimating correspondences:

DINO: We use Deep ViT features [43] to establish correspondences between two RGB images.

GMFlow: We use the pre-trained GMFlow [41] model to predict optical flow, which is used to establish correspondences between two RGB images.

ASpanFormer: We use the pre-trained ASpanFormer [19] model to directly predict correspondences between two RGB images.

ASpanFormer (FT): We fine-tune the pre-trained ASpanFormer model using an object-centric dataset generated in simulation (see Appendix D.1), and use it to directly predict correspondences.

After establishing correspondences using any of these methods, we apply a filtering step to remove outliers. To accomplish this, we leverage the Universal Sample Consensus (USAC) algorithm [48], which is an extension and generalisation of the Random Sample Consensus (RANSAC) algorithm [49]. Specifically, we utilise the OpenCV [50] implementation of USAC and set the RANSAC reprojection threshold to 5 pixels.

Once correspondences are established and outliers have been removed, all methods rely on Singular Value Decomposition (SVD) [42] to predict the relative object pose ${}^{C}T$ using the correspondences and their depth values (see the sketch after Section C.2.3 below).

C.2.1 DINO

Our implementation of the DINO correspondence estimator begins by cropping segmented RGB images around their segmentation masks, and resizing them so that their longest side measures 224 pixels, while maintaining their original aspect ratio. Next, we extract DINO features from both segmented image crops using the pre-trained dino vit8 model [43] with a stride of 4. To establish correspondences, we compute the cosine similarity between the descriptors of all patches and employ the Hungarian Algorithm [51]. Once correspondences are established, we discard all correspondences with a cosine similarity lower than 0.1.

C.2.2 GMFlow

Our implementation of the GMFlow correspondence estimator begins by cropping the two (non-segmented) RGB images around their segmentation masks, and resizing them so that the width of the wider image measures 128 pixels, while preserving the original aspect ratio of both images. Then, the pre-trained GMFlow [41] model is used to predict the optical flow between the two image crops, which we segment using the segmentation mask and map to correspondences.

C.2.3 ASpanFormer

ASpanFormer [19] is a Transformer-based detector-free model for correspondence estimation. It uses estimated flow maps to adaptively determine the size of the regions within which to perform attention. The latter is done via their proposed Global-Local Attention (GLA) block, which allows them to achieve state-of-the-art performance on a variety of matching benchmarks.

In this project, we use the pre-trained indoor model [19] that has been open-sourced by the authors of the paper. The correspondence estimation pipeline begins by cropping segmented RGB images around the segmentation masks. Both cropped images are then resized so that their longer side measures 320 pixels, while preserving the original aspect ratio. For compatibility with the pre-trained model, the shorter side is then padded with zeros, resulting in 320×320 images. Both resized RGB crops are then converted to grayscale images, which are then passed directly as input to the pre-trained model.
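The shared SVD step referenced in Section C.2 computes a rigid transform from 3D–3D correspondences via the least-squares fitting method of Arun et al. [42]. A minimal sketch of ours (inputs are the matched, outlier-filtered 3D points of the two views):

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares R, t such that Q ≈ R @ P + t, for Nx3 matched point sets [42]."""
    p_bar, q_bar = P.mean(0), Q.mean(0)
    H = (P - p_bar).T @ (Q - q_bar)  # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```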
C.2.4 ASpanFormer (FT)

We further fine-tune the pre-trained weights of the ASpanFormer model on an object-centric dataset which we have generated in simulation (see Appendix D.1). We have chosen to do this, as fine-tuning on an object-centric dataset should allow the model to be more in-distribution when dealing with a robot manipulation setting, compared to the original ASpanFormer model that was trained on feature-rich scenes with multiple objects.

C.3 Relative Orientation Estimation Methods

We consider three different methods for predicting the relative object orientation, around the robot's z-axis, from a pair of RGB-D images. These methods include NOPE [18], a recently proposed unseen object relative orientation estimator, and PointNet++ [52]-based regression and classification models, which we have trained on an object-centric dataset generated in simulation (see Appendix D.1), relying on domain randomisation and data augmentation techniques for sim-to-real transfer [53].

We note that we have tried using NOPE, and training both the regression and classification models, to predict full 3DoF relative orientations, but found this to be not very accurate, leading to a poor performance of the final implementation in the real world. Hence, we leverage the prior knowledge that objects are going to be rotated only around the robot's z-axis between the demonstration and deployment to make the problem tractable.

Once a model predicts the relative object orientation, we use a heuristic that applies this rotation to the first partial point cloud and then centres the two partial point clouds to predict the relative translation. We have also experimented with training another PointNet++ [52] for learning a residual correction to this heuristic but did not find this to bring significant improvements.

C.3.1 NOPE

NOPE is a recently proposed unseen object relative orientation estimator. We use the pre-trained weights provided by the author, trained on 1000 random object instances from each of the following 13 categories from the ShapeNet dataset [54]: airplane, bench, cabinet, car, chair, display, lamp, loudspeaker, rifle, sofa, table, telephone, vessel. During training, NOPE trains a U-Net to predict the embedding of a novel view of an object, given a reference image and a relative pose. Then, at inference, it first takes as input a support image of an object and predicts the embedding of that object under many relative orientations, effectively creating templates for template matching. Then, given a query image of that same object, NOPE first computes its embedding and then finds the embedding's distance to all the templates, giving a distribution over the possible relative orientations between the query and the support image. The predicted orientation corresponds to the most similar template.

To use NOPE to only predict the relative orientation around the robot's z-axis, we only sample rotation matrices that correspond to relative orientations around the robot's z-axis (which has the same direction as the object's z-axis). To be specific, we create 90 templates corresponding to rotations ranging from −44.5° to 44.5°, spaced 1° apart. NOPE then encodes all of the templates as well as the query object orientation, and selects the rotation whose encoding is most similar to that of the query according to the root mean squared error. Once we have the orientation predicted by NOPE, we use the heuristic described at the beginning of this subsection to estimate the relative translation, completing the process of pose estimation using NOPE.
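A sketch of this template-selection step (our own illustration; the embedding arrays stand in for NOPE's network outputs, which we do not reproduce here):

```python
import numpy as np

# 90 candidate z-axis rotations: -44.5°, -43.5°, ..., 44.5° (1° apart).
ANGLES_DEG = np.arange(-44.5, 45.0, 1.0)

def predict_z_rotation(template_embeddings, query_embedding):
    """Select the template rotation whose embedding best matches the query (RMSE).

    template_embeddings: array of shape (90, ...), one embedding per template.
    query_embedding: a single embedding of the same per-template shape.
    """
    diff = (template_embeddings - query_embedding).reshape(len(ANGLES_DEG), -1)
    rmse = np.sqrt((diff ** 2).mean(axis=1))
    return ANGLES_DEG[np.argmin(rmse)]
```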
C.3.2 Regression

We implement both the regression and the classification baselines to compare simpler approaches trained on an object-centric dataset (see Appendix D.1) to more sophisticated baselines trained on more general datasets, such as DINO, GMFlow and ASpanFormer. To this end, we implement a Siamese PointNet++ [52] encoder made of three set-abstraction layers and three linear layers, for a total of 1.8M parameters. The encoder independently encodes the point clouds of a target object obtained from the demonstration and the test scene. The two output embeddings are then concatenated into a single 512-dimensional vector that we feed as input into a 3-layer perceptron to fuse the information together and regress the object's rotation magnitude around the robot's z-axis. More specifically, the network returns a normalised rotation, where −1 and 1 correspond to −45° and 45° respectively. This multilayer perceptron is composed of layers with 256, 128 and 1 hidden nodes respectively, summing up to 0.2M parameters, for a total model size of 2M parameters. The model was trained with the ADAM [55] optimiser and the mean squared error loss. A sketch of this architecture is given after Section C.3.3 below.

The input point clouds to the Siamese PointNet++ are expressed in the robot's frame, have a zero mean, and are downsampled to 2048 points. The features of each of the points include the point position and colour. Since point positions are used as point coordinates in the PointNet++ architecture, including them as additional features may seem like including redundant information. However, we have found that doing so improves performance in practice. The most likely reason for this is that PointNet++ uses point coordinates to cluster points and aggregates features for the different clusters. Without including point positions as features, this information would not be explicitly used to derive the global point cloud feature vector.

C.3.3 Classification

This baseline is equivalent to the regression method described above, with the exception that the last layer of the 3-layer perceptron does not regress the rotation but instead outputs a probability distribution over 90 possible classes, where each class represents an angle between −44.5° and 44.5°, equally spaced 1° apart. This model has 2M parameters and was trained with the ADAM [55] optimiser using the binary cross entropy loss.
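A structural sketch of the Siamese model (our own PyTorch code; `encoder` is a placeholder for the PointNet++ [52] set-abstraction encoder described above, and the Tanh squashing to the normalised range is our assumption):

```python
import torch
import torch.nn as nn

class SiameseZRotationRegressor(nn.Module):
    """Two point clouds -> normalised z-rotation in [-1, 1] (i.e. [-45°, 45°])."""

    def __init__(self, encoder: nn.Module, embed_dim: int = 256):
        super().__init__()
        self.encoder = encoder  # shared weights: applied to both point clouds
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Tanh(),  # classification variant: Linear(128, 90)
        )

    def forward(self, demo_points, test_points):
        # demo_points / test_points: (B, 2048, 6) point positions + colours.
        z = torch.cat([self.encoder(demo_points), self.encoder(test_points)], dim=-1)
        return self.head(z).squeeze(-1)
```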
Appendix D Datasets

We begin this section by describing the object-centric dataset we have generated to train the regression and classification models, and to fine-tune the ASpanFormer model (Appendix D.1). We then describe the dataset we have generated to benchmark all considered unseen object pose estimation methods in simulation (Appendix D.2). Finally, we describe the real-world dataset we used to determine when to stop training the regression and classification models to bridge the sim-to-real gap (Appendix D.3).

D.1 Training Dataset

The object-centric dataset used to train the regression and classification models, and to fine-tune the ASpanFormer model, was generated using Blender [38] and consists of 226K image pairs. For each image pair, we first create a scene and import a random object from either the ShapeNet dataset [54] or the Google Scanned Objects dataset [37]. The object is imported at a random pose and there is a 90% probability that its texture will also be randomised, by changing its colour and material properties, such as reflectivity. We then randomise the number, position, energy and strength of external light sources and render an RGB-D image and segmentation mask of the object.

After that, we perturb the object by rotating it around the world's z-axis (perpendicular to the floor) by a maximum of 45°, either clockwise or counterclockwise, and randomly displacing it somewhere within the visible scene. Finally, we again randomise the position, energy and strength of the external light sources, and render another RGB-D image and segmentation mask. We conclude by recording the relative object pose between the two scenes. Additional data generation details are summarised in Table 3 and some examples of generated image pairs can be seen in Figure 8.

Characteristic: Randomisation

Camera Extrinsic: In simulation, we set the camera extrinsic matrix equal to that of our real-world setup. However, to account for possible calibration errors, for each scene in simulation we randomly perturb the extrinsic matrix by a maximum of 1 cm in translation and 2° in orientation.

Object Type: Objects are chosen from ShapeNet with a probability of 85% and from Google Scanned Objects with 15% probability. Within these two families, the objects are sampled uniformly, but avoiding categories that were excessively out of distribution for object manipulation, such as airplanes, pistols or watercraft.

Object Position: The object position is randomised by an arbitrary magnitude as long as the object remains fully visible to the camera. We use rejection sampling to ensure this.

Object Orientation: Once a random object pose is generated to capture the first image, the object's orientation is changed by a random rotation between −45° and 45° around the world's z-axis.

Object Texture: The appearance of each different component of the object's model gets randomised with a 10% probability. In particular, if randomised, in 20% of cases the component's colour is set to be monochromatic, and the colour could be any of the following, according to these probabilities: normal (35%), dark (15%), very dark (5%), bright (15%), very bright (5%), pale (15%), dark pale (5%), bright pale (5%). If not monochromatic, with 20% probability the randomisation is applied as a texture, where the latter is sampled uniformly from either the Haven [56] or MIL [11] texture datasets. Additionally, the material properties get randomised as well. More specifically, with 50% probability the roughness of the material gets changed, with 30% probability the metallic properties get altered, with 30% probability the specularity properties, with 30% probability the material's anisotropy, with 15% probability its sheen, and finally with 5% probability the clearcoat property of the material is varied.

Lighting: When generating a scene, the lighting conditions get randomised.
There are five main modes the light can be in, whose parameters get further randomised. More specifically, the five modalities with their probabilities are: mostly ambient (5%), strong top shadow (30%), generic shadow (30%), very bright (5%), very dim (30%). Once the modality has been sampled, aside from the "strong top shadow" mode, either one or two light sources get placed in the scene. The light location, energy and ambient strength then get randomised for each of the light sources independently, creating very diverse lighting conditions.

Table 3: Detailed explanation of the various randomisation strategies applied for the process of data generation.

Figure 8: Eight examples of image pairs generated in simulation. Each row contains two image pairs, showing the same object with a random texture and colour. The object position and orientation, as well as lighting conditions, are randomised between the two images. The background is not randomised as the objects will later be segmented out, making the choice of background irrelevant.

D.2 Evaluation Dataset

In Section 4.2 of the main paper, we evaluate the performance of eight different one-shot unseen object pose estimation methods in simulation. To evaluate these methods, we generated a dataset separate from that used for training, using objects that the ASpanFormer, regression and classification models were not trained on. We divided these objects into five categories, briefly explained hereafter. Examples for each of the categories can be found in Figure 9.

1. Non-Symmetrical (Non-Sym.) are objects that are not symmetric around any of their axes. This category has 25 objects.
2. ∞-Symmetrical (∞-Sym.) are objects that have an infinite degree of symmetry around their z-axis. This category has 10 objects.
3. ∞-Symmetrical Geometry (∞-Sym. Geo.) are objects whose geometry has an infinite degree of symmetry around the object's z-axis, but which have a non-symmetric texture.
4. N-Symmetrical (N-Sym.) are objects which have a finite degree of symmetry around their z-axis. This category has 10 objects. For instance, a cube has a rotation symmetry of order 4 around its z-axis.
5. N-Symmetrical Geometry (N-Sym. Geo.) are objects which have a non-symmetrical texture but whose geometry has a finite degree of symmetry around the object's z-axis. This category has 5 objects.

Examples of the objects that can be found in each category are the following. 1) Non-Symmetrical: kettles, mugs, shoes, caps, etc. 2) ∞-Symmetrical: ramekins, bowls, vases, cups, etc. 3) ∞-Symmetrical Geometry: cans, tape, cylindrical medicine packages, etc. 4) N-Symmetrical: square plates, square bowls, boxes, chests, etc. 5) Lastly, N-Symmetrical Geometry: non-uniform boxes, sponges, cylindrical speakers, etc.

Overall, for each different object, we generated 20 image pairs, resulting in a total dataset size of 1,100 image pairs.

Figure 9: Examples of image pairs generated for the evaluation dataset. The top row shows two image pairs for non-symmetric objects. The second row shows image pairs for (left) an ∞-Symmetrical object and (right) an ∞-Symmetrical Geometry object. The bottom row shows image pairs for (left) an N-Symmetrical object and (right) an N-Symmetrical Geometry object.

D.3 Real-World Validation Dataset

We collect a real-world validation dataset to obtain a criterion for early stopping when training the regression and classification models.
To this end, we collect 71 image pairs of 7 different everyday objects, including two different toasters, one pan, one pot, a wooden box, a small black box and a rigid plastic container. We collect the images mimicking the data generation process in simulation. We first place an object on the table along with a removable AprilTag [57], which allows us to initially record the ground truth pose and then remove the tag to collect the desired RGB-D image without the tag being visible. Subsequently, we perturb the object's pose following the same strategy as with the simulated scenes, and we record another pose and RGB-D image. Examples of the collected data can be found in Figure 10. When training a model, we monitor its pose estimation error on this dataset to determine when to stop the training.

Figure 10: The left and right double columns show pairs of images for a wooden box and a red toaster respectively. Within one image pair, to go from one picture to the other, the object has been randomly translated and then randomly rotated. The first row illustrates how the ground truth relative pose has been determined, that is, through the use of AprilTags. These were carefully placed so that they could be hidden without affecting the pose of the object, allowing for the capture of the images shown in the second row.

| Method | Non-Sym. | ∞-Sym. | ∞-Sym. Geo. | N-Sym. | N-Sym. Geo. | Mean |
| Class. | 8.4±14.5 | 1.6±0.8 | 1.9±1.8 | 6.7±12.3 | 3.9±4.8 | 5.9±11.2 |
| ASpan. (FT) | 8.1±16.3 | 3.9±6.4 | 3.2±3.9 | 6.9±13.8 | 2.0±4.7 | 6.0±13.0 |
| Reg. | 13.7±17.8 | 1.5±2.8 | 1.2±0.8 | 12.2±15.3 | 9.7±14.7 | 9.8±15.3 |
| DINO | 15.1±21.5 | 6.8±7.9 | 3.7±3.7 | 13.3±20.2 | 5.9±13.3 | 11.3±18.2 |
| ASpan. | 15.3±21.5 | 7.2±8.7 | 5.5±6.4 | 12.6±16.9 | 6.5±11.0 | 11.5±17.4 |
| NOPE | 25.3±16.9 | 1.8±3.9 | 1.9±1.7 | 24.5±15.4 | 23.1±15.8 | 18.8±17.4 |
| ICP | 22.6±40.1 | 3.7±6.2 | 3.8±14.2 | 9.4±17.3 | 14.3±29.3 | 14.3±30.8 |
| GMFlow | 30.9±23.7 | 21.7±26.1 | 20.9±15.5 | 30.0±26.2 | 31.5±25.2 | 28.4±24.6 |
| Mean | 16.6 | 4.4 | 3.9 | 13.5 | 10.3 | |

Table 4: Full results for the simulation benchmarking experiments regarding translation errors in pose estimation, expressed in centimetres (mean ± standard deviation).

| Method | Non-Sym. | ∞-Sym. | ∞-Sym. Geo. | N-Sym. | N-Sym. Geo. | Mean |
| Class. | 6.7±13.1 | 0.2±0.1 | 0.4±1.4 | 5.4±10.7 | 2.1±6.3 | 4.3±10.1 |
| ASpan. (FT) | 8.6±17.1 | 3.3±5.8 | 2.6±3.1 | 6.7±13.3 | 2.2±6.2 | 6.1±13.4 |
| Reg. | 13.7±18.0 | 0.7±2.9 | 0.3±0.1 | 11.4±13.9 | 9.3±14.1 | 9.4±15.3 |
| DINO | 16.4±22.9 | 5.7±7.0 | 3.1±3.0 | 13.2±19.6 | 5.8±12.4 | 11.6±18.9 |
| ASpan. | 16.8±24.2 | 5.7±6.7 | 4.6±5.4 | 11.5±15.5 | 6.5±11.1 | 11.7±18.8 |
| NOPE | 25.4±16.1 | 0.9±3.9 | 0.4±1.4 | 23.8±14.4 | 23.2±15.7 | 18.4±17.2 |
| ICP | 24.1±43.4 | 2.6±3.9 | 3.2±11.8 | 8.4±16.0 | 13.1±25.9 | 14.4±32.4 |
| GMFlow | 33.3±27.7 | 17.0±18.9 | 17.9±13.2 | 28.1±22.3 | 27.8±21.0 | 27.6±24.5 |
| Mean | 16.4 | 4.2 | 3.7 | 13.3 | 10.05 | |

Table 5: Full results for the simulation benchmarking experiments regarding rotation errors in pose estimation, expressed in degrees (mean ± standard deviation).

Appendix E Benchmarking One-Shot Unseen Object Pose Estimators

In Section 4.2 of our main paper, we benchmark the eight unseen object pose estimation baselines introduced in Appendix C on the simulated dataset described in Appendix D.2.

Error Definition. Consider a ground truth relative object pose between a pair of images, ${}^{R}T$, and a pose estimate ${}^{R}\hat{T}$. We begin by calculating the transformation $\Delta T = [\Delta R \mid \Delta t]$ that maps the pose estimate to the ground truth pose, i.e.
$${}^{R}T = \Delta T\, {}^{R}\hat{T} \;\Rightarrow\; \Delta T = {}^{R}T\, \big({}^{R}\hat{T}\big)^{-1}.$$
We then define the translation and rotation errors as
$$t_{\mathrm{error}} = \|\Delta t\|_2, \qquad R_{\mathrm{error}} = \|\log(\Delta R)\|_2,$$
where $\log$ is the logarithmic map for the $SO(3)$ group.

In practice, for object categories with an order of symmetry greater than 1 (see Appendix D.2), there is always more than a single ground truth pose $T$ for each image pair.
In those cases, we independently compute the translation and rotation error between the pose estimate and each of the valid relative object poses, and consider the pair of errors corresponding to the ground truth pose that gives rise to the smallest rotation error.

For objects with an infinite degree of symmetry around the z-axis, we create 360 ground truth relative object poses for each image pair. That is, we discretise the possible rotations around the z-axis into 360 bins, and find a relative object pose corresponding to each possible relative rotation. For objects with a degree of symmetry of 4 around the z-axis, we create 4 possible relative object poses per image pair. Finally, for objects with a degree of symmetry of 2 around the z-axis, we create 2 possible relative object poses per image pair.

Results. In Table 4 we show the full results concerning the translation error in pose estimation for the individual object categories discussed in Appendix D.2. Similarly, we do the same for rotation errors in Table 5.

Discussion. From Tables 4 and 5, we can clearly see that, on average, the higher the degree of symmetry of an object, the lower the error in the predictions of all methods. This is intuitive, as the larger the order of symmetry of an object, the larger the possible set of correct relative pose labels. Surprisingly, predicting the rotation for the symmetrical geometry categories has turned out to be easier than for the corresponding categories where the visual textures are symmetric as well. However, as we have only considered 5 and 10 objects each for these categories, these results may not be statistically significant.
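A sketch of this symmetry-aware evaluation (our own code; `pose_errors` is the helper sketched in Appendix B.1, and the candidate list contains the 360, 4, or 2 valid ground truth poses described above):

```python
def symmetry_aware_errors(T_gt_candidates, T_est):
    """Evaluate against every symmetry-equivalent ground truth pose and keep
    the (t_error, R_error) pair with the smallest rotation error."""
    errors = [pose_errors(T_gt, T_est) for T_gt in T_gt_candidates]
    return min(errors, key=lambda e: e[1])
```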
Appendix F Trajectory Transfer Implementation Details

F.1 Incorporating an Inductive Bias

As mentioned in Appendix C.3, when training the regression and classification models, and when using NOPE [18] to predict 3DoF relative orientations, we found high pose estimation errors that compromised real-world performance. Hence, we have chosen to train the regression and classification networks, and to use the pre-trained NOPE model, only to predict the relative object orientation around the robot's z-axis. This design choice is motivated by the fact that for most manipulation tasks, a test object translates in 3DoF while being rotated only around the world's z-axis between the demonstration and deployment, and the world's and robot's z-axes are aligned.

To ensure a fair comparison between regression, classification, NOPE, and the remaining considered baselines, we also incorporate this predictive bias into their predictions. Specifically, consider a relative pose estimate expressed in the robot frame (see Appendix A):
$${}^{R}T = [{}^{R}R \mid {}^{R}t],$$
where ${}^{R}R \in SO(3)$ is the relative orientation prediction and ${}^{R}t \in \mathbb{R}^3$ is the relative translation prediction. To incorporate the inductive bias that the object only rotates around the robot's z-axis into such an estimate, we first convert ${}^{R}R$ to Euler angles, set the rotations around the non-z axes to zero, and convert back to a rotation matrix ${}^{R}\tilde{R}$.

Now, given ${}^{R}\tilde{R}$ and an end-effector pose $T^{\mathrm{Demo}}_{RE}$ that we would like to align with an object at test time via trajectory transfer (see Equation 5 of the main paper), we adjust the translation component of ${}^{R}T$ to account for the modification of ${}^{R}R$ using the equation:
$${}^{R}\tilde{t} = {}^{R}R\, t^{\mathrm{Demo}}_{RE} - {}^{R}\tilde{R}\, t^{\mathrm{Demo}}_{RE} + {}^{R}t,$$
where ${}^{R}\tilde{t}$ is the adjusted relative translation, and $t^{\mathrm{Demo}}_{RE}$ is the translation component of $T^{\mathrm{Demo}}_{RE}$. The intuition behind this equation is that it ensures that the position of the end-effector after trajectory transfer with the modified pose estimate ${}^{R}\tilde{T} = [{}^{R}\tilde{R} \mid {}^{R}\tilde{t}]$ is the same as the position that would have been obtained when using the non-modified pose estimate ${}^{R}T$.

F.2 Aligning the Full Trajectory

In theory, we could use trajectory transfer (see Equation 5 of the main paper) to solve for the full end-effector trajectory aligned with the object at the deployment pose, and could track this trajectory using trajectory tracking. However, in practice, we have found that this resulted in very jerky trajectories. Hence, instead, we use trajectory transfer to only align the first end-effector pose of the demonstration with the deployment scene, and send the robot to that pose using inverse kinematics. From there, we align the full end-effector trajectory with the demonstrated trajectory by replaying the demonstrated end-effector velocities expressed in the end-effector frame. Although the robot does not realise these velocities instantaneously, in practice we have found this to work sufficiently well and to perform better than using a trajectory tracking system.

Appendix G Sensitivity to Non-Geometric Noise

In this section, we focus on trajectory transfer using regression for unseen object pose estimation, which was the best-performing method in our real-world experiments (see Section 4.3 of our main paper), and analyse its sensitivity to distractors and to changes in lighting conditions and backgrounds.

G.1 Sensitivity to Changes in Lighting Conditions

To investigate the robustness of trajectory transfer to changes in lighting conditions, we follow the same experimental procedure as described in Section 4.3 of our main paper. That is, we divide a 30×75 cm region on a table in front of the robot into ten quadrants measuring 15×15 cm, and use the demonstrations collected when carrying out the main experiment to facilitate a direct comparison to the remaining results presented in the main paper.

At test time, for each quadrant, we randomly perturb the position, luminosity and colour temperature of an external LED light source (see Figure 11 for examples), and randomly place the test object
within that quadrant with a random orientation within ±45° of the demonstrated orientation. This results in ten evaluations per task.

Figure 11: Examples of how the position, luminosity and colour temperature of an external LED light source have been randomly perturbed between different evaluations, to study the robustness of trajectory transfer using regression for unseen object pose estimation to changes in lighting conditions.

| Condition | Plug | Pot | Toaster | Dishwasher | Mug | Egg | Bottle | Tea | Bowls | Can | Mean |
| Fixed Lighting | 20 | 30 | 90 | 70 | 100 | 90 | 100 | 100 | 80 | 100 | 78 |
| Changes in Lighting | 10 | 20 | 60 | 70 | 100 | 70 | 80 | 100 | 90 | 100 | 70 |

Table 6: Real-world success rates (%) of TT (Reg.) averaged over ten trials under fixed lighting conditions and under changes in lighting conditions.

The results from this experiment are shown in Table 6. For reference, this table also includes the results for trajectory transfer using regression under fixed lighting conditions from our main experiment, which is described in Section 4.3 of our main paper. As these results illustrate, trajectory transfer using regression displays a strong performance as lighting conditions are randomised between the demonstration and test scene, with an average decrease in performance of only 8%. We attribute the strong performance of this baseline under changes in lighting conditions to the fact that the dataset used to train it randomises lighting conditions between the two input images, alongside the hue, value and saturation of the two images, as part of domain randomisation (see Appendices C.3 and D.1).

G.2 Sensitivity to Distractors and Changes in Background

Our formulation of trajectory transfer using unseen object pose estimation assumes segmented input RGB-D images. Hence, the robustness of the framework to distractors and changes in the background depends only on the segmentation pipeline used, and not on the backbone pose estimator itself. In our implementation, we use a combination of OWL-ViT [45] and SAM [46] for the segmentation pipeline. That is, we first query OWL-ViT for a bounding box of the test object using the language prompt "An image of a X", where X is the category of the considered test object (e.g. can, mug, toaster, plug, etc.). We then crop the RGB-D image using the output bounding box and pass the cropped RGB image to SAM to obtain a segmentation mask of the target object. With this pipeline, we have observed consistently good performance even under clutter. Examples of segmentations of the toaster in three different cluttered scenes with three different backgrounds can be seen in Figure 12.

Figure 12: Example of the combined performance of OWL-ViT and SAM when segmenting a toaster in cluttered scenes with different backgrounds. By isolating only the object of interest, the chosen unseen object pose estimator is unaffected by the mentioned changes.
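A condensed sketch of this two-stage segmentation pipeline (our own code against the Hugging Face transformers OWL-ViT interface and the segment-anything package; the checkpoint names, paths and thresholds are illustrative assumptions, not the paper's configuration):

```python
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
from segment_anything import sam_model_registry, SamPredictor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
detector = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical path
predictor = SamPredictor(sam)

def segment_object(image, category):
    """image: HxWx3 uint8 RGB array; category: e.g. 'toaster'."""
    inputs = processor(text=[[f"An image of a {category}"]], images=image,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    h, w = image.shape[:2]
    results = processor.post_process_object_detection(
        outputs, threshold=0.1, target_sizes=torch.tensor([[h, w]]))[0]
    box = results["boxes"][results["scores"].argmax()].numpy()  # [x0, y0, x1, y1]

    predictor.set_image(image)
    masks, scores, _ = predictor.predict(box=box)
    return masks[scores.argmax()], box
```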
pLCQkMojXI | Rearrangement Planning for General Part Assembly

Yulong Li¹  Andy Zeng²  Shuran Song¹
¹Columbia University  ²Google Deepmind
https://general-part-assembly.github.io/

Abstract: Most successes in autonomous robotic assembly have been restricted to a single target or category. We propose to investigate general part assembly, the task of creating novel target assemblies with unseen part shapes. As a fundamental step towards a general part assembly system, we tackle the task of determining the precise poses of the parts in the target assembly, which we term "rearrangement planning". We present General Part Assembly Transformer (GPAT), a transformer-based model architecture that accurately predicts part poses by inferring how each part shape corresponds to the target shape. Our experiments on both 3D CAD models and real-world scans demonstrate GPAT's generalization abilities to novel and diverse target and part shapes.

1 Introduction

Figure 1: General Part Assembly. We seek to build autonomous robotic systems that can assemble a novel target with previously unseen parts. The visualizations are actual inputs and predictions of our model.

The ability to assemble new objects is a hallmark of visuo-spatial reasoning. With the mental image of a novel target shape, one can arrange possibly unseen parts at hand to create a resembling assembly, either building an alien spaceship with lego blocks or a rain shelter with stones. Building autonomous robotic systems that exhibit these capabilities may give rise to a wide range of robotics applications, from autonomously assembling new objects in a manufacturing plant to building shelter in disaster response scenarios.

Despite the interest and progress in part assembly, existing methods tend to focus on specialized part assembly consisting of fixed targets [1, 2] or seen categories [3, 4]. We propose to instead investigate the task of general part assembly, which takes in as inputs both a target shape and a variable set of part shapes to build an assembly resembling the target. Instead of restricting to fixed objects or categories, we require the robotic system to generalize to novel target shapes without additional annotation or supervision. Moreover, the available parts are not guaranteed to be carefully manufactured, and the robotic system has to use parts of slightly differing shapes, the non-exact parts, e.g., building a table with rectangular blocks given a round table as the target.

The task of general part assembly is an extension of specialized part assembly that focuses on fixed targets or categories. For fixed-target assembly, the target shape information is implicitly provided to the agent. A general part assembly agent can also solve category-level part assembly by taking in as input a single instance of the category, while a typical learning method is trained on a large number of instances from the category [3, 4]. For example, given a single table instance, a general part assembly agent can assemble tables with either a rectangular tabletop or a round tabletop.

In this work, we focus on the initial perception and planning module for general part assembly, which outputs the precise poses of the parts in the target assembly. Our key insight is to formulate this module as a goal-conditioned shape rearrangement problem, whereby the target can be viewed as a desired 3D shape layout.
Consequently, we term this module "rearrangement planning", which aligns with established definitions previously proposed by the Task and Motion Planning (TAMP) community [5]. With this insight, the module factorizes into two steps: predict a segmentation of the target, where each segment corresponds to a part, and infer the pose of each part with pose estimation. To predict an accurate segmentation of the target, a key challenge is to deal with the ambiguities in the target shape due to geometrically equivalent parts (e.g., legs of a table). To infer accurate segmentations, we propose General Part Assembly Transformer (GPAT), a transformer-based model architecture that processes input shapes in a fine-to-coarse manner, thereby ensuring consistent segmentation results.

To train and evaluate our model, we build a benchmark based on PartNet [6], a large-scale dataset of 3D objects with part information. We programmatically generate primitive part shapes as non-exact parts. We demonstrate that GPAT generalizes well to entirely new target structures at random orientations and to novel parts that are non-exact matches, on both synthetic and real-world data.

In summary, our primary contributions are three-fold:

• We propose the task of general part assembly to study the ability of building novel targets with unseen parts, and create a benchmark based on PartNet [6].
• We tackle the planning problem for general part assembly as a goal-conditioned shape rearrangement problem – treating part assembly as an "open-vocabulary" (i.e., vocabulary of parts) target object segmentation task.
• We introduce General Part Assembly Transformer (GPAT) for assembly planning, which can be trained to generalize to novel and diverse target and part shapes.

We believe that GPAT is an exciting step for general part assembly – we discuss both its capabilities and limitations in the report.

2 Related Work

Specialized Part Assembly. A number of learning-based methods have been proposed for part assembly, but they usually have limited generalization abilities, so we refer to them as specialized part assembly. Reinforcement learning (RL) has seen success in part assembly for fixed targets [1, 2, 7, 8] or seen categories [9]. These methods require costly trial-and-error in real-world or physics-based environments to extend to novel targets, and often require low-level state information during training and testing. Another line of work directly works with visual perception and learns shape correspondences, which has seen success in tasks like kit assembly [10, 11] and shape mating [12], but these methods have not tackled part assembly, which involves more complex and diverse targets and parts. Previously, part assembly with category-level generalization was tackled with models based on graph neural network (GNN) backbones [3, 4, 13]. Notably, Li et al. [3] and our method share the same high-level idea of segmenting the target shape, even though their targets are represented as images. Additionally, Funk et al. [14] proposed a full robotic system based on GNN and RL to assemble arbitrary target blueprints with rectangular blocks. In this work, we propose to tackle general part assembly with novel and semantically grounded target and part shapes.

Part Assembly with Object Models. Physics-based part assembly assumes precise models, and the goal poses of the parts are explicitly given or implicitly derived. However, these requirements hinder quick generalization to novel targets and parts.
We directly work with visual perception, the 3D point clouds, to predict the precise poses of the parts. With our prediction, one may apply physics-based assembly sequence planning [15, 16, 17, 18] and path planning [19, 20, 21, 22] to obtain a complete assembly plan.

Point cloud Registration. Point cloud registration estimates the transformation matrix between two point clouds from different views of the same 3D scene. It is traditionally solved by optimization-based methods and recently by learning-based methods [23]. If we represent target and part shapes as point clouds, general part assembly is akin to point cloud registration, but it crucially demands optimizing the part poses concurrently. As demonstrated in Sec. 4, basic alterations to point cloud registration methods can't directly address general part assembly.

3 Approach

Figure 2: Method Overview.

Given a target point cloud $\mathcal{T}$ and part point clouds $\{\mathcal{P}_i\}_{i=1}^{N}$ as inputs, where $N$ denotes the number of input parts and varies for different shapes, the goal of our task is to predict a 6-DoF part pose $q_i \in SE(3)$ for each input part $\mathcal{P}_i$, forming a final part assembly, $\mathcal{P} = \bigcup_{i=1}^{N} q_i(\mathcal{P}_i)$, where $q_i(\mathcal{P}_i)$ denotes the transformed part point cloud. To tackle this problem, we propose to solve part assembly in two steps: target segmentation (Sec. 3.1) – which utilizes General Part Assembly Transformer (Sec. 3.2) to decompose the target into disjoint segments, each representing a transformed part – and pose estimation (Sec. 3.3) to obtain the final part poses.

3.1 Part Assembly by Target Segmentation

Given a target point cloud $\mathcal{T}$ and part point clouds $\{\mathcal{P}_i\}_{i=1}^{N}$ as inputs, we want to segment the target point cloud such that each segment corresponds to a part. From the segmentation, we can infer the part pose with pose estimation, which is a transformation from the part to the corresponding segment.

Formally, we want to predict a set of disjoint segments $\{\mathcal{T}_i\}_{i=1}^{N}$ such that $\bigcup_{i=1}^{N} \mathcal{T}_i = \mathcal{T}$ and $\mathcal{T}_i \cap \mathcal{T}_j = \emptyset$ for $i \neq j$. Further, $\{\mathcal{T}_i, \mathcal{P}_i\}_{i=1}^{N}$ represents a bipartite matching between the segments and the parts. Note that $\mathcal{T}_i = \emptyset$ may be empty, suggesting that the part $\mathcal{P}_i$ is not used in the assembly. The goal of target segmentation is to maximize the geometric resemblance for each pair with non-empty $\mathcal{T}_i$, as defined by the following minimization problem:
$$\{\mathcal{T}_i\}_{i=1}^{N} = \arg\min_{\mathcal{T}_1, \ldots, \mathcal{T}_N} \sum_{i=1,\, \mathcal{T}_i \neq \emptyset}^{N} \min_{q_i} \mathrm{dist}(\mathcal{T}_i, q_i(\mathcal{P}_i)),$$
where $\mathrm{dist}$ is some distance metric for point clouds (e.g., chamfer distance). This may appear to be a detour since the goal of the task is the transformation $q_i$, which is implicitly optimized over. Nevertheless, a model can approximate $\min_{q_i} \mathrm{dist}(\mathcal{T}_i, q_i(\mathcal{P}_i))$ by learning rotationally invariant representations of point clouds to avoid optimization over part poses. Finally, we can infer a part pose $q_i$ by minimizing $\mathrm{dist}(\mathcal{T}_i, q_i(\mathcal{P}_i))$.

3.2 General Part Assembly Transformer (GPAT)

The input to General Part Assembly Transformer is a target point cloud and a set of part point clouds, and therefore model backbones designed for a single point cloud are insufficient for the task. GPAT uses PointNet [24] to extract initial features for the point clouds, and then leverages a transformer-based architecture [25] to jointly optimize over the target shape and the part shapes. To predict accurate target segmentation, a key challenge is the ambiguities in the target shape (e.g., the four legs of a chair are interchangeable).
As a result, a target shape often admits multiple ground-truth segmentations, and a fine-grained and consistent segmentation of the target is required for successful assembly. In light of this, we design the GPAT layer to fully exploit the spatial structure of the target point cloud in a fine-to-coarse manner, which is inspired by the increasing receptive field of convolutional neural networks [26] and progress in hierarchical feature learning for point clouds [27, 28].

More formally, let the hidden dimension for features be $h$. For a query feature $q \in \mathbb{R}^h$ and a set of $k$ key features $K \in \mathbb{R}^{k \times h}$, we denote the dot-product attention operator as
$$\mathrm{Attention}(q, K) = W_v(K)^T\, \mathrm{softmax}\!\left(\frac{W_k(K) \cdot W_q(q)}{\sqrt{h}}\right),$$
where $W_q$, $W_k$, $W_v$ are MLPs.

Given a target $\mathcal{T}$ and a set of parts $\{\mathcal{P}_i\}_{i=1}^{N}$, GPAT uses PointNet to extract an initial target point feature $v^0_t$ for each point $X_t \in \mathcal{T}$, and an initial part feature $u^0_i$ for each part $\mathcal{P}_i$ (with max pooling). Then the features pass through $L$ GPAT layers. At the $(n+1)$-th GPAT layer, we have a target point feature $v^n_t$ for each point $X_t \in \mathcal{T}$ and a part feature $u^n_i$ for each part $\mathcal{P}_i$. The features are updated in three steps. The first step is multi-scale attention, which is parameterized by a positive integer $k$ and denoted by $\mathrm{MultiScaleAttention}_k$ in Fig. 2. It updates the target point features as follows:
$$v^n_t = \mathrm{Attention}(v^n_t, N_k(v^n_t)),$$
where $N_k(v^n_t)$ is the features of the $k$ nearest neighbors of the target point $X_t$. GPAT gradually increases $k$ to let each point receive global information about the point cloud. The second step is the multi-head attention [25], which updates the part features. In the final step, GPAT applies attention updates between the target point features and the part point cloud features, denoted as CrossAttention in Fig. 2:
$$v^{n+1}_t = \mathrm{Attention}(v^n_t, U^n), \qquad u^{n+1}_i = \mathrm{Attention}(u^n_i, V^n),$$
where $V^n$ denotes all target point features and $U^n$ denotes all part point cloud features. Finally, GPAT models how likely a point $X_t$ is matched with an input part $\mathcal{P}_i$ as
$$P[X_t \in \mathcal{T}_i] = \frac{W_T(v^L_t) \cdot W_P(u^L_i)}{\sum_{j=1}^{N} W_T(v^L_t) \cdot W_P(u^L_j)},$$
where $W_T$ and $W_P$ are MLP projections.

Data Augmentation. In order to generalize to targets at random poses, we augment the dataset by randomly rotating the target point cloud while keeping the order of the points. Thus the ground truth segmentation label is unchanged, which encourages the model to obtain rotationally invariant feature representations for the target. We always preprocess the input parts so that their principal axes are aligned with the world axes. Further, to generalize to unseen categories and non-exact parts, the dataset needs to comprise diverse shapes. We programmatically generate rectangular and spherical primitive shapes of various sizes. For each data sample with exact parts, we construct a new sample with each exact part replaced with the primitive of the most similar size. GPAT is trained with data samples of both exact and non-exact parts. Assemblies with non-exact parts can be found in Fig. 3.

Training and Loss. To supervise GPAT, we use the per-point cross entropy loss between the predicted distribution over all parts and the ground truth label. Denote the ground truth segmentation by $\{\mathcal{T}^{gt}_i\}_{i=1}^{N}$; the loss function is
$$\mathcal{L} = -\sum_{t=1}^{|\mathcal{T}|} \log P[X_t \in \mathcal{T}^{gt}_i].$$
For part assembly, there are often multiple ground-truth labels due to geometric equivalence between parts (e.g., legs of a chair). We enumerate all permutations of labels corresponding to the geometrically equivalent parts and adopt the lowest possible cost.
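A sketch of the attention operator and the k-nearest-neighbor multi-scale step (our own PyTorch code; single linear projections stand in for the MLPs $W_q$, $W_k$, $W_v$ as a simplification):

```python
import torch
import torch.nn as nn

class DotProductAttention(nn.Module):
    """Attention(q, K) = W_v(K)^T softmax(W_k(K) @ W_q(q) / sqrt(h))."""

    def __init__(self, h: int):
        super().__init__()
        self.h = h
        self.Wq = nn.Linear(h, h)
        self.Wk = nn.Linear(h, h)
        self.Wv = nn.Linear(h, h)

    def forward(self, q, K):
        # q: (..., h) query feature; K: (..., k, h) key features.
        logits = (self.Wk(K) @ self.Wq(q).unsqueeze(-1)).squeeze(-1) / self.h ** 0.5
        weights = torch.softmax(logits, dim=-1)              # (..., k)
        return (self.Wv(K) * weights.unsqueeze(-1)).sum(-2)  # (..., h)

def knn_features(points, feats, k):
    """For every target point, gather the features of its k nearest neighbors.

    points: (T, 3) target coordinates; feats: (T, h) current point features.
    """
    dists = torch.cdist(points, points)         # (T, T) pairwise distances
    idx = dists.topk(k, largest=False).indices  # (T, k) neighbor indices
    return feats[idx]                           # (T, k, h)
```

In the full model, $k$ grows across GPAT layers so that later layers attend over a coarser, more global neighborhood.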
3.3 Predicting Part Assembly with Segmentation

Given a set of parts $\{\mathcal{P}_i\}_{i=1}^{N}$ and a segmentation of the target $\{\mathcal{T}_i\}_{i=1}^{N}$, we can find the 6-DoF part pose $q_i \in SE(3)$ for each part with pose estimation. Since the parts in our task are not necessarily exact, we use oriented bounding boxes to estimate part poses, which is simple and robust. For each non-empty $\mathcal{T}_i$, we use principal component analysis to find the oriented bounding boxes of $\mathcal{P}_i$ and $\mathcal{T}_i$ to solve for $q_i$. In practice, we improve the bounding box predictions by filtering the outliers (points that are at least one standard deviation away from the center) in $\mathcal{T}_i$.

Table 1: Quantitative Results and Comparisons. We adopt three metrics: chamfer distance (CD) measured in ‰, part accuracy (PA) measured in %, and success rate (SR) measured in %. Each cell reports CD / PA / SR.

Unseen Instance:
| Method | Canonical Pose, Precise Part | Canonical Pose, Imprecise Part | Random Pose, Precise Part | Random Pose, Imprecise Part |
| Opt | 7.9 / 18.3 / 2.4 | 10.0 / 16.0 / 0.9 | 6.3 / 22.9 / 3.7 | 7.7 / 21.0 / 2.8 |
| Go-ICP | 72.9 / 4.2 / 0.1 | 67.9 / 3.9 / 0.0 | 72.3 / 4.4 / 0.1 | 66.2 / 3.9 / 0.0 |
| GeoTF | 54.3 / 14.5 / 4.1 | 63.4 / 9.5 / 1.6 | 53.7 / 14.9 / 4.2 | 62.6 / 10.1 / 1.9 |
| NSM | 89.8 / 1.3 / 0.0 | 86.3 / 0.9 / 0.0 | 87.1 / 1.4 / 0.0 | 83.1 / 0.9 / 0.0 |
| DGL | 21.5 / 45.4 / 10.9 | 18.0 / 51.6 / 14.4 | 86.7 / 1.1 / 0.0 | 75.5 / 1.6 / 0.2 |
| DGL-aug | 53.4 / 6.7 / 0.6 | 44.3 / 8.1 / 0.5 | 52.3 / 7.0 / 0.6 | 44.0 / 8.7 / 0.2 |
| Reg | 33.6 / 3.1 / 0.3 | 33.5 / 3.2 / 0.5 | 25.6 / 5.0 / 0.2 | 25.7 / 5.8 / 0.5 |
| TF | 11.4 / 47.9 / 16.8 | 11.5 / 45.4 / 14.4 | 9.1 / 57.8 / 21.5 | 9.7 / 54.7 / 18.8 |
| Ours | 7.6 / 61.6 / 23.2 | 7.2 / 64.8 / 26.0 | 7.8 / 60.8 / 21.7 | 7.8 / 64.3 / 26.0 |

Unseen Category:
| Method | Canonical Pose, Precise Part | Canonical Pose, Imprecise Part | Random Pose, Precise Part | Random Pose, Imprecise Part |
| Opt | 5.1 / 23.1 / 5.7 | 5.4 / 21.3 / 4.7 | 4.1 / 31.3 / 6.7 | 5.0 / 28.4 / 4.2 |
| Go-ICP | 49.4 / 2.0 / 0.0 | 42.6 / 2.6 / 0.0 | 45.7 / 2.2 / 0.0 | 39.3 / 2.6 / 0.0 |
| GeoTF | 57.3 / 2.8 / 0.1 | 53.8 / 2.3 / 0.0 | 57.4 / 2.9 / 0.1 | 53.9 / 2.9 / 0.2 |
| NSM | 58.0 / 0.7 / 0.0 | 52.0 / 1.1 / 0.0 | 56.4 / 1.1 / 0.0 | 49.7 / 1.4 / 0.0 |
| DGL | 27.2 / 13.7 / 1.1 | 22.1 / 18.2 / 0.7 | 48.9 / 1.6 / 0.0 | 45.0 / 1.9 / 0.0 |
| DGL-aug | 31.3 / 7.3 / 0.1 | 26.4 / 9.3 / 0.3 | 28.4 / 8.5 / 0.2 | 23.9 / 11.1 / 0.3 |
| Reg | 34.5 / 1.9 / 0.0 | 31.4 / 3.3 / 0.2 | 19.0 / 5.4 / 0.1 | 18.7 / 5.0 / 0.2 |
| TF | 13.5 / 31.8 / 5.1 | 12.3 / 33.3 / 4.9 | 14.1 / 28.0 / 4.2 | 12.2 / 29.9 / 3.7 |
| Ours | 7.1 / 53.4 / 20.1 | 6.6 / 56.3 / 21.7 | 7.6 / 52.2 / 18.8 | 6.9 / 55.6 / 19.8 |

4 Evaluation

Tasks. For both training and quantitative evaluation, we use PartNet [6], a large-scale dataset of 3D objects with fine-grained and instance-level 3D part information. We use chairs, lamps, and faucets for training and hold out tables and displays as novel categories. We deal with the most fine-grained level of PartNet segmentation, and adopt the default train/test split of PartNet, which contains 2463 instances of chairs, 1553 instances of lamps, and 510 instances of faucets. We categorize generalization scenarios across three dimensions.

• Novel target instances or categories: We evaluate on the unseen instances of chairs, lamps, and faucets, and on two novel categories: tables and displays.
• Random target poses: We evaluate on targets at either canonical orientation (as defined in the dataset) or a random orientation uniformly sampled from SO(3).
• Non-exact parts: Besides exact parts from the dataset, we programmatically generated rectangular and spherical blocks as non-exact parts. Sample instances can be found in Fig. 3.

Metrics. For all the tasks, we measure the quality of the predicted assembly with three metrics: chamfer distance, part accuracy, and assembly success rate (a code sketch of these metrics follows this list).

• Chamfer distance (CD): Given two point clouds $A$, $B$, the chamfer distance between $A$ and $B$ is
$$CD(A, B) = \sum_{x \in A} \min_{y \in B} \|x - y\|^2_2 + \sum_{y \in B} \min_{x \in A} \|x - y\|^2_2.$$
We use $CD(\mathcal{T}, \mathcal{P})$ as a metric, abbreviated as CD, where $\mathcal{T}$ is the target point cloud and $\mathcal{P} = \bigcup_{i=1}^{N} q_i(\mathcal{P}_i)$, where $q_i$ is the predicted pose for the $i$-th part.¹
• Part accuracy (PA): Adopted from previous work [4], part accuracy is defined as
$$\frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\!\left( CD(q^{GT}_i(\mathcal{P}_i), q_i(\mathcal{P}_i)) < \tau_p \right),$$
where $q^{GT}_i$ is the ground truth pose of the $i$-th part and $\tau_p = 0.01$. This metric indicates the percentage of the predicted parts that match the GT parts up to a certain threshold measured in chamfer distance. Due to possible geometric equivalence between parts (e.g., the legs of a chair), we enumerate all possible labels of geometrically equivalent parts to obtain different GT poses and take the highest accuracy value.
• Assembly Success Rate (SR): A predicted assembly is considered successful if its part accuracy (PA) is equal to 1. We report the percentage of successful predictions out of all data samples as the assembly success rate (SR).

¹Note that in the previous work [4], "shape chamfer distance" is defined differently, with $\mathcal{T} = \bigcup_{i=1}^{N} q^{GT}_i(\mathcal{P}_i)$. In our tasks, the target is not a union of the given parts, so the values according to our metric are usually larger.
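A sketch of these metrics (our own NumPy code; brute-force nearest neighbors for clarity, and the enumeration over geometrically equivalent part labels is elided):

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric squared chamfer distance between (N,3) and (M,3) point clouds."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise squared dists
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

def part_accuracy(parts_gt, parts_pred, tau=0.01):
    """Fraction of parts whose predicted placement is within tau of the GT placement.

    parts_gt / parts_pred: lists of (N_i, 3) transformed part point clouds,
    i.e. q_i^GT(P_i) and q_i(P_i). Label permutations over geometrically
    equivalent parts are not enumerated here.
    """
    hits = [chamfer_distance(g, p) < tau for g, p in zip(parts_gt, parts_pred)]
    return np.mean(hits)

def success(parts_gt, parts_pred):
    return part_accuracy(parts_gt, parts_pred) == 1.0
```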
3.3 Predicting Part Assembly with Segmentation
Given a set of parts $\{P_i\}_{i=1}^N$ and a segmentation of the target $\{\mathcal{T}_i\}_{i=1}^N$, we can find the 6-DoF part pose $q_i \in SE(3)$ for each part with pose estimation. Since the parts in our task are not necessarily exact, we use oriented bounding boxes to estimate part poses, which is simple and robust. For each non-empty $\mathcal{T}_i$, we use principal component analysis to find the oriented bounding boxes of $P_i$ and $\mathcal{T}_i$ and solve for $q_i$. In practice, we improve the bounding-box estimates by filtering the outliers (points that are at least one standard deviation away from the center) in $\mathcal{T}_i$, as sketched below.
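A small NumPy sketch of this pose-estimation step follows. The exact outlier rule and the resolution of the sign and ordering ambiguities of PCA axes are not fully specified in the text, so both are assumptions here; in practice the candidate axis flips would be scored (e.g., by chamfer distance) and the best one kept.

```python
import numpy as np

def obb_frame(points):
    """Centroid and principal axes of a point cloud (an oriented box frame)."""
    c = points.mean(axis=0)
    # One reading of the outlier filter in Sec. 3.3: drop points farther from
    # the center than one standard deviation of the center distances.
    d = np.linalg.norm(points - c, axis=1)
    pts = points[d <= d.mean() + d.std()]
    c = pts.mean(axis=0)
    cov = np.cov((pts - c).T)
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    R = eigvec[:, ::-1].copy()                 # sort axes by variance
    if np.linalg.det(R) < 0:                   # keep a right-handed frame
        R[:, -1] *= -1
    return c, R

def estimate_part_pose(part_pts, segment_pts):
    """6-DoF pose q_i aligning part P_i to its predicted segment T_i.
    Axis flip ambiguities are ignored in this sketch."""
    c_p, R_p = obb_frame(part_pts)
    c_t, R_t = obb_frame(segment_pts)
    R = R_t @ R_p.T
    t = c_t - R @ c_p
    return R, t                                # q_i(x) = R x + t
```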
                Unseen Instance
              Canonical Pose                      Random Pose
           Precise Part     Imprecise Part    Precise Part     Imprecise Part
Method     CD   PA   SR     CD   PA   SR      CD   PA   SR     CD   PA   SR
Opt        7.9 18.3  2.4   10.0 16.0  0.9     6.3 22.9  3.7    7.7 21.0  2.8
Go-ICP    72.9  4.2  0.1   67.9  3.9  0.0    72.3  4.4  0.1   66.2  3.9  0.0
GeoTF     54.3 14.5  4.1   63.4  9.5  1.6    53.7 14.9  4.2   62.6 10.1  1.9
NSM       89.8  1.3  0.0   86.3  0.9  0.0    87.1  1.4  0.0   83.1  0.9  0.0
DGL       21.5 45.4 10.9   18.0 51.6 14.4    86.7  1.1  0.0   75.5  1.6  0.2
DGL-aug   53.4  6.7  0.6   44.3  8.1  0.5    52.3  7.0  0.6   44.0  8.7  0.2
Reg       33.6  3.1  0.3   33.5  3.2  0.5    25.6  5.0  0.2   25.7  5.8  0.5
TF        11.4 47.9 16.8   11.5 45.4 14.4     9.1 57.8 21.5    9.7 54.7 18.8
Ours       7.6 61.6 23.2    7.2 64.8 26.0     7.8 60.8 21.7    7.8 64.3 26.0

                Unseen Category
              Canonical Pose                      Random Pose
           Precise Part     Imprecise Part    Precise Part     Imprecise Part
Method     CD   PA   SR     CD   PA   SR      CD   PA   SR     CD   PA   SR
Opt        5.1 23.1  5.7    5.4 21.3  4.7     4.1 31.3  6.7    5.0 28.4  4.2
Go-ICP    49.4  2.0  0.0   42.6  2.6  0.0    45.7  2.2  0.0   39.3  2.6  0.0
GeoTF     57.3  2.8  0.1   53.8  2.3  0.0    57.4  2.9  0.1   53.9  2.9  0.2
NSM       58.0  0.7  0.0   52.0  1.1  0.0    56.4  1.1  0.0   49.7  1.4  0.0
DGL       27.2 13.7  1.1   22.1 18.2  0.7    48.9  1.6  0.0   45.0  1.9  0.0
DGL-aug   31.3  7.3  0.1   26.4  9.3  0.3    28.4  8.5  0.2   23.9 11.1  0.3
Reg       34.5  1.9  0.0   31.4  3.3  0.2    19.0  5.4  0.1   18.7  5.0  0.2
TF        13.5 31.8  5.1   12.3 33.3  4.9    14.1 28.0  4.2   12.2 29.9  3.7
Ours       7.1 53.4 20.1    6.6 56.3 21.7    7.6 52.2 18.8    6.9 55.6 19.8

Table 1: Quantitative Results and Comparisons. We adopt three metrics: chamfer distance (CD), measured in ‰; part accuracy (PA), measured in %; and success rate (SR), measured in %.

4 Evaluation
Tasks. For both training and quantitative evaluation, we use PartNet [6], a large-scale dataset of 3D objects with fine-grained and instance-level 3D part information. We use chairs, lamps, and faucets for training, and hold out tables and displays as novel categories. We work with the most fine-grained level of PartNet segmentation and adopt the default train/test split of PartNet, which contains 2463 instances of chairs, 1553 instances of lamps, and 510 instances of faucets. We categorize generalization scenarios across three dimensions.
• Novel target instances or categories: We evaluate on the unseen instances of chairs, lamps, and faucets, and on two novel categories: tables and displays.
• Random target poses: We evaluate on targets at either the canonical orientation (as defined in the dataset) or a random orientation uniformly sampled from SO(3).
• Non-exact parts: Besides exact parts from the dataset, we programmatically generate rectangular and spherical blocks as non-exact parts. Sample instances can be found in Fig. 3.

Metrics. For all the tasks, we measure the quality of the predicted assembly with three metrics: chamfer distance, part accuracy, and assembly success rate; a short code sketch of these metrics follows the list.
• Chamfer distance (CD): Given two point clouds $A, B$, the chamfer distance between $A$ and $B$ is
$$\mathrm{CD}(A, B) = \sum_{x \in A} \min_{y \in B} \|x - y\|_2^2 + \sum_{y \in B} \min_{x \in A} \|x - y\|_2^2$$
We use $\mathrm{CD}(\mathcal{T}, \mathcal{P})$ as a metric, abbreviated as CD, where $\mathcal{T}$ is the target point cloud and $\mathcal{P} = \bigcup_{i=1}^{N} q_i(P_i)$, with $q_i$ the predicted pose of the $i$-th part. (Note that in the previous work [4], the "shape chamfer distance" is defined differently, with $\mathcal{T} = \bigcup_{i=1}^{N} q_i^{GT}(P_i)$. In our tasks, the target is not a union of the given parts, so the values under our metric are usually larger.)
• Part accuracy (PA): Adopted from the previous work [4], part accuracy is defined as
$$\frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left( \mathrm{CD}\big(q_i^{GT}(P_i),\, q_i(P_i)\big) < \tau_p \right)$$
where $q_i^{GT}$ is the ground-truth pose of the $i$-th part and $\tau_p = 0.01$. This metric indicates the percentage of the predicted parts that match the GT parts up to a certain threshold measured in chamfer distance. Due to possible geometric equivalence between parts (e.g., the legs of a chair), we enumerate all possible labelings of geometrically equivalent parts to obtain the different GT poses and take the highest accuracy value.
• Assembly success rate (SR): A predicted assembly is considered successful if its part accuracy (PA) is equal to 1. We report the percentage of successful predictions out of all data samples as the assembly success rate (SR).
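To make the evaluation protocol concrete, here is a minimal NumPy sketch of CD and PA (SR is simply PA == 1). The normalization of the per-part chamfer distance by point count is an assumption of this sketch, since the excerpt leaves the averaging convention implicit, and the maximization over relabelings of geometrically equivalent parts is omitted.

```python
import numpy as np

def chamfer(A, B):
    """CD(A, B) as defined above: summed squared nearest-neighbor distances
    in both directions (brute force; fine for a few thousand points)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # (|A|, |B|) pairwise
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

def part_accuracy(parts, gt_poses, pred_poses, tau=0.01):
    """PA: fraction of parts whose predicted placement stays within tau of
    the GT placement in chamfer distance. Poses are (R, t) pairs."""
    hits = 0
    for P, (Rg, tg), (Rp, tp) in zip(parts, gt_poses, pred_poses):
        cd = chamfer(P @ Rg.T + tg, P @ Rp.T + tp) / (2 * len(P))
        hits += cd < tau
    return hits / len(parts)

def success(parts, gt_poses, pred_poses, tau=0.01):
    """SR: an assembly is successful iff every part is accurate (PA == 1)."""
    return part_accuracy(parts, gt_poses, pred_poses, tau) == 1.0
```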
Algorithm comparisons. Since general part assembly is a novel task, there are no previous methods that specifically solve it. We adapt methods for point cloud registration and specialized part assembly, and compare with variants of our method for ablation studies.
• Opt: Covariance matrix adaptation evolution strategy (CMA-ES) [29] is used to optimize the pose of each part by minimizing the chamfer distance CD as defined above.
• Go-ICP: We greedily match each part point cloud to the target point cloud using Go-ICP [30].
• GeoTF: Geometric Transformer (GeoTF) [31] is one of the state-of-the-art methods for point cloud registration. We modify the algorithm to simultaneously optimize all part poses.
• NSM: Neural Shape Mating (NSM) [12] uses a transformer-based model to solve pairwise 3D geometric shape mating, such as reconstructing an object from two broken pieces. We modify the algorithm to match each part to the target and simultaneously optimize all part poses.
• DGL: Dynamic Graph Learning (DGL) [4] tackles category-level part assembly by leveraging an iterative graph neural network backbone to regress part poses. Designed for category-level generalization, DGL does not take the target shape as an input; to adapt it to our task, we include the target encoding as a node in the graph neural network. DGL is trained only with targets at canonical poses, following the previous work. DGL-aug uses the same training dataset as our model, with augmentation of targets at random poses.
• Reg: Instead of predicting a segmentation of the target for the subsequent pose estimation, we replace the final dot-product segmentation layer of our model with MLPs that directly regress a 6-DoF pose for each part. We train the modified model with the supervision of GT poses.
• TF: As an ablation, we replace each GPAT layer with a vanilla transformer layer [25].

5 Experimental Results
Tab. 1 and Fig. 3 summarize the main quantitative and qualitative results, and the following sections provide detailed discussions. Please refer to the supplementary materials for more results.
Part assembly by target segmentation is more generalizable. The optimization baseline (Opt) achieves the lowest chamfer distance (CD) in some scenarios, but its part accuracy (PA) and success rate (SR) are significantly lower. Directly optimizing the part poses often results in predictions at local minima, where the predicted assembly matches the contour of the target but makes no semantic sense (please refer to the supplementary materials for an example).

As a classical optimization-based algorithm, Go-ICP tries to match each individual part to the entire target, which unsurprisingly fails. As a learning-based method, GeoTF achieves better performance on seen categories, but nonetheless fails on unseen categories. Since the parts are non-exact, it is challenging for point cloud registration methods to find suitable correspondences, either in spatial coordinates or in a learned feature space.

GPAT also outperforms the regression-based models (DGL, DGL-aug, NSM, Reg) across all the tasks, especially in scenarios that require more generalization (Tab. 1 and Fig. 3). In the most challenging scenario, the regression-based models achieve less than 1% success rate, while GPAT attains a 19.8% success rate, with comparable rates attained across all the test scenarios. To directly regress poses, a model needs to learn rotationally equivariant features for the target shape. However, given non-exact parts, unseen categories, and targets at random poses, we show that the regression-based models fail to capture the distribution of poses: they either overfit certain canonical poses and assembly structures or fail to learn. In contrast, GPAT, a segmentation-based model, is trained to learn rotationally invariant representations of the shapes, and thus experiences only minor performance drops in the generalization scenarios. Further, training with diverse shapes makes the representations generalizable and applicable to new categories.

GPAT is robust against targets with ambiguities. Compared to the alternative segmentation-based model using a vanilla transformer, GPAT achieves better results in all test scenarios, especially for unseen target categories (Tab. 1 and Fig. 3). We compare the segmentation accuracy in Tab. 2 and visualize a typical failure case of TF in Fig. 5. When facing inputs with multi-modal ground truths (e.g., a chair with identical legs), TF is unable to produce consistent segmentations, which hinders successful assembly prediction. With the GPAT layers, we fully leverage the point cloud structure and process the point cloud in a fine-to-coarse manner, thereby achieving local consistency of the segmentation predictions.

Figure 3: Assembly Results and Comparisons (target, GT, and the predictions of Opt, Go-ICP, GeoTF, NSM, DGL, DGL-aug, Reg, TF, and Ours, for exact and non-exact parts at canonical and random poses, on unseen instances and unseen categories). For targets at random poses, targets and predictions are transformed to canonical poses for better understanding; please see Fig. 4 for more results with randomly oriented targets. The optimization-based approaches (Opt and Go-ICP) tend to get stuck at local minima. The learning-based alternatives (GeoTF, NSM, DGL, Reg) overfit the training scenarios and fail to learn the rotationally equivariant features for target shapes that are necessary for accurate pose inference. The alternative segmentation-based model that uses the vanilla transformer (TF) fails to produce consistent segmentations for targets with geometrically equivalent parts (see Fig. 5 for a detailed example). With the multi-scale attention layer, GPAT fully leverages the spatial structure of the target point clouds to produce consistent segmentations and accurate assemblies.

Figure 4: Assembly Results for Targets at Random Orientations. We show more results for the same targets as in Fig. 3, but at random orientations. GPAT is robust against the orientation of the target shape. More results can be found in the supplementary materials.

GPAT generalizes well to real-world data. We use the real-world scans from the Redwood dataset [32] as targets, and part point clouds from PartNet. As seen in Fig. 6, our method produces diverse assemblies that resemble the target. This result also illustrates how GPAT can be used to assemble different sets of parts given a single target shape from the category.

Figure 5: GPAT is robust against ambiguities (target, parts, and the assembly and segmentation predictions of Ours, TF, and GT).

Table 2: Segmentation Accuracy (%)
                Unseen Instance                 Unseen Category
          Canonical Pose  Random Pose    Canonical Pose  Random Pose
Method    Prc    Imp      Prc    Imp     Prc    Imp      Prc    Imp
TF        70.0   62.2     76.5   69.2    63.7   62.4     62.9   60.9
Ours      76.7   70.9     76.6   71.1    69.5   69.3     69.6   69.5

Figure 6: Results on Real-world Data (real-world scans and assembly predictions).

Failure mode analysis. GPAT is not without limitations, and Fig. 7 shows some typical failure cases. First, GPAT tends to give incorrect segmentation predictions if some parts are hidden inside a larger part (e.g., the light bulbs of a lamp) or the parts are hard to separate (e.g., overlapping parts of a microwave). To address these issues, it is possible to introduce additional information, such as colors and normals of the point clouds, as inputs. Additionally, the oriented bounding box can be insufficient as a pose estimator for some parts. To tackle this problem, a learning-based pose estimation module could replace the bounding-box procedure.

6 Conclusion
Figure 7: Typical Failure Cases (target, GT segmentation, predicted segmentation, predicted assembly).
In this work, we formulate the task of general part assembly, which focuses on building novel target assemblies with diverse and unseen parts. To plan for a general part assembly task, we propose the General Part Assembly Transformer (GPAT), which factorizes the task into target segmentation and pose estimation. Our experiments show that GPAT performs well under all the generalization scenarios. By integrating with an assembly sequence and path planning algorithm, we believe that GPAT has great potential for building vision-based general robotic assembly systems.

7 Acknowledgement
We would like to thank Huy Ha, Zhenjia Xu, Cheng Chi, and Zeyi Liu for their helpful feedback and fruitful discussions. This work was supported in part by NSF Awards #2037101, #2143601, and #2132519. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.
References
[1] G. Thomas, M. Chien, A. Tamar, J. A. Ojea, and P. Abbeel. Learning robotic assembly from CAD. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3524–3531. IEEE, 2018.
[2] Y. Narang, K. Storey, I. Akinola, M. Macklin, P. Reist, L. Wawrzyniak, Y. Guo, A. Moravanszky, G. State, M. Lu, et al. Factory: Fast contact for robotic assembly. arXiv preprint arXiv:2205.03532, 2022.
[3] Y. Li, K. Mo, L. Shao, M. Sung, and L. Guibas. Learning 3D part assembly from a single image. In European Conference on Computer Vision, pages 664–682. Springer, 2020.
[4] G. Zhan, Q. Fan, K. Mo, L. Shao, B. Chen, L. J. Guibas, H. Dong, et al. Generative 3D part assembly via dynamic graph learning. Advances in Neural Information Processing Systems, 33:6315–6326, 2020.
[5] D. Batra, A. X. Chang, S. Chernova, A. J. Davison, J. Deng, V. Koltun, S. Levine, J. Malik, I. Mordatch, R. Mottaghi, et al. Rearrangement: A challenge for embodied AI. arXiv preprint arXiv:2011.01975, 2020.
[6] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 909–918, 2019.
[7] S. K. S. Ghasemipour, S. Kataoka, B. David, D. Freeman, S. S. Gu, and I. Mordatch. Blocks assemble! Learning to assemble with large-scale structured reinforcement learning. In International Conference on Machine Learning, pages 7435–7469. PMLR, 2022.
[8] O. Aslan, B. Bolat, B. Bal, T. Tumer, E. Sahin, and S. Kalkan. AssembleRL: Learning to assemble furniture from their point clouds. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2748–2753. IEEE, 2022.
[9] M. Yu, L. Shao, Z. Chen, T. Wu, Q. Fan, K. Mo, and H. Dong. RoboAssembly: Learning generalizable furniture assembly policy in a novel multi-robot contact-rich simulation environment. arXiv preprint arXiv:2112.10143, 2021.
[10] K. Zakka, A. Zeng, J. Lee, and S. Song. Form2Fit: Learning shape priors for generalizable assembly from disassembly. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9404–9410. IEEE, 2020.
[11] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, et al. Transporter networks: Rearranging the visual world for robotic manipulation. In Conference on Robot Learning, pages 726–747. PMLR, 2021.
[12] Y.-C. Chen, H. Li, D. Turpin, A. Jacobson, and A. Garg. Neural shape mating: Self-supervised object assembly with adversarial shape priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12724–12733, 2022.
[13] A. Narayan, R. Nagar, and S. Raman. RGL-Net: A recurrent graph learning framework for progressive part assembly. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 78–87, 2022.
[14] N. Funk, G. Chalvatzaki, B. Belousov, and J. Peters. Learn2Assemble with structured representations and search for robotic architectural construction. In Conference on Robot Learning, pages 1401–1411. PMLR, 2022.
[15] L. H. De Mello and A. C. Sanderson. A correct and complete algorithm for the generation of mechanical assembly sequences. In 1989 IEEE International Conference on Robotics and Automation, pages 56–57. IEEE Computer Society, 1989.
[16] C. Sinanoğlu and H. R. Börklü. An assembly sequence-planning system for mechanical parts using neural network. Assembly Automation, 2005.
[17] Y. Huang, C. R. Garrett, I. Ting, S. Parascho, and C. T. Mueller. Robotic additive construction of bar structures: Unified sequence and motion planning. Construction Robotics, 5(2):115–130, 2021.
[18] Y. Tian, J. Xu, Y. Li, J. Luo, S. Sueda, H. Li, K. D. Willis, and W. Matusik. Assemble them all: Physics-based planning for generalizable assembly by disassembly. ACM Transactions on Graphics (TOG), 41(6):1–11, 2022.
[19] R. H. Wilson and J.-C. Latombe. Geometric reasoning about mechanical assembly. Artificial Intelligence, 71(2):371–396, 1994.
[20] D. Halperin, J.-C. Latombe, and R. H. Wilson. A general framework for assembly planning: The motion space approach. In Proceedings of the Fourteenth Annual Symposium on Computational Geometry, pages 9–18, 1998.
[21] S. Sundaram, I. Remmler, and N. M. Amato. Disassembly sequencing using a motion planning approach. In Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation, volume 2, pages 1475–1480. IEEE, 2001.
[22] X. Zhang, R. Belfer, P. G. Kry, and E. Vouga. C-space tunnel discovery for puzzle path planning. ACM Transactions on Graphics (TOG), 39(4):104–1, 2020.
[23] X. Huang, G. Mei, J. Zhang, and R. Abbas. A comprehensive survey on point cloud registration. arXiv preprint arXiv:2103.02690, 2021.
[24] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652–660, 2017.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
[27] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30, 2017.
[28] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16259–16268, 2021.
[29] N. Hansen, Y. Akimoto, and P. Baudis. CMA-ES/pycma on GitHub. Zenodo, DOI:10.5281/zenodo.2559634, Feb. 2019. URL https://doi.org/10.5281/zenodo.2559634.
[30] J. Yang, H. Li, D. Campbell, and Y. Jia. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11):2241–2254, 2015.
[31] Z. Qin, H. Yu, C. Wang, Y. Guo, Y. Peng, and K. Xu. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11143–11152, 2022.
[32] S. Choi, Q.-Y. Zhou, S. Miller, and V. Koltun. A large dataset of object scans. arXiv preprint arXiv:1602.02481, 2016.
[33] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912–1920, 2015.
[34] B. Calli, A. Walsman, A. Singh, S. Srinivasa, P. Abbeel, and A. M. Dollar. Benchmarking in manipulation research: The YCB object and model set and benchmarking protocols. arXiv preprint arXiv:1502.03143, 2015.
[35] Y. Wang and J. M. Solomon. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3523–3532, 2019.
[36] T. Luo, K. Mo, Z. Huang, J. Xu, S. Hu, L. Wang, and H. Su. Learning to group: A bottom-up framework for 3D part discovery in unseen categories. arXiv preprint arXiv:2002.06478, 2020.
[37] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph CNN for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):1–12, 2019.
[38] F. Yu, K. Liu, Y. Zhang, C. Zhu, and K. Xu. PartNet: A recursive part decomposition network for fine-grained and hierarchical shape segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9491–9500, 2019.
[39] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

8 Appendix
8.1 Additional Results and Analysis
Additional Visualization. Fig. 13 and Fig. 14 show additional results on simulated and real-world data, respectively.

Quantitative Results for Categories. Table 3 shows a detailed quantitative evaluation on unseen instances of the seen categories (Chair, Lamp, Faucet) and on the unseen categories (Table, Display).

Table 3: Quantitative results of our algorithm on different categories.
              Canonical Pose                      Random Pose
           Precise Part     Imprecise Part    Precise Part     Imprecise Part
Category   CD   PA   SR     CD   PA   SR      CD   PA   SR     CD   PA   SR
Chair      7.7 57.7 19.3    7.3 64.4 25.1     7.6 58.4 19.3    8.3 63.0 24.0
Lamp       7.6 66.9 29.1    5.9 72.1 40.9     8.1 64.2 26.2    6.0 74.4 45.5
Faucet     7.3 65.6 24.4    6.5 63.1 20.7     7.4 63.6 20.6    6.5 62.3 22.4
Table      7.5 52.0 20.6    7.0 55.1 21.5     8.1 50.8 17.8    6.9 55.7 22.2
Display    4.2 59.2 23.2    4.7 59.3 20.2     4.4 60.8 23.8    4.9 58.5 16.9

GPAT builds creative assemblies. To fully test the generalization abilities of GPAT, we provide unseen part shapes, such as a banana, hammers, and forks, as parts to create novel targets such as a plane. Our model predicts creative assemblies given target shapes from unseen categories and non-exact parts, as seen in Fig. 8. The shapes are taken from PartNet [6], ModelNet40 [33], and the YCB dataset [34].

Figure 8: Creative Assemblies (target, parts, segmentation, assembly). Our model predicts creative assemblies given target shapes from unseen categories and non-exact parts. 1st row: a chair assembled with lamps as chair legs. 2nd row: a table assembled with a plate and spoons. 3rd row: a plane assembled with a banana and hammers.

Oriented bounding boxes offer a sufficient pose estimator. Once we obtain the target segments with GPAT, we compare alternative methods for obtaining the final poses and present the quantitative results in Tab. 4. The alternative methods include Go-ICP [30] and DCP [35] (we take the released model pretrained on ModelNet40 [33]). Additionally, we adapt the previous work [3] by Li et al. to our task. Li et al. consider targets represented as images, and train one model to produce 2D part segments and another GNN-based model to regress part poses. We produce 3D segments using our pretrained GPAT and train the GNN-based backbone proposed in DGL [4], an improved model compared to that used in [3]. We find the heuristics based on oriented bounding boxes to produce comparable or better results than the more sophisticated alternative methods. Furthermore, we provide further analysis below showing that the main bottleneck of the problem is segmentation, as opposed to pose estimation.
Table 4: Evaluation of the Pose Estimation Module. We find the efficient heuristics based on oriented bounding boxes to produce comparable or better results than more sophisticated alternative methods.

                Unseen Instance
                  Canonical Pose                      Random Pose
               Precise Part     Imprecise Part    Precise Part     Imprecise Part
Method         CD   PA   SR     CD   PA   SR      CD   PA   SR     CD   PA   SR
GPAT-GoICP     6.7 63.9 23.7    7.8 63.5 20.0     6.8 64.9 24.6    8.0 62.5 19.7
GPAT-DCP      18.2 37.3  6.2   18.6 32.1  2.1    18.7 36.0  5.5   18.4 32.0  2.8
GPAT-DGL      12.1 52.1 13.0   10.8 58.4 17.2    12.2 51.1 11.5   10.5 55.8 16.2
GPAT-BB (Ours) 7.6 61.6 23.2    7.2 64.8 26.0     7.8 60.8 21.7    7.8 64.3 26.0

                Unseen Category
                  Canonical Pose                      Random Pose
               Precise Part     Imprecise Part    Precise Part     Imprecise Part
Method         CD   PA   SR     CD   PA   SR      CD   PA   SR     CD   PA   SR
GPAT-GoICP     5.9 53.9 19.7    6.5 55.7 18.6     5.3 55.8 20.4    5.9 58.6 22.1
GPAT-DCP      15.4 33.1  3.4   16.7 30.1  1.0    15.3 33.4  4.4   16.9 30.5  2.3
GPAT-DGL      13.1 38.4  6.9   10.5 44.8  9.6    11.5 42.3 10.3   10.4 48.5 13.6
GPAT-BB (Ours) 7.1 53.4 20.1    6.6 56.3 21.7     7.6 52.2 18.8    6.9 55.6 19.8

Segmentation Accuracy is the Main Bottleneck. In Fig. 9, we plot the mean success rate / part accuracy conditioned on the minimum segmentation accuracy. We find that as the segmentation accuracy approaches perfect, the average success rate and part accuracy approach 90%, while the current overall numbers are around 20% and 60%, respectively. This shows that segmentation accuracy is still the main bottleneck of our method.

Figure 9: Segmentation Accuracy is the Main Bottleneck. Each point on the plot reads as "for all the data samples with a minimum segmentation accuracy of x, the average success rate / part accuracy is y."

Optimization is prone to local minima. The optimization baseline (Opt) achieves the lowest chamfer distance (CD) in some scenarios, but its part accuracy (PA) and success rate (SR) are significantly lower. As seen in Fig. 10, directly optimizing the part poses to match the target often results in predictions at local minima, where the predicted assembly matches the contour of the target but makes no semantic sense.

Figure 10: Optimization is prone to local minima (target, GT, Opt, Ours, Ours+opt).

GPAT is applicable to part discovery. GPAT is directly applicable to the task of part discovery, i.e., predicting a part segmentation given a target [36], if we do not provide input parts. We show some qualitative results in Fig. 11 to test GPAT's part discovery abilities. Given non-exact parts, GPAT predicts accurate segmentations as usual. If we input identical blocks, which specify the number of parts but provide little information about the part shapes, then GPAT predicts reasonable segmentations with the specified number of segments. Finally, we omit the input parts entirely, and GPAT successfully discovers parts in the target shape.

Figure 11: Application to Part Discovery. Given non-exact matching parts (Non-Exact), identical blocks (Identical), or no part point cloud input, GPAT predicts reasonable part segmentations of the target.

GPAT is aware of part scales. Part assembly often involves parts that have the same geometry but different scales, so it is necessary for a model to discriminate parts of different scales to create correct assemblies. As a qualitative illustration, in Fig. 12 we adjust the scale of parts that have the same geometry (the legs of a chair/table), and the model correctly associates parts of different scales to the target to build the desired assemblies.

Figure 12: Sensitivity to Scale (parts, target, segmentation prediction, assembly prediction). The legs of the chair/table are manually scaled, and the model correctly associates parts of the same shape but different sizes.

8.2 Data and Training Details
We use Furthest Point Sampling (FPS) to sample 1,000 points for each part point cloud and 5,000 points for each target point cloud. Following the previous work [37], we also zero-center all the point clouds and align the principal axes of the part point clouds with the world axes using Principal Component Analysis (PCA). Additionally, we use axis-aligned bounding boxes to obtain the 3-dimensional sizes of the parts; two parts are considered geometrically equivalent if they have the same part type as labeled by the PartNet dataset [38] and the same sizes up to a small threshold. In training, we down-sample the target point features by a factor of 10, so for each sample we obtain 500 target point features. We use a feature dimension of 256 and 8 GPAT layers, with k values of 16, 16, 32, 32, 64, 64, 500, 500. We use Adam [39] with a learning rate of 0.00004 and a batch size of 36, and train for 2000 epochs in total.

Figure 13: Qualitative Results (exact and non-exact parts at canonical and random poses, for Chair, Lamp, Faucet, Table, and Display). Chairs, lamps, and faucets are seen during training; tables and displays are unseen categories. The first row of each category displays targets in black, and the second row shows our predictions.

Figure 14: Results on Real-world Data (real-world scans and predictions). Non-exact parts from the same category as the target point clouds, which are real-world scans taken from the Redwood dataset [32].
nyY6UgXYyfF

ADV3D: Generating Safety-Critical 3D Objects through Closed-Loop Simulation
Jay Sarva 1,3†, Jingkang Wang 1,2, James Tu 1,2, Yuwen Xiong 1,2, Sivabalan Manivasagam 1,2, Raquel Urtasun 1,2
1 Waabi, 2 University of Toronto, 3 Brown University
jaysarva@brown.edu, {jwang,jtu,yxiong,smanivasagam,urtasun}@waabi.ai
† Work done while a research intern at Waabi.
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: Self-driving vehicles (SDVs) must be rigorously tested on a wide range of scenarios to ensure safe deployment. The industry typically relies on closed-loop simulation to evaluate how the SDV interacts with a corpus of synthetic and real scenarios and to verify that it performs properly. However, these tests primarily exercise only the system's motion planning module and consider only behavior variations. It is key to evaluate the full autonomy system in closed loop, and to understand how variations in sensor data based on scene appearance, such as the shapes of actors, affect system performance. In this paper, we propose a framework, ADV3D, that takes real-world scenarios, performs closed-loop sensor simulation to evaluate autonomy performance, and finds vehicle shapes that make a scenario more challenging, resulting in autonomy failures and uncomfortable SDV maneuvers. Unlike prior works that add contrived adversarial shapes to vehicle roof-tops or the roadside to harm perception only, we optimize a low-dimensional shape representation to modify the vehicle shape itself in a realistic manner that degrades autonomy performance (e.g., perception, prediction, and motion planning). Moreover, we find that the shape variations found with ADV3D optimized in closed loop are much more effective than those optimized in open loop, demonstrating the importance of finding scene appearance variations that affect autonomy in the interactive setting. Please refer to our project page https://waabi.ai/adv3d/ for more results.

Keywords: Closed-loop simulation, Adversarial robustness, Self-driving

Figure 1: ADV3D is a framework that evaluates autonomy systems in closed loop on complex real-world scenarios and alters the scene appearance via actor shape modification (left, modifying actor shapes in digital twins) to create more challenging scenarios that cause autonomy failures such as inaccurate detections, incorrect predictions, and strong decelerations or swerving, resulting in uncomfortable and dangerous maneuvers (right, closed-loop simulation with the full autonomy, shown at t = 0 s, 1.0 s, and 5.0 s).

1 Introduction
Most modern autonomy systems in self-driving vehicles (SDVs) perceive the agents in the scene, forecast their future motion, and then plan a safe maneuver [1, 2, 3, 4]. These tasks are typically referred to as perception, prediction, and motion planning, respectively. To deploy SDVs safely, we must rigorously test the autonomy system on a wide range of scenarios that cover the space of situations we might see in the real world, and ensure that the system can respond appropriately. Two strategies are employed to increase scenario coverage: either synthetic scenarios are created, or situations are retrieved from a large collection of logs captured during real-world driving.

The self-driving industry then relies on closed-loop simulation of these scenarios to test the SDV in a reactive manner, so that the effects of its decisions are evaluated over a longer horizon.
This is important, as small errors in planning can cause the SDV to brake hard or swerve. However, coverage testing is typically restricted to the behavioral aspects of the system, and only motion planning is evaluated. This falls short, as it does not consider scene appearance and ignores how perception and prediction mistakes might result in safety-critical errors with catastrophic consequences: false positive detections or predictions might result in a hard brake, while false negatives could cause a collision.

In this paper, we are interested in building a framework for testing the full autonomy system. We focus on a LiDAR-based stack, as LiDAR is the most commonly employed sensor in self-driving. Naively sampling all the possible variations is computationally intractable, as we would need to cover not only all the possible behavioral scenarios but also the scene appearance variations, such as actor shape, the primary factor determining LiDAR point clouds. Towards this goal, we propose a novel adversarial attack framework, ADV3D, that searches over the worst possible actor shapes for each scenario and attacks the full autonomy system, including perception, prediction, and motion planning (Fig. 1). This contrasts with existing approaches that focus on building a universal perturbation of a template mesh that is added into all scenes as a roof-top or road-side object and attacks only the perception system [5, 6, 7, 8].

We leverage a high-fidelity LiDAR simulation system that builds modifiable digital twins of real-world driving snippets, enabling closed-loop sensor simulation of complex and realistic traffic scenarios at scale. Thus, when modifying the scene with a new actor shape, we can see how the autonomy system would respond to the new sensor data as if it were in the real world (e.g., braking hard, steering abruptly) as the scenario evolves. To ensure the generated actor shapes are realistic, during optimization we constrain the shape to lie within a low-dimensional latent space learned over a set of actual object shapes. This contrasts with existing approaches that search over the vertices of the mesh directly [5, 6, 7, 8], resulting in unrealistic shapes that would need to be 3D-printed to exist.

In our experiments, we find that ADV3D can generate challenging actor shapes on over 100 real-world highway driving scenarios and on multiple modern autonomy systems and attack configurations, resulting in perception and prediction failures and uncomfortable driving maneuvers. Importantly, we show that by attacking in closed loop we can discover worse situations (i.e., more powerful attacks) than by attacking in open loop. We believe this finding is very significant and will spark a change in the community towards finding scenario variations that are challenging to every aspect of autonomy.

2 Related Work
Self-Driving Systems: One of the earliest approaches to self-driving autonomy was an end-to-end policy learning network [9] that directly outputs control actuations from sensor data. This autonomy approach has significantly evolved thanks to advancements in network architectures, sensor inputs, and learning methods [9, 10, 1, 11, 12], but it lacks interpretability. Most modern self-driving autonomy systems in industry typically break down the problem into three sequential tasks: instance-based object detection [13, 14, 15, 16], motion forecasting [17, 18, 19, 20], and planning [21, 22, 23, 24]. Some works conduct joint perception and prediction (P&P) first [25, 26]. Other works leverage shared feature representations to learn perception, prediction, and planning simultaneously [27, 28]. Lately, interpretable neural motion planners have emerged, enabling end-to-end learning while ensuring modularity and interpretability [27, 2, 4]. Another recent autonomy paradigm is instance-free autonomy, which estimates spatio-temporal occupancies [2, 29, 30, 4]. To demonstrate generalizability, this work evaluates both an instance-based [26] and an instance-free [31] autonomy system.
Adversarial Robustness for Self-Driving: Research on adversarial robustness has attracted substantial attention [32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43], particularly in safety-critical domains such as self-driving. Recent works primarily consider individual subsystems of self-driving. Specifically, researchers have focused on generating undetectable or adversarial objects [5, 7, 8, 44], spoofing LiDAR points [45, 46], and creating adversarial vehicle textures within simulation environments [47, 48]. Most recently, some works have focused on prediction robustness, exploring data-driven trajectory prediction models and proposing a series of adversarial attack and defense mechanisms [49, 50, 51]. Other works evaluate planners given ground-truth perception information [52, 53, 54, 55] or consider simplified image-based imitation learning systems [56, 57]. Finally, some works consider the entire end-to-end autonomy stack [58, 59] by modifying the actor trajectories to induce poor downstream planning performance. However, all these attacks are performed in the open-loop setting, which is not suitable for real-world autonomy testing.

Figure 2: Overview of ADV3D for safety-critical 3D object generation in closed loop (latent representation and PCA decoding of the generated 3D object; LiDAR simulation with the asset library, adversarial actor, and SDV location; the autonomy system; and black-box optimization of the adversarial objective via GP-UCB). Given a real-world scenario, ADV3D modifies the shapes of selected actors, then runs LiDAR simulation and the full autonomy stack in closed loop to optimize the shape representation with black-box optimization.

Closed-Loop Adversarial Attacks: The gap between open-loop and closed-loop testing of deep neural networks has been studied in [60], showing that the results of open-loop testing do not necessarily transfer to closed-loop environments. Existing closed-loop adversarial attacks [61, 56, 62, 63] are performed in game-engine environments such as CARLA [64]. Specifically, [61] performs adversarial attacks on an image-based autonomy by painting black lines on the road, [56, 62] attack the entire autonomy stack by modifying the trajectories of traffic participants, and [63] takes simple parameterized scenarios and creates adversarial scenarios by optimizing the parameters. However, these works only consider a few hand-crafted synthetic scenarios with limited realism and diversity, which generalize poorly to the real world [65]. In contrast, ADV3D generates adversarial actor shapes with a realistic closed-loop sensor simulator, on over 100 real-world scenarios, for a LiDAR-based autonomy.
3 Generating Safety-Critical 3D Objects via Closed-Loop Simulation
We aim to extend closed-loop autonomy system testing to cover not only behaviors but also scene appearance, such as actor shape variations. Towards this goal, we conduct black-box adversarial attacks against the full autonomy stack, modifying actor shapes in real-world scenarios to cause system failures. To efficiently find realistic actor shapes that harm performance, we parameterize the object shape as a low-dimensional latent code and constrain the search space to lie within the bounds of actual vehicle shapes. Given a real-world driving scenario, we select nearby actors and replace their shapes with our generated shapes. We then perform closed-loop sensor simulation, observe how the SDV interacts with the modified scenario over time, and measure performance. Our adversarial objective consists of perception, prediction, and planning losses so as to find errors in every part of the autonomy. We then conduct black-box optimization per scenario to enable testing of any autonomy system at scale. Fig. 2 shows an overview of our approach.

In what follows, we define the closed-loop simulation setting and our attack formulation (Sec. 3.1). We then describe how we build realistic scenes from real-world data, parameterize the adversarial actor geometries, and carry out realistic LiDAR simulation for autonomy testing (Sec. 3.2). Finally, we present the adversarial objective and the black-box optimization approach of ADV3D (Sec. 3.3).

3.1 Problem Formulation
Closed-loop Autonomy Evaluation: We first review closed-loop evaluation of an autonomy system. A traffic scenario $S_t$ at snapshot time $t$ is composed of a static background $B$ and $N$ actors: $S_t = \{\{A_t^1, A_t^2, \cdots, A_t^N\}, B\}$. Each actor $A_t$ consists of $\{G, \xi_t\}$, where $G$ and $\xi_t$ represent the actor's geometry and its pose at time $t$. Given $S_t$ and the SDV location $E_t$, the simulator $\psi$ generates sensor data according to the SDV's sensor configuration. The autonomy system $F$ then consumes the sensor data, generates intermediate outputs $\mathcal{O}$, such as detections and the planned trajectory, and executes a driver command $\mathcal{D}$ (e.g., steering, acceleration) via the mapping $F : \psi(S, E) \rightarrow \mathcal{D} \times \mathcal{O}$. The simulator then updates the SDV location given the driver command, $E_t, \mathcal{D} \rightarrow E_{t+1}$, and updates the actor locations to generate the scenario snapshot $S_{t+1}$. This loop continues, and we can observe the autonomy interacting with the scenario over time.

Figure 3: Reconstructed digital twins and simulated LiDAR for real-world highway scenarios (pairs of digital twins and the corresponding simulated LiDAR).

Adversarial Shape Attacks in Closed Loop: Given our closed-loop formulation, we can now select actors in the scene, modify their shapes, and optimize to find autonomy failures. For simplicity, we describe our formulation with a single modified actor, but our approach is general, and we also demonstrate modifying multiple actors in Sec. 4. Let $G_{adv}^i$ denote the safety-critical 3D object geometry for the interactive actor $A^i$. In this paper, our goal is to generate a $G_{adv}^i$ that is challenging to the autonomy system $F$. We define a cost function $C_t$ that takes the current scene and the autonomy outputs and measures the autonomy performance at time $t$. We accumulate this cost over time to measure the overall closed-loop performance, resulting in the following objective:
$$G_{adv}^i = \arg\max_{G^i} \sum_{t=1}^{T} C_t\Big(S_t,\, F\big(\tilde{\psi}(S_t, G^i, E_t)\big)\Big) \tag{1}$$
where the sensor simulator $\tilde{\psi}$ takes the original scenario $S_t$ but replaces the geometry of actor $A^i$ with the optimized shape $G^i$, and simulates new sensor data.
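The interaction between simulator and autonomy can be summarized as a short rollout loop; the sketch below computes the inner sum of Eq. 1 for one candidate shape. All callables are placeholders for the components described above rather than a real API, and the candidate geometry $G^i$ is assumed to already be baked into the scenario that the simulator sees.

```python
from typing import Any, Callable, Tuple

def closed_loop_cost(
    scenario_at: Callable[[int], Any],        # S_t: actors + background at time t
    render_lidar: Callable[[Any, Any], Any],  # sensor simulator psi~(S_t, G_i, E_t)
    autonomy_step: Callable[[Any], Tuple[Any, Any]],  # F: lidar -> (outputs O, command D)
    update_sdv: Callable[[Any, Any], Any],    # (E_t, D) -> E_{t+1}
    cost_fn: Callable[[Any, Any], float],     # C_t(S_t, O), see Eqs. 3-5
    ego0: Any,
    T: int,
) -> float:
    """Accumulated closed-loop cost of one candidate shape (inner sum of Eq. 1)."""
    ego, total = ego0, 0.0
    for t in range(T):
        lidar = render_lidar(scenario_at(t), ego)  # simulate sensor data
        outputs, command = autonomy_step(lidar)    # detections, predictions, plan
        total += cost_fn(scenario_at(t), outputs)  # per-step cost C_t
        ego = update_sdv(ego, command)             # SDV executes its own plan, so
        # the next observation depends on earlier decisions (closed loop),
        # unlike open-loop replay of the recorded ego trajectory.
    return total
```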
3.2 Realistic Sensor Simulation and Adversarial Shapes
We now describe how we build digital twins from real-world scenarios to perform realistic closed-loop sensor simulation (via $\tilde{\psi}$) for autonomy testing. We then explain how we parameterize the actor shape $G^i$ to enable realistic and efficient optimization.

Building Digital Twins for LiDAR Simulation: Data-driven sensor simulation [66, 67, 68, 69, 70, 71] has witnessed great success in recent years. Following [72, 73], we leverage real-world LiDAR data and object annotations to build aggregated surfel meshes (textured by per-point intensity value) for the virtual world. We manually curated a set of actor meshes with complete and clean geometry, together with CAD assets purchased from TurboSquid [74], to create a rich asset library modeling the shapes of the benign actors in the scene. During simulation, we query the actor geometries from the curated set based on the actor class and bounding box. We can then place the actor geometries according to their poses $\xi_t$ in the scenario snapshot at time $t$ based on $S_t$, and perform primary raycasting [72, 73, 75, 76] based on the sensor configuration and SDV location to generate simulated LiDAR point clouds for the autonomy system via $\tilde{\psi}$. Fig. 3 shows examples of reconstructed digital twins and simulated LiDAR point clouds.

Adversarial Shape Representation: With our digital twin representation that reconstructs the 3D world, we can now model the scenario snapshot $S_t$ and select an actor whose shape we modify. To ensure the actor shape affects autonomy performance, we select the closest actor in front of the SDV as $A^i$ and modify its shape $G^i$ to be adversarial. To ensure the actor shape is realistic and not a contrived, non-smooth mesh, we parameterize the geometry $G$ with a low-dimensional representation $z$ under realistic constraints. Specifically, for vehicle actors, the primary actors in driving scenes, we take inspiration from [77] and learn a shape representation over a large set of CAD vehicle models covering a wide range of vehicles (e.g., city cars, sedans, vans, pick-up trucks). For each CAD vehicle shape, we first compute a volumetric truncated signed distance field (SDF) to obtain a dense representation. We then apply principal component analysis [78] (PCA) over the flattened volumetric SDFs $\Phi \in \mathbb{R}^{|L| \times 1}$ of the whole CAD dataset to obtain the latent representation:
$$z = W^\top (\Phi - \mu), \qquad G(z) = \mathrm{MarchingCubes}(W \cdot z + \mu) \tag{2}$$
where $\mu \in \mathbb{R}^{|L| \times 1}$ is the mean volumetric SDF over the meshes and $W$ contains the top $K$ principal components. We then extract the explicit mesh using marching cubes [79]. Note that each dimension of $z$ controls different properties of the actors (e.g., scale, width, height) in a controllable and realistic manner [77, 80]. To ensure that the latent code $z$ stays within the set of learned actor shapes and remains realistic during optimization, we normalize it to lie within the bounds of the minimum and maximum values of each latent dimension over the set of CAD models.
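Eq. 2 amounts to standard PCA on flattened TSDF volumes plus a clamp of the latent code to the bounds observed on the CAD set. A minimal NumPy sketch follows; the marching-cubes mesh extraction (e.g., via skimage.measure.marching_cubes on the reshaped 3D volume) is left out, and K is assumed not to exceed the number of training shapes.

```python
import numpy as np

class PCAShapeSpace:
    """Low-dimensional vehicle-shape space (Eq. 2), fit on flattened
    volumetric truncated SDFs of a CAD library."""
    def __init__(self, sdfs: np.ndarray, K: int):
        # sdfs: (num_shapes, |L|) flattened TSDF volumes, one row per shape.
        self.mu = sdfs.mean(axis=0)
        # Top-K principal directions of the centered SDFs via SVD.
        _, _, Vt = np.linalg.svd(sdfs - self.mu, full_matrices=False)
        self.W = Vt[:K].T                        # (|L|, K)
        Z = (sdfs - self.mu) @ self.W            # latents of the CAD set
        self.z_min, self.z_max = Z.min(axis=0), Z.max(axis=0)

    def encode(self, sdf: np.ndarray) -> np.ndarray:
        return self.W.T @ (sdf - self.mu)        # z = W^T (Phi - mu)

    def decode(self, z: np.ndarray) -> np.ndarray:
        # Clamp to the bounds seen on the CAD set so optimized shapes stay
        # within the realistic region (Sec. 3.2).
        z = np.clip(z, self.z_min, self.z_max)
        return self.W @ z + self.mu              # SDF volume; run marching
                                                 # cubes on its 3D reshape
                                                 # to obtain the mesh G(z).
```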
3.3 Adversarial 3D Shape Optimization
With the simulation system $\tilde{\psi}$ and a black-box autonomy system $F$ under test, we can perform closed-loop testing (Sec. 3.1). Given the selected actor $A^i$ and the low-dimensional shape representation $z$, the next step is to optimize $z$ such that the overall autonomy performance drops significantly. We now introduce the adversarial objective and the search algorithms used to produce safety-critical shapes.

Adversarial Objective: We consider an adversarial objective that accounts for the entire autonomy stack, aiming to concurrently reduce the performance of each module. Taking instance-based modular autonomy as an example, the adversarial objective $C_t$ combines the perception loss $l_{det}$, the prediction loss $l_{pred}$, and the planning comfort cost $c_{plan}$ to measure overall autonomy performance. We increase perception errors by decreasing the confidence / intersection-over-union (IoU) of the true positive (TP) detections and increasing the confidence / IoU of the false positive (FP) proposals. For motion forecasting, we compute the average displacement error (ADE) of the multi-modal trajectory predictions. As the modified actor shape can also affect the perception and motion forecasting of other actors, we consider the average objective over all actors within the region of interest across all frames. In terms of planning, we calculate two costs that measure the comfort (jerk and lateral acceleration) of the driving plan at each snapshot time $t$. Formally, we have
$$C_t = l_{det}^t + \lambda_{pred}\, l_{pred}^t + \lambda_{plan}\, c_{plan}^t \tag{3}$$
$$l_{det}^t = -\alpha \sum_{TP} \mathrm{IoU}(B_t, \hat{B}_t) \cdot \mathrm{Conf}(\hat{B}_t) + \beta \sum_{FP} \big(1 - \mathrm{IoU}(B_t, \hat{B}_t)\big) \cdot \mathrm{Conf}(\hat{B}_t) \tag{4}$$
$$l_{pred}^t = \frac{1}{K} \sum_{i=1}^{K} \sum_{h=1}^{H} \big\| g_j^{t,h} - p_j^{t,h} \big\|_2, \qquad c_{plan}^t = c_{jerk}^t + c_{lat}^t \tag{5}$$
where $B_t$, $\hat{B}_t$ are the ground-truth and detected bounding boxes at time $t$, and $\alpha$, $\beta$ are coefficients balancing the TP and FP objectives. $g_j^{t,h}$, $p_j^{t,h}$ are the $h$-th ground-truth and predicted waypoints of actor $j$ at time $t$, and $H$ is the prediction horizon. Lastly, $c_{jerk}^t$ and $c_{lat}^t$ represent the jerk (m/s³) and lateral acceleration (m/s²) costs at time $t$. We aggregate these costs over time, $\sum_{t=1}^{T} C_t$, to obtain the final closed-loop evaluation cost. Note that our approach is general and can find challenging actor shapes for any autonomy system; we detail the objective function for an instance-free autonomy in the supplementary material.
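A compact sketch of the per-timestep cost of Eqs. 3-5 for a single actor follows. The averaging over actors in the region of interest is omitted, and the values of alpha, beta, and the lambda weights are not given in the excerpt, so the defaults below are assumptions.

```python
import numpy as np

def detection_loss(tp, fp, alpha=1.0, beta=1.0):
    """Eq. 4: suppress true positives, promote false positives.
    tp / fp are lists of (iou, confidence) pairs for matched / unmatched
    detections; alpha and beta balance the two terms."""
    l = -alpha * sum(iou * conf for iou, conf in tp)
    l += beta * sum((1.0 - iou) * conf for iou, conf in fp)
    return l

def prediction_loss(gt_traj, pred_trajs):
    """Eq. 5 (left), for one actor: displacement error of the K predicted
    modes against the GT future. gt_traj: (H, 2); pred_trajs: (K, H, 2)."""
    d = np.linalg.norm(pred_trajs - gt_traj[None], axis=-1)  # (K, H)
    return float(d.sum(axis=1).mean())                       # sum over H, mean over K

def step_cost(tp, fp, gt_traj, pred_trajs, jerk, lat_acc,
              lam_pred=1.0, lam_plan=1.0):
    """Eq. 3: per-timestep cost C_t combining the three modules. jerk
    (m/s^3) and lat_acc (m/s^2) are the comfort costs of the current plan."""
    return (detection_loss(tp, fp)
            + lam_pred * prediction_loss(gt_traj, pred_trajs)
            + lam_plan * (jerk + lat_acc))
```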
Black-box Search Algorithm: We apply black-box optimization since we aim to keep ADV3D generic to different modern autonomy systems (including non-differentiable modular autonomy systems). Inspired by existing works [56, 58], we adopt Bayesian optimization [81, 82] (BO) as the search algorithm, with the upper confidence bound [83] (UCB) as the acquisition function. Since the adversarial landscape is not locally smooth, we use a standard Gaussian process with a Matérn kernel. We also compare BO with other popular search algorithms, including grid search [84] (GS), random search [85, 86, 87] (RS), and blend search (BS) [88], as well as with a baseline that conducts brute-force search (BF) over the curated asset library.
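A self-contained sketch of the GP-UCB search described above, using scikit-learn's Gaussian process with a Matérn kernel. Maximizing the acquisition over sampled candidates is a simplification relative to typical BO implementations, and the kappa value is an assumed default. With n_init + n_iter = 100, this matches the per-scenario query budget used in the experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def gp_ucb_search(evaluate, z_min, z_max, n_init=10, n_iter=90,
                  n_candidates=2048, kappa=2.0, seed=0):
    """Bayesian optimization with a GP-UCB acquisition (Sec. 3.3).
    evaluate(z) runs one closed-loop rollout and returns the scenario
    cost to MAXIMIZE; z is searched inside the normalized latent bounds."""
    rng = np.random.default_rng(seed)
    dim = len(z_min)
    sample = lambda n: rng.uniform(z_min, z_max, size=(n, dim))
    Z = sample(n_init)                           # initial random design
    y = np.array([evaluate(z) for z in Z])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(Z, y)
        cand = sample(n_candidates)
        mu, sigma = gp.predict(cand, return_std=True)
        z_next = cand[np.argmax(mu + kappa * sigma)]  # UCB acquisition
        Z = np.vstack([Z, z_next])
        y = np.append(y, evaluate(z_next))
    return Z[np.argmax(y)], y.max()
```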
4 Experiments
We showcase applying ADV3D to generate safety-critical 3D actor shapes for autonomy-system testing on real-world scenarios. We first introduce the experimental setting in Sec. 4.1. Then, in Sec. 4.2, we demonstrate the importance of adversarial optimization through closed-loop simulation for the whole autonomy stack, and we show the realism of our adversarial shape representation. Finally, we investigate different attack configurations in closed loop, including attacks with multiple actors or reactive actors, and benchmark various black-box optimization algorithms.

Table 1: Evaluation of adversarial objects in the closed-loop setting.

Autonomy-A: Instance-based [26] + [24]
                  AP↑    Recall↑  minADE↓  meanADE↓  Planning Comfort↓   Driving Comfort↓
                  (%)    (%)      (L2)     (L2)      Lat.     Jerk       Lat.     Jerk
                                                     (m/s2)   (m/s3)     (m/s2)   (m/s3)
Original          88.2   89.4     2.14     4.90      0.203    0.336      0.194    0.331
Adv. open-loop    88.3   89.8     2.08     4.87      0.214    0.378      0.207    0.337
Adv. closed-loop  80.1   84.8     2.40     5.09      0.263    0.427      0.265    0.401

Autonomy-B: Instance-free [31] + [24]
                  Occupancy↑        Flow Grounded↑    Planning Comfort↓   Driving Comfort↓
                  mAP(%)  Soft-IoU  mAP(%)  Soft-IoU  Lat.     Jerk       Lat.     Jerk
                                                      (m/s2)   (m/s3)     (m/s2)   (m/s3)
Original          83.1    50.4      94.6    61.2      0.256    0.319      0.263    0.315
Adv. open-loop    85.7    53.2      96.3    65.5      0.260    0.451      0.279    0.424
Adv. closed-loop  78.8    45.9      90.1    55.7      0.302    0.456      0.308    0.431

Figure 4: Qualitative examples of adversarial shape generation in closed loop vs. open loop (left panel: an uncommon shape, a short and wide pickup truck, leads to degraded detection; right panel: problematic perception and prediction lead to uncomfortable driving; each panel compares Original, Adv. open-loop, and Adv. closed-loop). We highlight the modified actors using orange bboxes and show the generated adversarial shapes alongside. The perception and prediction failures are pointed out by green and blue arrows: the green arrows point to the green detected bounding boxes, and the blue arrows point to the predicted light-blue trajectories. The numbers on top of the green predicted bounding boxes are the detection confidence scores. The numbers beside the pink ego-truck denote the current velocity and acceleration.

4.1 Experimental Setup
Dataset: We evaluate our method on a real-world self-driving dataset, HighwayScenarios, which contains 100 curated driving snippets captured on a US highway, each with a duration of 20 seconds (200 frames, sampled at 10 Hz). The dataset is collected with multiple LiDARs; we annotate spatio-temporal actor tracks and leverage them to build the digital twin backgrounds and reconstructed assets. It covers a wide variety of map layouts, traffic densities, and behaviors.

Autonomy Systems: We evaluate two state-of-the-art interpretable multi-LiDAR-based autonomy systems. The first (Autonomy-A) uses an instance-based joint perception and prediction (P&P) model [26], which outputs actor bounding boxes and trajectories in birds-eye view (BEV). The second (Autonomy-B) uses an instance-free (occupancy-based) P&P model [31] that predicts the implicit BEV occupancy and flow for queried spatial points at current (detection) and future (prediction) timestamps. Both systems use a sampling-based motion planner [24], which samples lane-relative trajectories and chooses the trajectory with the lowest combined safety and comfort cost. We report evaluation results for the two autonomy systems in Table 1; for the remaining experiments, we only study the instance-based Autonomy-A.

Metrics: To investigate the effectiveness of ADV3D on downstream tasks, we measure various metrics to evaluate module-level performance, covering detection, prediction, and planning. For Autonomy-A, we use Average Precision (AP) and Recall to measure detection performance. For prediction, we measure the Average Displacement Error (ADE) to quantify trajectory forecasting performance. For Autonomy-B, we follow [31] and report mAP and Soft-IoU to measure the performance of occupancy and flow prediction. All metrics are averaged across the simulation horizon (T = 5 s) within the region of interest (ROI). We report planning comfort metrics, including lateral acceleration (Lat.) and jerk, which assess the smoothness of the proposed ego plan. To evaluate how the SDV executes in closed loop, we also evaluate system-level driving comfort metrics (i.e., lateral acceleration and jerk on the executed 5 s SDV trajectory). Finally, we report the Jensen–Shannon divergence [89] (JSD) to measure the realism of the generated actor shapes.

Table 2: Adversarial optimization for the full autonomy system. We compare ADV3D with three baselines that each only attack one module: detection (M1), prediction (M2), and planning (M3). Each baseline adopts the same pipeline as ADV3D except that the adversarial objective is changed. ADV3D generates actor shapes that are challenging to all downstream modules; interestingly, it results in more uncomfortable driving maneuvers (i.e., the worst lateral acceleration).

          attacked objective                 AP↑     Recall↑  minADE↓  meanADE↓  Lat.↓   Jerk↓
          Σt l_det  Σt l_pred  Σt c_plan    (%@0.5) (%@0.5)  (L2)     (L2)      (m/s2)  (m/s3)
Original                                     88.7    89.4     2.51     4.99      0.261   0.294
M1           ✓                               69.6    71.4     1.97     5.02      0.239   0.310
M2                     ✓                     83.1    89.1     2.92     6.34      0.254   0.412
M3                                ✓          86.7    88.3     2.94     6.03      0.324   0.434
Ours         ✓         ✓          ✓          75.4    76.4     2.82     6.21      0.411   0.410

Table 3: Comparison with vertex deformation (VD).
Algorithm       AP(%@0.5)  Recall(%@0.5)  minADE(L2)  Jerk(m/s3)  JSD
Original          98.7       99.6           4.70        0.090      –
VD: 0.05 m        98.7       99.6           5.07        0.090     0.061
VD: 0.1 m         98.7       99.6           5.73        0.090     0.125
VD: 0.2 m         98.7       99.6           5.78        0.090     0.285
VD: 0.5 m         80.2       83.0           6.00        0.090     0.688
VD: 1.0 m         46.6       49.8           7.20        0.163     0.796
ADV3D (ours)      50.3       55.8           7.87        0.111     0.175

Figure 5: Qualitative comparisons with VD (vertex-perturbation diagram; meshes for ADV3D and VD at 5, 10, 20, 50, and 100 cm).

4.2 Experimental Results
ADV3D finds challenging actor shapes: We report autonomy performance metrics on the 100 traffic scenarios with and without our adversarial shape attack in Table 1. For each scenario, we set the attack search query budget to 100 queries. Our approach degrades both the subsystem performance and the execution performance significantly. We further provide qualitative examples of optimized adversarial shapes together with the autonomy outputs in Fig. 4. In Fig. 4 (left), ADV3D successfully creates an uncommon actor shape (a short and wide truck) that degrades detection performance (low confidence or mis-detection). Fig. 4 (right) shows another example, where the closed-loop attack finds a tiny city car that causes inaccurate detection and prediction for an actor behind the SDV, resulting in the SDV applying a strong deceleration. Note that the modified actor shape alters the simulated LiDAR such that perception and prediction outputs are harmed even for other actors in the scene.

Importance of Closed-Loop Simulation: We also compare our approach, which finds challenging actor shapes during closed-loop autonomy evaluation, against optimizing actor shapes in the open-loop setting. In the open-loop shape attack, the ego vehicle follows the original trajectory from the recorded log, and we optimize the actor shape with the same objective as in Eq. 3. We then test the optimized actor shapes in closed-loop simulation, where the ego vehicle is controlled by the autonomy model. Generating adversarial objects in closed-loop simulation yields substantially worse autonomy performance than open-loop generation. This indicates that it is insufficient to study adversarial robustness in open loop, as the attacks do not generalize well when the SDV is reactive. In Fig. 4, the shapes optimized by the open-loop attack do not harm autonomy performance significantly.
Attacking the Full Autonomy Stack: Our adversarial objective takes the full autonomy stack into account. To demonstrate the importance of attacking the full system, we choose 10 logs from HighwayScenarios and compare with three baselines, inspired by existing works, that each only attack one module (detection: [5, 7], prediction: [49, 50], and planning: [59, 90]) for Autonomy-A. To reduce performance variability, for each scenario we perform 3 attacks, each with a budget of 100 queries, modify the closest 3 actors in front of the SDV individually, and report the worst-case performance. As shown in Table 2, attacking each downstream module produces challenging objects that are only risky to that module. In contrast, our model effectively balances all tasks to generate worst-case 3D objects that challenge the entire autonomy stack, serving as a holistic tool for identifying potential system failures. We report additional objective combinations in the supplementary material.

Latent Asset Representation: We also demonstrate that optimizing with our latent asset representation is more realistic than prior works that perform per-vertex deformation. We select a scenario where the modified actor mesh from ADV3D mounts a strong attack against the autonomy. We initialize the mesh by decoding the latent z to generate a mean shape with 500 faces and 252 vertices, and add perturbations to the vertices. The perturbations of the vertex coordinates are constrained by an l∞ norm, and we experiment with 4 variations [0.1 m, 0.2 m, 0.5 m, 1.0 m]. The results in Tab. 3 show that very loose constraints of 1 m are needed to achieve strong attack performance; however, the generated assets with loose constraints are unrealistic compared to the ADV3D-generated meshes. We hypothesize that this is because such high-dimensional vertex-based shape representations are difficult to optimize, as black-box optimization methods suffer from the curse of dimensionality.

Table 4: Adversarial optimization with multiple actors. m denotes the number of modified actors.
            Perception↑                        Prediction↓        Planning Comfort↓
            IoU(%)  Conf.(%)  AP      Recall   minADE  meanADE    Lat.     Jerk
                              (%@0.5) (%@0.5)  (L2)    (L2)       (m/s2)   (m/s3)
Original    69.2    88.8      92.7    88.7     2.51    4.99       0.261    0.294
m = 1       62.9    71.1      76.5    79.3     2.83    6.33       0.312    0.405
m = 2       61.7    70.1      71.8    75.1     2.96    6.30       0.352    0.460
m = 3       55.7    77.0      70.2    73.7     3.37    6.27       0.360    0.475
m = 5       50.1    70.9      64.5    67.7     3.39    6.34       0.377    0.535

Multiple Adversarial Actors: In Table 4, we study how ADV3D can easily scale to modify the shapes of multiple actors in the scenario. We use the same 10 logs as in Table 2, set m = [1, 2, 3, 5], and sample the closest actors in front of the ego vehicle. Modifying multiple actors' shapes simultaneously yields stronger adversarial attacks.

Table 5: Different black-box search algorithms.
Algorithm        AP(%@0.5)  minADE(L2)  Jerk(m/s3)  #Query  GPU Hour
Original           98.7       4.70        0.090       –        0.2
GS [84]            52.7       5.98        0.090      243      47.8
RS [85, 86, 87]    52.4       6.10        0.090      500      98.3
BS [88]            52.4       6.09        0.090      100      19.6
BO [82]            50.3       7.87        0.111      100      19.6
Brute-Force        52.7       7.91        0.090      746     146.6

Table 6: Attacks with reactive actors.
Setting             AP(%@0.5)  minADE(L2)  Jerk(m/s3)
Original              98.7       5.91        0.168
Adv. open-loop        59.3       5.23        0.140
Adv. closed-loop      52.1       6.86        0.335

Attack Configurations: We benchmark the other black-box search algorithms and brute-force search in Table 5. For the brute-force baseline, we iterate over all shapes in the asset library and select the one that results in the strongest attack. Table 5 shows that Bayesian optimization (BO) leads to the strongest adversarial attack while also using the least compute. We further investigate different attack configurations on one selected log in Table 6.
To increase the realism of our setting, we adopt reactive actors by using a traffic model [91] to control their behaviors. As shown in Table 6, ADV3D also generates challenging actor shapes in this setting, and the results again demonstrate the importance of closed-loop simulation.

5 Limitations and Conclusion
ADV3D's main limitation is that we do not optimize the actor behaviors like prior work [58, 59, 56], which would allow for more diverse adversarial scenario generation. Moreover, how to incorporate ADV3D-generated safety-critical objects into new scenarios for robust training remains future work. While our shapes are more realistic than those of prior work, we also observe occasional convergence to shapes that have artifacts or oblong wheels. Better shape representations (including for non-vehicle classes) and optimization approaches (e.g., multi-objective optimization) can help create higher-fidelity and more diverse adversarial objects more efficiently.

In this paper, we present a closed-loop adversarial framework that generates challenging 3D shapes for the full autonomy stack. Given a real-world traffic scenario, our approach modifies the geometries of nearby interactive actors, then runs realistic LiDAR simulation and modern autonomy models in closed loop. Extensive experiments on two modern autonomy systems highlight the importance of performing adversarial attacks through closed-loop simulation. We hope this work can provide useful insights for future adversarial robustness studies in the closed-loop setting.

Acknowledgement
We sincerely thank the anonymous reviewers for their insightful suggestions. We would like to thank Yun Chen, Andrei Bârsan, and Sergio Casas for the feedback on the early results and for proofreading. We also thank the Waabi team for their valuable assistance and support.

References
[1] A. Kendall, J. Hawke, D. Janz, P. Mazur, D. Reda, J.-M. Allen, V.-D. Lam, A. Bewley, and A. Shah. Learning to drive in a day. arXiv preprint arXiv:1807.00412, 2018.
[2] A. Sadat, S. Casas, M. Ren, X. Wu, P. Dhawan, and R. Urtasun. Perceive, predict, and plan: Safe motion planning through interpretable semantic representations. CoRR, abs/2008.05930, 2020.
[3] M. H. Danesh, P. Cai, and D. Hsu. LEADER: Learning attention over driving behaviors for planning under uncertainty. In Conference on Robot Learning, pages 199–211. PMLR, 2023.
[4] Y. Hu, J. Yang, L. Chen, K. Li, C. Sima, X. Zhu, S. Chai, S. Du, T. Lin, W. Wang, et al. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17853–17862, 2023.
[5] Y. Cao, C. Xiao, D. Yang, J. Fang, R. Yang, M. Liu, and B. Li. Adversarial objects against LiDAR-based autonomous driving systems. arXiv preprint arXiv:1907.05418, 2019.
[6] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao. Adversarial sensor attack on LiDAR-based perception in autonomous driving. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 2267–2281, 2019.
[7] J. Tu, M. Ren, S. Manivasagam, M. Liang, B. Yang, R. Du, F. Cheng, and R. Urtasun. Physically realizable adversarial examples for LiDAR object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13716–13725, 2020.
[8] J. Tu, H. Li, X. Yan, M. Ren, Y. Chen, M. Liang, E. Bitar, E. Yumer, and R. Urtasun. Exploring adversarial robustness of multi-sensor perception systems in self driving. arXiv preprint arXiv:2101.06784, 2021.
[10] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[11] F. Codevilla, M. Müller, A. López, V. Koltun, and A. Dosovitskiy. End-to-end driving via conditional imitation learning. In ICRA, 2018.
[12] A. Hu, G. Corrado, N. Griffiths, Z. Murez, C. Gurau, H. Yeo, A. Kendall, R. Cipolla, and J. Shotton. Model-based imitation learning for urban driving. Advances in Neural Information Processing Systems, 35:20703–20716, 2022.
[13] B. Yang, W. Luo, and R. Urtasun. Pixor: Real-time 3d object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7652–7660, 2018.
[14] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12697–12705, 2019.
[15] T. Yin, X. Zhou, and P. Krahenbuhl. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11784–11793, 2021.
[16] S. Shi, C. Guo, L. Jiang, Z. Wang, J. Shi, X. Wang, and H. Li. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10529–10538, 2020.
[17] J. Gao, C. Sun, H. Zhao, Y. Shen, D. Anguelov, C. Li, and C. Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11525–11533, 2020.
[18] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XVIII, pages 683–700. Springer, 2020.
[19] W. Zeng, M. Liang, R. Liao, and R. Urtasun. Lanercnn: Distributed representations for graph-centric motion forecasting. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 532–539. IEEE, 2021.
[20] A. Cui, S. Casas, K. Wong, S. Suo, and R. Urtasun. Gorela: Go relative for viewpoint-invariant motion forecasting. arXiv preprint arXiv:2211.02545, 2022.
[21] M. McNaughton, C. Urmson, J. M. Dolan, and J.-W. Lee. Motion planning for autonomous driving with a conformal spatiotemporal lattice. In 2011 IEEE International Conference on Robotics and Automation, pages 4889–4895. IEEE, 2011.
[22] H. Fan, F. Zhu, C. Liu, L. Zhang, L. Zhuang, D. Li, W. Zhu, J. Hu, H. Li, and Q. Kong. Baidu apollo em motion planner. arXiv preprint arXiv:1807.08048, 2018.
[23] N. Rhinehart, K. M. Kitani, and P. Vernaza. R2p2: A reparameterized pushforward policy for diverse, precise generative path forecasting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 772–788, 2018.
[24] A. Sadat, M. Ren, A. Pokrovsky, Y.-C. Lin, E. Yumer, and R. Urtasun. Jointly learnable behavior and trajectory planning for self-driving vehicles. IROS, 2019.
[25] W. Luo, B. Yang, and R. Urtasun. Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3569–3577, 2018.
[26] M. Liang, B. Yang, W. Zeng, Y. Chen, R. Hu, S. Casas, and R. Urtasun. Pnpnet: End-to-end perception and prediction with tracking in the loop. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11553–11562, 2020.
[27] W. Zeng, W. Luo, S. Suo, A. Sadat, B. Yang, S. Casas, and R. Urtasun. End-to-end interpretable neural motion planner. In CVPR, 2019.
[28] W. Zeng, S. Wang, R. Liao, Y. Chen, B. Yang, and R. Urtasun. Dsdnet: Deep structured self-driving network. CoRR, abs/2008.06041, 2020.
[29] S. Casas, A. Sadat, and R. Urtasun. Mp3: A unified model to map, perceive, predict and plan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14403–14412, 2021.
[30] R. Mahjourian, J. Kim, Y. Chai, M. Tan, B. Sapp, and D. Anguelov. Occupancy flow fields for motion forecasting in autonomous driving. IEEE Robotics and Automation Letters, 7(2):5639–5646, 2022.
[31] B. Agro, Q. Sykora, S. Casas, and R. Urtasun. Implicit occupancy flow fields for perception and prediction in self-driving. In CVPR, 2023.
[32] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[33] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[34] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer. Adversarial patch. CoRR, abs/1712.09665, 2017.
[35] A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok. Synthesizing robust adversarial examples. In ICML, 2018.
[36] J. Wang, T. Zhang, S. Liu, P.-Y. Chen, J. Xu, M. Fardad, and B. Li. Adversarial attack generation empowered by min-max optimization. Advances in Neural Information Processing Systems, 34:16020–16033, 2021.
[37] K. Xu, G. Zhang, S. Liu, Q. Fan, M. Sun, H. Chen, P. Chen, Y. Wang, and X. Lin. Evading real-time person detectors by adversarial t-shirt. CoRR, abs/1910.11099, 2019.
[38] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song. Robust physical-world attacks on deep learning visual classification. In CVPR, 2018.
[39] J. Wang, Y. Liu, and B. Li. Reinforcement learning with perturbed rewards. In Proceedings of the AAAI Conference on Artificial Intelligence, number 04, pages 6202–6209, 2020.
[40] J. Wang, H. Guo, Z. Zhu, and Y. Liu. Policy learning using weak supervision. Advances in Neural Information Processing Systems, 34:19960–19973, 2021.
[41] C. Xiao, D. Yang, B. Li, J. Deng, and M. Liu. Meshadv: Adversarial meshes for visual recognition. In CVPR, 2019.
[42] J. Tu, T. Wang, J. Wang, S. Manivasagam, M. Ren, and R. Urtasun. Adversarial attacks on multi-agent communication. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7768–7777, 2021.
[43] N. Vadivelu, M. Ren, J. Tu, J. Wang, and R. Urtasun. Learning to communicate and correct pose errors. In Conference on Robot Learning, pages 1195–1210. PMLR, 2021.
[44] H.-T. D. Liu, M. Tao, C.-L. Li, D. Nowrouzezahrai, and A. Jacobson. Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer. In International Conference on Learning Representations, 2019.
[45] Y. Cao, C. Xiao, B. Cyr, Y. Zhou, W. Park, S. Rampazzi, Q. A. Chen, K. Fu, and Z. M. Mao. Adversarial sensor attack on lidar-based perception in autonomous driving. In CCS, 2019.
[46] J. Sun, Y. Cao, Q. A. Chen, and Z. M. Mao. Towards robust lidar-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures. In USENIX Security Symposium, 2020.
[47] Y. Zhang, H. Foroosh, P. David, and B. Gong. CAMOU: Learning physical vehicle camouflages to adversarially attack detectors in the wild. In ICLR, 2019.
[48] T. Wu, X. Ning, W. Li, R. Huang, H. Yang, and Y. Wang. Physical adversarial attack on vehicle detector in the carla simulator. CoRR, abs/2007.16118, 2020.
[49] Y. Cao, C. Xiao, A. Anandkumar, D. Xu, and M. Pavone. Advdo: Realistic adversarial attacks for trajectory prediction. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part V, pages 36–52. Springer, 2022.
[50] Q. Zhang, S. Hu, J. Sun, Q. A. Chen, and Z. M. Mao. On adversarial robustness of trajectory prediction for autonomous vehicles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15159–15168, 2022.
[51] Y. Cao, D. Xu, X. Weng, Z. Mao, A. Anandkumar, C. Xiao, and M. Pavone. Robust trajectory prediction against adversarial attacks. In Conference on Robot Learning, pages 128–137. PMLR, 2023.
[52] B. Chen and L. Li. Adversarial evaluation of autonomous vehicles in lane-change scenarios. CoRR, abs/2004.06531, 2020.
[53] A. Wachi. Failure-scenario maker for rule-based agent using multi-agent adversarial reinforcement learning and its application to autonomous driving. In IJCAI, 2019.
[54] W. Ding, M. Xu, and D. Zhao. Learning to collide: An adaptive safety-critical scenarios generating method. CoRR, abs/2003.01197, 2020.
[55] M. Klischat and M. Althoff. Generating critical test scenarios for automated vehicles with evolutionary algorithms. In IV, 2019.
[56] Y. Abeysirigoonawardena, F. Shkurti, and G. Dudek. Generating adversarial driving scenarios in high-fidelity simulators. In ICRA, 2019.
[57] J. Norden, M. O'Kelly, and A. Sinha. Efficient black-box assessment of autonomous vehicle safety. arXiv preprint arXiv:1912.03618, 2019.
[58] J. Wang, A. Pun, J. Tu, S. Manivasagam, A. Sadat, S. Casas, M. Ren, and R. Urtasun. Advsim: Generating safety-critical scenarios for self-driving vehicles. In CVPR, 2021.
[59] D. Rempe, J. Philion, L. J. Guibas, S. Fidler, and O. Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17305–17315, 2022.
[60] F. U. Haq, D. Shin, S. Nejati, and L. Briand. Can offline testing of deep neural networks replace their online testing? A case study of automated driving systems. Empirical Software Engineering, 26(5):90, 2021.
[61] A. Boloor, K. Garimella, X. He, C. Gill, Y. Vorobeychik, and X. Zhang. Attacking vision-based perception in end-to-end autonomous driving models. Journal of Systems Architecture, 110:101766, 2020.
[62] N. Hanselmann, K. Renz, K. Chitta, A. Bhattacharyya, and A. Geiger. King: Generating safety-critical driving scenarios for robust imitation via kinematics gradients. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVIII, pages 335–352. Springer, 2022.
[63] S. Ramakrishna, B. Luo, C. B. Kuhn, G. Karsai, and A. Dubey. Anti-carla: An adversarial testing framework for autonomous vehicles in carla. In 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), pages 2620–2627. IEEE, 2022.
[64] A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun. Carla: An open urban driving simulator. Conference on Robot Learning, 2017.
[65] S. R. Richter, H. A. AlHaija, and V. Koltun. Enhancing photorealism enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(2):1700–1715, 2022.
[66] Z. Yang, Y. Chen, J. Wang, S. Manivasagam, W.-C. Ma, A. J. Yang, and R. Urtasun. Unisim: A neural closed-loop sensor simulator. In CVPR, 2023.
[67] Y. Chen, F. Rong, S. Duggal, S. Wang, X. Yan, S. Manivasagam, S. Xue, E. Yumer, and R. Urtasun. Geosim: Realistic video simulation via geometry-aware composition for self-driving. CVPR, 2021.
[68] J. Wang, S. Manivasagam, Y. Chen, Z. Yang, I. A. Bârsan, A. J. Yang, W.-C. Ma, and R. Urtasun. Cadsim: Robust and scalable in-the-wild 3d reconstruction for controllable sensor simulation. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=Mp3Y5jd7rnW.
[69] Z. Yang, S. Manivasagam, Y. Chen, J. Wang, R. Hu, and R. Urtasun. Reconstructing objects in-the-wild for realistic sensor simulation. In ICRA, 2023.
[70] J. Y. Liu, Y. Chen, Z. Yang, J. Wang, S. Manivasagam, and R. Urtasun. Neural scene rasterization for large scene rendering in real time. In The IEEE International Conference on Computer Vision (ICCV), 2023.
[71] S. Manivasagam, I. A. Bârsan, J. Wang, Z. Yang, and R. Urtasun. Towards zero domain gap: A comprehensive study of realistic lidar simulation for autonomy testing. In ICCV, 2023.
[72] S. Manivasagam, S. Wang, K. Wong, W. Zeng, M. Sazanovich, S. Tan, B. Yang, W.-C. Ma, and R. Urtasun. Lidarsim: Realistic lidar simulation by leveraging the real world. In CVPR, 2020.
[73] Z. Yang, Y. Chai, D. Anguelov, Y. Zhou, P. Sun, D. Erhan, S. Rafferty, and H. Kretzschmar. Surfelgan: Synthesizing realistic sensor data for autonomous driving. CVPR, 2020.
[74] TurboSquid. https://www.turbosquid.com, Access date: 2023-05-17.
[75] J. Fang, D. Zhou, F. Yan, T. Zhao, F. Zhang, Y. Ma, L. Wang, and R. Yang. Augmented lidar simulator for autonomous driving. IEEE Robotics and Automation Letters, 2020.
[76] J. Fang, X. Zuo, D. Zhou, S. Jin, S. Wang, and L. Zhang. Lidar-aug: A general rendering-based augmentation framework for 3d object detection. In CVPR, 2021.
[77] F. Engelmann, J. Stückler, and B. Leibe. SAMP: Shape and motion priors for 4d vehicle reconstruction. In WACV, 2017.
[78] K. Pearson. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2(11):559–572, 1901.
[79] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3d surface construction algorithm. ACM SIGGRAPH Computer Graphics, 21(4):163–169, 1987.
[80] F. Lu, Z. Liu, X. Song, D. Zhou, W. Li, H. Miao, M. Liao, L. Zhang, B. Zhou, R. Yang, et al. Permo: Perceiving more at once from a single image for autonomous driving. arXiv preprint arXiv:2007.08116, 2020.
[81] J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms. Advances in Neural Information Processing Systems, 25, 2012.
[82] B. Ru, A. Cobb, A. Blaas, and Y. Gal. Bayesopt adversarial attack. In International Conference on Learning Representations, 2020.
[83] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
[84] P. Liashchynskyi and P. Liashchynskyi. Grid search, random search, genetic algorithm: A big comparison for nas. arXiv preprint arXiv:1912.06059, 2019.
[85] C. Guo, J. Gardner, Y. You, A. G. Wilson, and K. Weinberger. Simple black-box adversarial attacks. In International Conference on Machine Learning, pages 2484–2493. PMLR, 2019.
[86] M. Andriushchenko, F. Croce, N. Flammarion, and M. Hein. Square attack: A query-efficient black-box adversarial attack via random search. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII, pages 484–501. Springer, 2020.
[87] J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(2), 2012.
[88] C. Wang, Q. Wu, S. Huang, and A. Saied. Economic hyperparameter optimization with blended search strategy. In International Conference on Learning Representations, 2021.
[89] V. Zyrianov, X. Zhu, and S. Wang. Learning to generate realistic lidar point clouds. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIII, pages 17–35. Springer, 2022.
[90] W. Ding, B. Chen, B. Li, K. J. Eun, and D. Zhao. Multimodal safety-critical scenarios generation for decision-making algorithms evaluation. IEEE Robotics and Automation Letters, 6(2):1551–1558, 2021.
[91] S. Suo, K. Wong, J. Xu, J. Tu, A. Cui, S. Casas, and R. Urtasun. Mixsim: A hierarchical framework for mixed reality traffic simulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9622–9631, 2023.
[92] H. Thomas, B. Agro, M. Gridseth, J. Zhang, and T. D. Barfoot. Self-supervised learning of lidar segmentation for autonomous indoor navigation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 14047–14053. IEEE, 2021.
[93] P. Polack, F. Altché, B. d'Andréa Novel, and A. de La Fortelle. The kinematic bicycle model: A consistent model for planning feasible trajectories for autonomous vehicles? In 2017 IEEE Intelligent Vehicles Symposium (IV), pages 812–818. IEEE, 2017.
[94] Y. Xiong, W.-C. Ma, J. Wang, and R. Urtasun. Learning compact representations for lidar completion and generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1074–1083, 2023.
[95] J. Kim, R. Mahjourian, S. Ettinger, M. Bansal, B. White, B. Sapp, and D. Anguelov. Stopnet: Scalable trajectory and occupancy prediction for urban autonomous driving. In 2022 International Conference on Robotics and Automation (ICRA), pages 8957–8963. IEEE, 2022.
[96] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397–422, 2002.
[97] R. Liaw, E. Liang, R. Nishihara, P. Moritz, J. E. Gonzalez, and I. Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.

Appendix

We provide additional details on our method, implementation and experimental setups, and then show additional quantitative and qualitative results. We first detail how we implement ADV3D, including how to build digital twins for realistic LiDAR simulation (Sec. A.1), how to create a low-dimensional representation space (Sec. A.2) for adversarial shapes, the details of two modern autonomy models (Sec. A.3), the adversarial optimization procedure (Sec. A.4) and more experimental details.
Finally, we provide additional results and analysis in Sec. B, including full closed-loop and open-loop results for the major tables in the main paper, additional experiments, and additional qualitative examples. Additionally, we include a supplementary video at https://waabi.ai/adv3d/, providing an overview of our methodology, as well as video results on generated adversarial shapes and how they affect the autonomy performance in different scenarios.

A ADV3D Implementation Details

A.1 Realistic LiDAR Simulation

Following [72, 73], we leverage real-world LiDAR data and object annotations to build surfel meshes (textured by per-point intensity value) for the virtual world. For complete background coverage, we drove through the same scene to collect multiple sets of driving data and then unify the multiple LiDAR sweeps into a standard map coordinate system. We then aggregate LiDAR points from all frames and apply a dynamic point removal algorithm [92] to keep only static points and reconstruct the background B. For dynamic actors, we aggregate the LiDAR points within object-centric coordinate bounding boxes for each labeled driving snippet. We then symmetrize the aggregated points along the vehicle's heading axis for a more complete shape. Given the aggregated points, we estimate per-point normals from 200 nearest neighbors with a radius of 20 cm, and orient the normals upwards for flat ground reconstruction and outwards for more complete dynamic actors. We downsample the LiDAR points into 4 cm³ voxels and create per-point triangle faces (radius 5 cm) according to the estimated normals. Due to sparse observations for most aggregated surfel meshes, we manually curated a set of actor meshes that have complete and clean geometry, together with CAD assets purchased from TurboSquid [74] for a larger asset variety.

A.2 Adversarial Shape Representation

To ensure realism and watertight manifolds, we use all the CAD cars (sedan, sports car, SUV, van, and pickup trucks) to build the low-dimensional representation. We first rescale all actors to fit in a unit cube with a pre-computed scaling factor (1.1× the largest dimension over all actors). Then we convert the input meshes to volumetric signed distance fields (SDF) with a resolution of 100 (i.e., |L| = 100³) using an open-source library². We then apply principal component analysis [78] on the flattened SDF values Φ ∈ R^{|L|×1} to obtain the latent representation. Specifically, we use K = 3 principal components to construct the latent space. In practice, we find that a larger K captures more high-frequency details, but the interpolated shapes become less realistic. This is because the non-major components usually capture individual details instead of properties shared across all vehicles.

During optimization, we first obtain the minimum and maximum latent range z_min ∈ R³, z_max ∈ R³, where the minimum and maximum value for each latent dimension is recorded. We then optimize a unit vector z̄ ∈ R⁵, where the first three dimensions are for PCA reconstruction, and the last two dimensions indicate the scale values for width and length (ranging from 0.8 to 1.3). The first three dimensions are rescaled to [z_min, z_max]. Given the optimized latent z, we apply Equation (2) in the main paper to obtain the generated 3D SDF volumes, and then extract the meshes using the marching cubes [79] algorithm. The extracted meshes are then scaled to real-world size and placed in the virtual world for simulation.

² https://github.com/wang-ps/mesh2sdf
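This latent construction is straightforward to prototype. Below is a minimal sketch assuming scikit-learn and scikit-image, with the SDF grids precomputed from the CAD library (e.g., via mesh2sdf); the `sdf_volumes` input and function names are illustrative, and the two extra width/length scale dimensions and the specifics of Equation (2) are omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.measure import marching_cubes

def build_latent_space(sdf_volumes, k=3):
    """sdf_volumes: (N, 100, 100, 100) SDFs of the CAD assets (assumed)."""
    flat = sdf_volumes.reshape(len(sdf_volumes), -1)   # (N, |L|)
    pca = PCA(n_components=k)
    latents = pca.fit_transform(flat)                  # (N, k)
    z_min, z_max = latents.min(0), latents.max(0)      # per-dim latent range
    return pca, z_min, z_max

def decode_latent(pca, z_unit, z_min, z_max, res=100):
    """Map a unit-cube latent into the observed range, reconstruct the SDF
    via the PCA basis, and extract the zero level set as a mesh."""
    z = z_min + np.asarray(z_unit) * (z_max - z_min)
    sdf = pca.inverse_transform(z[None]).reshape(res, res, res)
    verts, faces, _, _ = marching_cubes(sdf, level=0.0)
    return verts, faces
```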
A.3 LiDAR-Based Autonomy Details

Both autonomy systems tested consist of two parts: the first part uses different joint perception and prediction models, and the second part shares the same rule-based planner [24]. We assume perfect control, where the ego vehicle can follow the planned trajectory (kinematic bicycle model) exactly. In other words, the driver command policy brings the ego vehicle to the next state (10 Hz) by executing the current planned trajectory for 0.1 s. Taking the instance-based autonomy as an example, given the LiDAR points, the perception and prediction models output the actor locations at current and future timestamps. Then the sampling-based motion planner [24] produces the optimal plan that follows the kinematic bicycle model [93]. All the models are trained on highway driving datasets.

Autonomy-A (Instance-Based): We implement a variant of a joint P&P model [26] to perform instance-based joint detection and trajectory prediction. For the 3D object detection part, we use a modified two-stage PIXOR [13] model following [94], which takes voxelized LiDAR point clouds as input and outputs the BEV bounding box parameters for each object. For the trajectory prediction part, we use a model that takes the lane graph and detection results as input and outputs the per-timestep endpoint prediction for each object. The prediction time horizon is set to 6 seconds.

Autonomy-B (Instance-Free): We also verify our method on an instance-free autonomy system [31] for joint detection and motion forecasting, to show its generalizability. Specifically, we replace the P&P model used in Autonomy-A with the occupancy-based model, which performs non-parametric binary occupancy prediction as perception results and flow prediction as motion forecasting results for each point in a query point set. The occupancy and flow predictions then serve as the input for the sampling-based planner to perform motion planning.

A.4 Adversarial Optimization Details

Adversarial Objectives: The adversarial objective is given in Eqn. (3-5) in the main paper, where we set λ_pred = 0.1 and λ_plan = 0.5 for Autonomy-A. The adversarial objective for Autonomy-B also includes three terms: l_det, l_pred and l_plan, and we keep l_plan as is since the PLT model we use is the same as in Autonomy-A. However, the ImplicitO model in Autonomy-B does not produce instance-level bounding box results, so the confidence score and IoU terms are no longer applicable. We thus follow [31, 95] and use the Soft-IoU metric to assess occupancy predictions. Similarly, we use the foreground mean end-point error (EPE) to measure the average L2 flow error at each occupied query point as done in [31], since instance-based trajectory prediction is not available. Formally, the adversarial objective for Autonomy-B is defined as:

$$C^t = l_{\mathrm{det}}^t + \lambda_{\mathrm{pred}}\, l_{\mathrm{pred}}^t + \lambda_{\mathrm{plan}}\, c_{\mathrm{plan}}^t, \tag{6}$$

$$l_{\mathrm{det}}^t = -\frac{\sum_{q \in Q} o(q)\,\hat{o}(q)}{\sum_{q \in Q} \left( o(q) + \hat{o}(q) - o(q)\,\hat{o}(q) \right)}, \tag{7}$$

$$l_{\mathrm{pred}}^t = \frac{1}{\sum_{q \in Q} o(q)} \sum_{q \in Q} o(q)\, \lVert f(q) - \hat{f}(q) \rVert_2, \qquad c_{\mathrm{plan}}^t = c_{\mathrm{jerk}}^t + c_{\mathrm{lat}}^t, \tag{8}$$

where $Q$ is the query point set, and $o(q)$ and $\hat{o}(q)$ are the ground-truth and predicted binary occupancy values $\in [0, 1]$ at query point $q$, respectively. The flow vector $f: \mathbb{R}^3 \to \mathbb{R}^2$ and the corresponding prediction $\hat{f}$ specify the BEV motion of any agent that occupies that location.
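For concreteness, here is a minimal PyTorch sketch of the per-timestep Autonomy-B objective in Eq. (6)-(8). The tensor shapes and the precomputed planning cost terms (`jerk_cost`, `lat_cost`) are assumptions, not the released interface.

```python
import torch

def autonomy_b_objective(o, o_hat, f, f_hat, jerk_cost, lat_cost,
                         lam_pred=1.0, lam_plan=0.5, eps=1e-6):
    """o, o_hat: (Q,) ground-truth / predicted occupancy in [0, 1];
    f, f_hat: (Q, 2) ground-truth / predicted BEV flow at query points."""
    # Eq. (7): negative Soft-IoU over the query set (worse detection
    # quality yields a larger adversarial reward, hence the minus sign)
    l_det = -(o * o_hat).sum() / ((o + o_hat - o * o_hat).sum() + eps)
    # Eq. (8), left: foreground end-point error, averaged over occupied points
    l_pred = (o * (f - f_hat).norm(dim=-1)).sum() / (o.sum() + eps)
    # Eq. (8), right: planning discomfort terms (jerk + lateral acceleration)
    c_plan = jerk_cost + lat_cost
    # Eq. (6): weighted combination of the three submodule costs
    return l_det + lam_pred * l_pred + lam_plan * c_plan
```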
We set λ_pred = 1.0 and λ_plan = 0.5 for Autonomy-B.

Black-box Optimization Details: To handle different modern autonomy systems and the non-differentiable LiDAR simulator, we adopt black-box optimization in ADV3D. Inspired by existing works [56, 58, 61], we adopt Bayesian Optimization [81, 82] (BO) as the search algorithm, which maintains a surrogate model and selects the next query candidate based on historical observations and an acquisition function. Specifically, we use a standard Gaussian process (GP) model with the Upper Confidence Bound [96, 83] (UCB) acquisition function. We set the exploration multiplier β = 1.0 to balance exploitation and exploration. Since the adversarial landscape is not locally smooth, we use the Matérn 3/2 kernel (a product over each dimension, with a length scale of 0.1) for the GP model. Unless stated otherwise, we set the total query budget to 100, with the first 11 queries used for initialization.

We also benchmark other popular black-box algorithms, including grid search [84] (GS), random search [85, 86, 87] (RS) and blend search (BS) [88]. For GS, we set 3 search points per dimension, for a total of 3⁵ = 243 queries. For random search, we set the query budget to 500 to achieve better performance. BS is an economical hyperparameter optimization algorithm that combines local search and global search; we adopt the official implementation³. We also compare against a baseline that brute-forces (BF) over the curated asset library of 746 vehicles to find the worst-case actor shape. Our optimization pipeline is built on the Ray Tune framework [97].

³ https://github.com/microsoft/FLAML
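The search loop itself is a standard budgeted BO. The paper's pipeline is built on Ray Tune; for illustration only, here is a minimal sketch using scikit-optimize's gp_minimize (an assumption, whose default kernel and acquisition parameters differ from the exact GP/UCB configuration above). `simulate_closed_loop` is a hypothetical stand-in for the LiDAR simulation + autonomy rollout.

```python
import numpy as np
from skopt import gp_minimize

N_INIT, BUDGET = 11, 100

def attack_score(z_unit):
    """Run one closed-loop rollout with the candidate shape and return a
    value to MINIMIZE (negative adversarial cost, since BO minimizes)."""
    total_cost = simulate_closed_loop(np.asarray(z_unit))  # sum of C^t
    return -total_cost

result = gp_minimize(
    attack_score,
    dimensions=[(0.0, 1.0)] * 5,   # 3 PCA dims + width/length scales
    acq_func="LCB",                # confidence-bound acquisition on -cost
    n_initial_points=N_INIT,       # 11 random queries for GP initialization
    n_calls=BUDGET,                # total query budget of 100
    random_state=0,
)
worst_case_latent = result.x       # strongest adversarial shape found
```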
A.5 Additional Experimental Details

Realism Evaluation for Generated Shapes: We evaluate the realism of ADV3D using the Jensen-Shannon divergence [89] (JSD) between shapes generated by ADV3D and by vertex deformation (VD). Specifically, we calculate the JSD by uniformly sampling point clouds of 1000 points from the optimized shapes and comparing their birds-eye-view 2D histograms (resolution of 100×100) against those of all CAD models in our asset set.

A.6 Additional Discussions

Offroad or Collision Evaluation for Planning? We do not adopt offroad and collision as adversarial objectives or evaluation metrics. This is because the planner [24] leverages map information to prevent offroad trajectories and ensure safe driving. Moreover, adding a collision loss requires careful formulation to provide stable supervision (such as through actor-ego distance or time-to-collision), as the vanilla implementation results in a sparse and noisy objective function to optimize. To ease optimization stability and enable fast convergence, we designed our objective function to leverage losses that can be easily computed for perception, prediction, and planning at each timestep over the full scenario. We also do not modify actor behavior, making it challenging to generate collision outcomes for the SDV in real-world highway driving scenarios.

Gradient-based attacks for ADV3D? Gradient-based attacks require fully differentiable autonomy and simulation systems. Since the goal of ADV3D is to cater to any autonomy system, not just those that are fully differentiable, we chose the black-box algorithm BO to ensure broader applicability. For systems that are completely differentiable, one may pursue gradient-based white-box attacks with a differentiable simulator, which can be stronger and more efficient.

B Additional Results and Analysis

Attacking the Full Autonomy Stack: To take into account the full autonomy stack, we find it important to use an adversarial objective that combines the submodule costs. In Tab. A7, we provide the full results for Tab. 3 in the main paper, including the missing combinations M4 and M5. Moreover, we also compare with the closed-loop results using shapes generated by the open-loop attack.

Latent Asset Representation: We repeat the experiment from Tab. 3, but now using a lower-density 100-triangle mesh, which has a lower dimension and thus can be optimized with BO. Results in Tab. A8 show that large vertex deformation is required to achieve attack strength similar to ADV3D. Moreover, the vertex-deformed actors are overly simplified and unrealistic, with noticeable artifacts.

#ID | Opt. Setting | Σt l^t_det | Σt l^t_pred | Σt c^t_plan | AP↑ (%,@0.5) | Recall↑ (%,@0.5) | minADE↓ | meanADE↓ | Lat.↓ (m/s²) | Jerk↓ (m/s³)
–   | Original     |    |    |    | 88.7 | 89.4 | 2.51 | 4.99 | 0.261 | 0.294
M1  | Open-Loop    | ✓  |    |    | 80.4 | 87.6 | 2.01 | 4.95 | 0.256 | 0.301
M1  | Closed-Loop  | ✓  |    |    | 69.6 | 71.4 | 1.97 | 5.02 | 0.239 | 0.310
M2  | Open-Loop    |    | ✓  |    | 83.5 | 88.5 | 2.52 | 5.39 | 0.223 | 0.341
M2  | Closed-Loop  |    | ✓  |    | 83.1 | 89.1 | 2.92 | 6.34 | 0.254 | 0.412
M3  | Open-Loop    |    |    | ✓  | 87.2 | 88.8 | 2.57 | 5.38 | 0.305 | 0.352
M3  | Closed-Loop  |    |    | ✓  | 86.7 | 88.3 | 2.94 | 6.03 | 0.324 | 0.434
M4  | Open-Loop    | ✓  | ✓  |    | 79.9 | 85.9 | 2.57 | 5.35 | 0.231 | 0.353
M4  | Closed-Loop  | ✓  | ✓  |    | 70.1 | 78.8 | 2.90 | 5.98 | 0.223 | 0.401
M5  | Open-Loop    | ✓  |    | ✓  | 81.2 | 84.3 | 2.57 | 5.60 | 0.333 | 0.253
M5  | Closed-Loop  | ✓  |    | ✓  | 72.3 | 75.0 | 2.95 | 6.04 | 0.342 | 0.401
M0  | Open-Loop    | ✓  | ✓  | ✓  | 85.5 | 87.7 | 2.73 | 5.99 | 0.262 | 0.372
M0  | Closed-Loop  | ✓  | ✓  | ✓  | 75.4 | 76.4 | 2.82 | 6.21 | 0.411 | 0.410

Table A7: Full table of adversarial optimization for the full autonomy stack. Unlike existing works that consider sub-modules, ADV3D generates actor shapes that are challenging to all downstream modules.

Algorithms    | AP↑ (%,@0.5) | Recall↑ (%,@0.5) | minADE↓ (L2 error) | Jerk↓ (m/s³) | JSD
Original      | 98.7         | 99.6             | 4.70               | 0.090        | –
VD: 0.05m     | 98.7         | 99.6             | 5.04               | 0.090        | 0.057
VD: 0.1m      | 98.7         | 99.6             | 5.10               | 0.090        | 0.137
VD: 0.2m      | 98.7         | 99.6             | 5.16               | 0.090        | 0.253
VD: 0.5m      | 78.3         | 81.2             | 5.77               | 0.103        | 0.745
VD: 1.0m      | 45.5         | 48.5             | 5.79               | 0.155        | 0.758
ADV3D (ours)  | 50.3         | 55.8             | 7.87               | 0.111        | 0.175

Table A8: Comparison with vertex deformation.

Figure A6: Qualitative comparisons with Vertex-Deformation (VD). Top and bottom show side and top-down views respectively.

Additional Qualitative Examples: We provide more qualitative examples in Figures A7, A8 and A9 to show that ADV3D is able to generate safety-critical actor shapes for autonomy testing with appearance coverage.

Figure A7: Qualitative examples of adversarial shape generation (non-reactive actors vs. reactive actors). ADV3D is able to generate adversarial actors that cause detection failures due to occlusion.

Figure A8: Qualitative examples of adversarial shape generation in closed loop vs. open loop. ADV3D is able to generate adversarial actors that cause detection failures due to occlusion, while the open-loop counterpart fails.
Figure A9: Qualitative examples of adversarial shape generation with multi-actor attacks. ADV3D creates three safety-critical shapes that cause detection failures for all front actors.
0hPkttoGAf | RVT: Robotic View Transformer for 3D ObjectManipulationAnkit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter FoxNVIDIAAbstract: For 3D object manipulation, methods that build an explicit 3D rep-resentation perform better than those relying only on camera images. But usingexplicit 3D representations like voxels comes at large computing cost, adverselyaffecting scalability. In this work, we propose RVT, a multi-view transformer for3D manipulation that is both scalable and accurate. Some key features of RVT arean attention mechanism to aggregate information across views and re-renderingof the camera input from virtual views around the robot workspace. In simula-tions, we find that a single RVT model works well across 18 RLBench tasks with249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct). It also trains 36X faster than PerAct for achieving thesame performance and achieves 2.3X the inference speed of PerAct. Further, RVTcan perform a variety of manipulation tasks in the real world with just a few ( ∼10)demonstrations per task. Visual results, code, and trained model are provided at:https://robotic-view-transformer.github.io/.Keywords: 3D Manipulation, Multi-View, Transformer1 IntroductionA fundamental goal of robot learning is to build systems that can solve various manipulation tasks inunconstrained 3D settings. A popular class of learning methods directly processes image(s) viewedfrom single or multiple cameras. These view-based methods have achieved impressive success on avariety of pick-and-place and object rearrangement tasks [1, 2, 3, 4]. However, their success on tasksthat require 3D reasoning has been limited. As shown by James et al. [5] and Shridhar et al. [6],view-based methods struggle at 3D manipulation tasks on RLBench [7] with less than 2% success.012345678910111213141516Training Time (in Days)0102030405060Avg. Success36X Faster1.26XRVTPerActFigure 1: RVT scales and performs betterthan PerAct on RLBench, achieving on-par performance in 36X less time (samehardware), and 1.26X peak performance.To address this, methods have been proposed that rea-son with explicit 3D representations of the scene. C2F-ARM [5] represents the scene with multi-resolutionvoxels and achieves strong performance on difficultRLBench tasks. PerAct [6] improves upon C2F-ARMin behavior cloning by using perceiver transformer [8]to process voxels. However, creating and reasoningover voxels comes at a higher computing cost com-pared to reasoning over images, since the number ofvoxels scales cubicly with the resolution as opposedto squarely for image pixels. This makes voxel-basedmethods less scalable compared to their view-basedcounterparts. In fact, training PerAct on 18 RLBenchtasks takes 16 days using 8 V100 GPUs (3072 GPUhours). This hinders fast development and prototyp-ing. Moreover, such computing requirements become even more prohibitive when scaling to largerdatasets with more tasks and diversity.7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.Hence, a key question is – can we build a manipulation network that not only performs well butalso inherits the scalability of view-based methods? To this end, we propose RVT ( Robotic ViewTransformer) that significantly outperforms the SOTA voxel-based method both in terms of successrate and training time, as shown in Fig. 1. 
With the same hardware, RVT achieves the peak performance of PerAct in 36X less time, decreasing the training time from 14 days to just 10 hours. Apart from being much faster to train, RVT also achieves a 26% higher success rate than PerAct, averaged over 18 tasks (249 task variations) on RLBench. RVT outperforms PerAct on 88.9% of tasks on RLBench while achieving 2.3X the inference speed (11.6 vs 4.9 fps). Further, we find that RVT also works well in the real world, where with only 51 demonstrations, a single RVT model can learn to perform a variety of manipulation tasks (5 tasks, 13 variations) like opening a drawer, placing objects on a shelf, pressing hand sanitizer, and stacking objects (see Fig. 4).

At its core, RVT is a view-based method that leverages the transformer architecture. It jointly attends over multiple views of the scene and aggregates information across the views. It then produces view-wise heatmaps and features that are used to predict the robot end-effector pose. We extensively explore the design of the multi-view architecture and report several useful findings. For example, we observe better performance when enforcing the transformer to first attend over patches within the same image before concatenating the patches for joint attention.

Another key innovation is that, unlike prior view-based methods, we decouple the camera images from the images fed to the transformer, by re-rendering the images from virtual views. This allows us to control the rendering process and leads to several benefits. For example, we can re-render from viewpoints that are useful for the task (e.g., directly above the table) while not being restricted by real-world physical constraints. Also, since the multi-view input to RVT is obtained via re-rendering, we can use RVT even with a single sensor camera, as done in our real-world experiments.

To summarize, our contributions are threefold: first, we propose RVT, a multi-view transformer for 3D object manipulation that is accurate and scalable; second, we investigate various design choices for the multi-view transformer that lead to better object manipulation performance; and finally, we present an empirical study for multi-task object manipulation in simulation and the real world.

2 Related Work

Vision-based Object Manipulation. The learning of robotic control policies has traditionally been studied with low-dimensional state observations [9, 10, 11, 12, 13]. Recently, vision-based policies [14, 15, 16, 17, 18, 19, 20, 21] have gained increasing attention, since the high-dimensional visual sensory input provides a more generalizable observation representation across tasks and is more accessible in real-world perception systems. Various forms of visual input have been explored. Prior work has directly encoded the RGB images into a low-dimensional latent space and relied on model-based [22, 23] or model-free [24, 25] reinforcement learning (RL) to train policies that operate in this space. More recently, RT-1 [26] infers the robot's actions from a history of images by leveraging transformer architectures [27]. Our proposed RVT also uses a transformer to predict actions; however, unlike RT-1, we additionally leverage depth to construct a multi-view scene representation. The use of depth input has also been extensively studied. Methods like CLIPort [3] and IFOR [1] directly process the RGB-D images for object manipulation, and hence are limited to simple pick-and-place tasks in 2D top-down settings.
To overcome this issue, explicit 3D representations such as point clouds have been utilized. C2F-ARM [5] and PerAct [6] voxelize the point clouds and use a 3D convolutional network as the backbone for control inference. However, high-precision tasks typically require high-resolution voxelization, resulting in high memory consumption and slow training. Our approach falls into this category but addresses the scalability issue by transforming the point cloud into a set of RGB-D images from multiple views. We show that this significantly improves memory footprint and training efficiency, and leads to higher performance when compared to directly working with RGB(-D) or point cloud input (see Table 1). Another relevant work is MIRA [28], which also uses novel-view images to represent the 3D scene for action inference. MIRA achieves this by implicitly constructing a neural radiance field (NeRF) representation of the scene from a set of RGB images and then generating novel view images from the optimized NeRF model. However, the requirement of optimizing a scene NeRF model slows down the inference speed at test time and relies on RGB images from a dense set of views as input. In contrast, our approach achieves significantly faster inference and can work with even a single-view RGB image.

Figure 2: Overview of RVT. Given RGB-D from sensor(s), we first construct a point cloud of the scene. The point cloud is then used to produce virtual images around the robot workspace. The virtual images are fed to a multi-view transformer model to predict view-specific features, which are then combined to predict action in 3D. (Panels: a. Sensor Data; b. Point Cloud and Virtual Cameras; c. Virtual Images; d. Predicted Heatmaps; e. Prediction in 3D.)

Multi-Task Learning in Robotics. Learning a single model for many different tasks has been of particular interest to the robotics community recently. A large volume of work achieves multi-task generalization by using a generalizable task or action representation such as object point clouds [18, 19], semantic segmentation and optical flow [1], and object-centric representations [29, 30]. However, the limited expressiveness of such representations constrains them to generalize only within a task category. Task parameterization [31, 32] and discrete task-dependent output layer [33, 34] approaches have been investigated with reinforcement learning to learn policies for tasks in different categories. With the recent breakthrough in large language models, multi-task robot learning has been approached by using natural language to specify a broad range of tasks and learning the policy from large pre-collected datasets [35, 26, 36, 2, 37, 38, 39, 40, 41]. We are inspired by this success but propose to learn language-conditioned multi-task policies with a small demonstration dataset.

Transformers for Object Manipulation. The success of transformers in vision and NLP has led its way into robot learning [42, 43, 44, 17, 45, 46]. Especially in object manipulation, transformer-based models with an attention mechanism can be utilized to extract features from sensory inputs to improve policy learning [47, 48, 49, 50, 51]. Unlike most prior work, we do not use large datasets for training. RVT efficiently learns from a small set of demonstrations, handles multiple views as visual inputs, and fuses information from language goals to tackle multiple manipulation tasks.

Multi-View Networks in Computer Vision. Multi-view representations have been explored in various vision problems.
For point cloud recognition, SimpleView [52] showed how a simple view-based method can outperform sophisticated point-based methods. Follow-up works like MVTN [53] and Voint Cloud [54] further improved upon SimpleView. Multi-view representations have also been used for other problems like 3D visual grounding [55], view synthesis [56] and depth prediction [57]. Unlike them, we focus on the problem of predicting robot actions for object manipulation.

3 Method

Our goal is to learn a single model that can complete a wide range of manipulation tasks. The input consists of (1) a language description of the task, (2) the current visual state (from RGB-D camera(s)), and (3) the current gripper state (open or closed). The model should predict an action, specified by a target end-effector pose and gripper state at the next key-frame. The key-frames represent important or bottleneck steps of the gripper during the task execution [58], such as a pre-pick, grasp, or place pose. Given a target end-effector pose, we assume a low-level motion planner and controller that can move the end effector to the target pose. To train the model, we assume a dataset $D = \{D_1, D_2, \dots, D_n\}$ of $n$ expert demonstrations covering various tasks is given. Each demonstration $D_i = (\{o^i_{1...m_i}\}, \{a^i_{1...m_i}\}, l_i)$ is a successful roll-out of length $m_i$, where $l_i$ is the language description of the task, $\{o^i_1, o^i_2, \dots, o^i_{m_i}\}$ is a sequence of observations from RGB-D camera(s) with gripper state, and $\{a^i_1, a^i_2, \dots, a^i_{m_i}\}$ is the sequence of corresponding robot actions. This demonstration dataset can be used to train models with behavior cloning.

Our proposed method (RVT) is a transformer model [27] that processes images re-rendered around the robot workspace, produces an output for each view, and then back-projects into 3D to predict gripper pose actions, as shown in Fig. 2.

Rendering. The first step is the re-rendering of camera input. Given the RGB-D image(s) captured by one or multiple sensor cameras, we first reconstruct a point cloud of the scene. The point cloud is then re-rendered from a set of virtual viewpoints anchored in the space centered at the robot's base (see Fig. 2 and Fig. 3). Specifically, for each view, we render three image maps with a total of 7 channels: (1) RGB (3 channels), (2) depth (1 channel), and (3) (x, y, z) coordinates of the points in the world frame (3 channels). The (x, y, z) coordinates help establish the correspondence of pixels across views, i.e., if pixels from different views share the same (x, y, z), they correspond to the same point in 3D. We use PyTorch3D [59] for rendering. We empirically verify various design choices in our rendering pipeline (see Tab. 2 (left)).

The re-rendering process decouples the input images from the ones fed to the transformer. This offers several benefits, such as: the ability to re-render at arbitrary and useful locations (e.g., directly above the table) while not being constrained by real-world camera placements; multi-view reasoning even with a single sensor camera; allowing the use of orthographic images instead of the generally provided perspective ones; facilitating 3D point-cloud augmentations; and enabling additional channels like point correspondences, which are not natively present in the sensor images. We empirically find that these contribute to achieving high performance with view-based networks (see Sec. 4.1).
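The paper renders these virtual views with PyTorch3D; as a rough illustration of what orthographic re-rendering produces, here is a self-contained NumPy sketch of a naive per-point z-buffer rasterizer that emits the 7-channel image described above. The camera parameterization (R, t, extent) and the default resolution are assumptions for illustration.

```python
import numpy as np

def render_orthographic(points, colors, R, t, res=220, extent=0.6):
    """points: (N, 3) world-frame point cloud; colors: (N, 3) RGB.
    Returns a (res, res, 7) virtual image: RGB, depth, world-frame xyz."""
    cam = points @ R.T + t                         # world -> camera frame
    # orthographic projection: no perspective divide, map x/y to pixels
    u = ((cam[:, 0] / extent + 1) * 0.5 * (res - 1)).astype(int)
    v = ((cam[:, 1] / extent + 1) * 0.5 * (res - 1)).astype(int)
    img = np.zeros((res, res, 7), dtype=np.float32)
    zbuf = np.full((res, res), np.inf)
    valid = (u >= 0) & (u < res) & (v >= 0) & (v < res)
    for i in np.flatnonzero(valid):                # naive per-point z-buffer
        if cam[i, 2] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = cam[i, 2]
            img[v[i], u[i], :3] = colors[i]        # RGB channels
            img[v[i], u[i], 3] = cam[i, 2]         # depth channel
            img[v[i], u[i], 4:] = points[i]        # (x, y, z) correspondence
    return img
```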
Joint Transformer. The re-rendered images, the language description of the task, and the gripper state (open or close) are processed by a joint transformer model (see Fig. A1 in the appendix). For language, we use pretrained CLIP [60] embeddings (ResNet-50 variant), which provide one token for each word. For the virtual images, we break each of them into 20×20 patches and pass them through a multi-layer perceptron (MLP) to produce image tokens, similar to ViT [61]. For the gripper state, similar to PerAct [6], we pass it through an MLP and concatenate it to the image tokens. We also add positional embeddings to all the image and language tokens to preserve the positional information.

Overall, RVT has eight self-attention layers. In the first four layers, an image token is only allowed to attend to other tokens from the same image. This biases the network to process individual images first before sharing information across images. We concatenate all the image tokens along with the language tokens afterward. In the last four layers, we allow the attention layers to propagate and accumulate information across different images and text. Finally, the image tokens are rearranged back to the original spatial configuration, resulting in the feature channels of each image.
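The "attend within each view first" scheme can be implemented with a simple attention mask. Below is a minimal sketch (an assumption about the implementation, not RVT's released code) following the `attn_mask` convention of `torch.nn.MultiheadAttention`, where True marks positions that may not be attended.

```python
import torch

def per_view_attention_mask(num_views, tokens_per_view):
    """Mask for the first attention layers: each image token may only
    attend to tokens from its own virtual view. In the later layers the
    image and language tokens attend jointly, so no mask is needed there."""
    n = num_views * tokens_per_view
    mask = torch.ones(n, n, dtype=torch.bool)      # True = disallowed
    for v in range(num_views):
        s, e = v * tokens_per_view, (v + 1) * tokens_per_view
        mask[s:e, s:e] = False                     # allow within-view attention
    return mask  # usable as attn_mask in torch.nn.MultiheadAttention
```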
Action Prediction. The model outputs an 8D action, including the 6-DoF target end-effector pose (3-DoF for translation and 3-DoF for rotation), a 1-DoF gripper state (open or close), and a binary indicator for whether to allow collision for the low-level motion planner (see [6] for details). For translation, we first predict a heatmap for each view from the per-image features produced by the joint transformer (as shown in Fig. A1 in the appendix). The heatmaps across different views are then back-projected to predict scores for a discretized set of 3D points that densely cover the robot workspace (see Sec. A.3 in the appendix). Finally, the end-effector translation is determined by the 3D point with the highest score. Note that this multi-view heatmap representation for translation prediction extends prior approaches in the 2D top-down view setting [4]. Hence, RVT inherits the benefit of superior sample efficiency by representing the visual input and action in the same spatial structure [4]. For end-effector rotation, we follow PerAct and use the Euler angles representation, where each angle is discretized into bins of 5° resolution. The gripper state and the motion planner collision indicator are represented as binary variables. To predict the rotations, gripper state, and collision indicator, we use global features ($G$). The global features are a concatenation of (1) the sum of image features along the spatial dimensions, weighted by the predicted translation heatmap; and (2) the max-pooled image features along the spatial dimensions. Specifically, let $f_i$ be the image feature and $h_i$ the predicted translation heatmap for the $i$-th image. Then the global feature $G$ is given by $G = [\phi(f_1 \odot h_1); \cdots; \phi(f_K \odot h_K); \psi(f_1); \cdots; \psi(f_K)]$, where $K$ is the number of images, $\odot$ denotes element-wise multiplication, and $\phi$ and $\psi$ denote sum and max-pooling over the height and width dimensions. The weighted sum operation provides higher weights to image locations near the predicted end-effector position.

Models                | Avg. Succ.↑ | Avg. Rank↓ | Train time (days)↓ | Inf. speed (fps)↑ | Close Jar | Drag Stick | Insert Peg | Meat off Grill | Open Drawer | Place Cups | Place Wine
Image-BC (CNN) [2, 6] | 1.3         | 3.7        | –                  | –                 | 0         | 0          | 0          | 0              | 4           | 0          | 0
Image-BC (ViT) [2, 6] | 1.3         | 3.8        | –                  | –                 | 0         | 0          | 0          | 0              | 0           | 0          | 0
C2F-ARM-BC [5, 6]     | 20.1        | 3.1        | –                  | –                 | 24        | 24         | 4          | 20             | 20          | 0          | 8
PerAct [6]            | 49.4        | 1.9        | 16.0               | 4.9               | 55.2±4.7  | 89.6±4.1   | 5.6±4.1    | 70.4±2.0       | 88.0±5.7    | 2.4±3.2    | 44.8±7.8
RVT (ours)            | 62.9        | 1.1        | 1.0                | 11.6              | 52.0±2.5  | 99.2±1.6   | 11.2±3.0   | 88.0±2.5       | 71.2±6.9    | 4.0±2.5    | 91.0±5.2

Models                | Push Buttons | Put in Cupboard | Put in Drawer | Put in Safe | Screw Bulb | Slide Block | Sort Shape | Stack Blocks | Stack Cups | Sweep to Dustpan | Turn Tap
Image-BC (CNN) [2, 6] | 0            | 0               | 8             | 4           | 0          | 0           | 0          | 0            | 0          | 0                | 8
Image-BC (ViT) [2, 6] | 0            | 0               | 0             | 0           | 0          | 0           | 0          | 0            | 0          | 0                | 16
C2F-ARM-BC [5, 6]     | 72           | 0               | 4             | 12          | 8          | 16          | 8          | 0            | 0          | 0                | 68
PerAct [6]            | 92.8±3.0     | 28.0±4.4        | 51.2±4.7      | 84.0±3.6    | 17.6±2.0   | 74.0±13.0   | 16.8±4.7   | 26.4±3.2     | 2.4±2.0    | 52.0±0.0         | 88.0±4.4
RVT (ours)            | 100.0±0.0    | 49.6±3.2        | 88.0±5.7      | 91.2±3.0    | 48.0±5.7   | 81.6±5.4    | 36.0±2.5   | 28.8±3.9     | 26.4±8.2   | 72.0±0.0         | 93.6±4.1

Table 1: Multi-Task Performance on RLBench. RVT outperforms state-of-the-art methods while being faster to train and execute. RVT has the best success rate and rank when averaged across all tasks. Performance for Image-BC (CNN), Image-BC (ViT) and C2F-ARM-BC are as reported by Shridhar et al. in [6]. We re-evaluate PerAct using the released final model and estimate mean and variance. RVT is 2.3X faster in execution speed than PerAct and outperforms it on 16/18 tasks. The training time and inference speed of PerAct and RVT are measured on the same GPU model.

Loss Function. We train RVT using a mixture of losses. For heatmaps, we use the cross-entropy loss for each image. The ground truth is obtained by a truncated Gaussian distribution around the 2D projection of the ground-truth 3D location. For rotation, we use the cross-entropy loss for each of the Euler angles. We use binary classification losses for the gripper state and collision indicator.
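To make the global-feature computation concrete, here is a minimal PyTorch sketch of $G$ under assumed tensor shapes (per-view features $f_k$ stacked as `feats` and heatmaps $h_k$ stacked as `heatmaps`).

```python
import torch

def global_features(feats, heatmaps):
    """feats: (K, C, H, W) per-view image features;
    heatmaps: (K, 1, H, W) predicted translation heatmaps (assumed shapes)."""
    weighted = (feats * heatmaps).sum(dim=(-2, -1))  # phi(f_k ⊙ h_k): (K, C)
    pooled = feats.amax(dim=(-2, -1))                # psi(f_k):       (K, C)
    # concatenate over views: [phi(f_1⊙h_1); ...; psi(f_K)] -> (2*K*C,)
    return torch.cat([weighted.flatten(), pooled.flatten()])
```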
4 Experiments

4.1 Simulation Experiments

Simulation Setup. We follow the simulation setup in PerAct [6], where CoppeliaSim [62] is used to simulate various RLBench [7] tasks. A Franka Panda robot with a parallel gripper is controlled to complete the tasks. We test on the same 18 tasks as PerAct, including picking and placing, tool use, drawer opening, and high-accuracy peg insertions (see the appendix for a detailed specification of each task). Each task includes several variations specified by the associated language description. Such a wide range of tasks and intra-task variations requires the model not to specialize in one specific skill but rather to learn different skill categories. The visual observations are captured from four noiseless RGB-D cameras positioned at the front, left shoulder, right shoulder, and wrist, with a resolution of 128×128. To achieve the target gripper pose, we generate joint-space actions by using the same sampling-based motion planner [63, 64] as in [5, 6].

Baselines. We compare against the following three baselines: (1) Image-BC [2] is an image-to-action behavior cloning agent that predicts actions based on the image observations from the sensor camera views. We compare with two variants, with CNN and ViT vision encoders respectively. (2) C2F-ARM-BC [5] is a behavior cloning agent that converts the RGB-D images into multi-resolution voxels and predicts the next key-frame action using a coarse-to-fine scheme. (3) PerAct [6] is the state-of-the-art multi-task behavior cloning agent that encodes the RGB-D images into voxel grid patches and predicts the discretized next key-frame action using the perceiver [8] transformer.

Figure 3: We evaluate RVT with various camera locations for re-rendering (a-d) and find that locations in (a) perform best. We also test various projection options (e-f) for rendering images and find that RVT works better with orthographic images. (Panels: a. Cube; b. Cube - 3 Views; c. Rotated Cube 15°; d. RLBench Real; e. Perspective Proj.; f. Orthographic Proj.)

Training and Evaluation Details. Just like the baselines, we use the RLBench training dataset with 100 expert demonstrations per task (1800 demonstrations over all tasks). Similar to PerAct, we apply translation and rotation data augmentations. For translation, we randomly perturb the point clouds in the range [±0.125m, ±0.125m, ±0.125m]. For rotation, we randomly rotate the point cloud around the z-axis (vertical) in the range of ±45°. We train RVT for 100k steps, using the LAMB [65] optimizer as in PerAct. We use a batch size of 24 and an initial learning rate of 2.4×10⁻⁴. We use cosine learning rate decay with a warm start for 2K steps.
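A minimal sketch of these point-cloud augmentations, applied before re-rendering; the function name and RNG handling are illustrative, not from the released code.

```python
import numpy as np

def augment_point_cloud(points, rng):
    """Random translation in [-0.125 m, 0.125 m] per axis and a random
    rotation about the vertical z-axis in [-45°, 45°], as described above."""
    shift = rng.uniform(-0.125, 0.125, size=3)
    theta = np.deg2rad(rng.uniform(-45.0, 45.0))
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return (points @ rot_z.T) + shift

# usage: rng = np.random.default_rng(0); aug = augment_point_cloud(pc, rng)
```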
For Image-BC and C2F-ARM-BC, we adopt the evaluation results from [6], since their trained models have not been released. These results overestimate the performance of Image-BC and C2F-ARM-BC, as they select the best model for each of the 18 tasks independently, based on the performance on validation sets. Hence, the reported performance does not reflect a single multi-task model. Nevertheless, these baselines still underperform both PerAct and RVT (see Tab. 1). For PerAct, we evaluate the final model released by Shridhar et al. [6]. We test our models (including the models in the ablation study, Tab. 2 (left)) and PerAct on the same 25 variations for each task. Due to the randomness of the sampling-based motion planner, we run each model five times on the same 25 variations for each task and report the average success rate and standard deviation in Tab. 1.

To fairly compare training efficiency against PerAct, we train both PerAct and our model with the same GPU type (NVIDIA Tesla V100) and number of GPUs (8), as reported by Shridhar et al. [6]. We report the total training time for both models in Tab. 1 ("Training time"). We also evaluate the inference speed of the PerAct and RVT models by running prediction inference on the same input data on the same GPU (NVIDIA RTX 3090).

Multi-Task Performance. Tab. 1 compares the performance of RVT and the baselines. We find that PerAct and RVT perform significantly better than the rest. Overall, RVT outperforms all baselines, with the best rank and success rate when averaged across all tasks. It outperforms the prior state-of-the-art method C2F-ARM by 42 percentage points (213% relative improvement), and PerAct by 13 percentage points (26% relative improvement). RVT outperforms PerAct on 88.9% (16/18) of the tasks. More remarkably, RVT trains 36X faster than PerAct for achieving the same performance (see Fig. 1). We also observe that at inference time, RVT is 2.3X faster than PerAct. These results demonstrate that RVT is both more accurate and more scalable when compared to existing state-of-the-art voxel-based methods.

Ablation Study. We conduct ablation experiments to analyze different design choices of RVT: (a) the resolution of the rendered images ("Im. Res." column in Tab. 2 (left)); (b) whether to include the correspondence information across rendered images ("View Corr."); (c) whether to include the depth channel ("Dep. Ch."); (d) whether to separately process the tokens of each image before jointly processing all tokens ("Sep. Proc."); (e) the projection type for rendering, perspective or orthographic ("Proj. Type"); (f) whether to use rotation augmentation ("Rot. Aug."); (g) the number of views and camera locations for re-rendering ("# of View" and "Cam. Loc."); and (h) the benefit of using re-rendered images versus real sensor camera images ("Real" for "Cam. Loc.").

Im. Res. | View Corr. | Dep. Ch. | Sep. Proc. | Proj. Type | Rot. Aug. | Cam. Loc. | # of View | Avg. Succ.
220      | ✓          | ✓        | ✓          | Orth.      | ✓         | Cube      | 5         | 62.9
100      | ✓          | ✓        | ✓          | Orth.      | ✓         | Cube      | 5         | 51.1
220      | ✗          | ✓        | ✓          | Orth.      | ✓         | Cube      | 5         | 59.7
220      | ✓          | ✗        | ✓          | Orth.      | ✓         | Cube      | 5         | 60.3
220      | ✓          | ✓        | ✗          | Orth.      | ✓         | Cube      | 5         | 58.4
220      | ✓          | ✓        | ✓          | Pers.      | ✓         | Cube      | 5         | 40.2
220      | ✓          | ✓        | ✓          | Orth.      | ✗         | Cube      | 5         | 60.4
220      | ✓          | ✓        | ✓          | Orth.      | ✓         | Cube      | 3         | 60.2
220      | ✓          | ✓        | ✓          | Orth.      | ✓         | Front     | 1         | 35.8
220      | ✓          | ✓        | ✓          | Orth.      | ✓         | Rot. 15°  | 5         | 59.9
220      | ✓          | ✓        | ✓          | Pers.      | ✗         | Real      | 4         | 10.4
220      | ✓          | ✓        | ✓          | Orth.      | ✗         | Real      | 4         | 22.9

Task                   | # of vari. | # of train | # of test | PerAct (+ mark.) | RVT (+ mark.) | PerAct (- mark.) | RVT (- mark.)
Stack blocks           | 3          | 14         | 10        | 50%              | 100%          | 50%              | 100%
Press sanitizer        | 1          | 7          | 10        | 40%              | 80%           | 40%              | 80%
Put marker in mug/bowl | 4          | 12         | 10        | 0%               | 0%            | –                | –
Put object in drawer   | 3          | 10         | 10        | 20%              | 50%           | 50%              | 100%
Put object in shelf    | 2          | 8          | 10        | 30%              | 50%           | 30%              | 50%
All tasks              | 13         | 51         | 50        | 28%              | 56%           | 42.5%            | 82.5%

Table 2: Left: Ablations on RLBench. A larger resolution, adding view correspondence, adding the depth channel, separating the initial attention layers, orthographic projection, using rotation augmentation, and re-rendered views around a cube improve performance. Right: Success rates of RVT and PerAct in the real world. A single RVT model can perform well on most tasks with only a few demonstrations.

Tab. 2 (left) summarizes the ablation experiment results. The same table, along with the mean and standard deviation for each task, can be found in the appendix (Tab. A2). Below we discuss the findings:

(a) As expected, virtual images rendered at a higher resolution help, as RVT with virtual image resolution 220 outperforms the one with 100.

(b) Adding correspondence information for points across different views helps (see Sec. 3). This is likely because the network need not learn to solve the correspondence problem and can predict more consistent heatmaps across views. Note that the view correspondence channel is not present in sensor images but is rendered along with the RGB(D) images in RVT.

(c) Adding the depth channel along with the RGB channels helps, likely because it aids 3D reasoning.

(d) Independently processing the tokens from a single image, before merging all the image tokens, helps. This is likely because this design expects the network to extract meaningful features for each image before reasoning over them jointly.

(e) Rendering images with orthographic projection performs better than rendering with perspective projection, for both the cube and real camera locations. We hypothesize that this is because orthographic projection preserves the shape and size of an object regardless of its distance from the camera (see Fig. 3 (e-f)). It also highlights the advantage of re-rendering, as real sensors generally render with perspective projections.

(f) As expected, using 3D rotation augmentation in the point cloud before rendering helps. To take advantage of 3D augmentations, the re-rendering process is necessary.

(g) The model with 5 views around a cube (Fig. 3 (a)) performs the best, followed by the one with 3 views (front, top, left) around a cube (Fig. 3 (b)). The single-view model, where we predict the third coordinate as an offset like TransporterNet [4], performs substantially worse, calling for the need for multiple views for 3D manipulation. It also highlights the advantage of re-rendering, as with re-rendering we can leverage multiple views even with a single sensor camera. We also empirically find that rotating the location of the cameras by 15° (see Fig. 3) with respect to the table (and robot) decreases performance. This is likely because views aligned with the table and robot are easier to reason with (e.g., overhead top view, aligned front view).

(h) RVT performs better with re-rendered images as compared to using sensor camera images (Tab. 2 (left), second last row).
Notably, one might consider rearranging the sensor cameras to match the re-rendering views in order to bypass re-rendering (see appendix A.2). However, this would void the gains from using orthographic projection, 3D augmentation, and added correspondences (see appendix A.3). It also strictly requires a multi-camera setup (Fig. 3 (a)), which is more costly and less portable in the real world than using one sensor camera. Finally, we have briefly explored view selection and found an option that works well across tasks. Further optimization of views, both sensor and re-rendered, is an interesting future direction.

Figure 4: Examples of RVT in the real world ("Put orange bottle in top drawer"; "Put yellow cube in the bottom shelf"). A single RVT model can perform multiple tasks (5 tasks, 13 variations) in the real world with just ∼10 demonstrations per task.

4.2 Real-World

Real World Setup. We experiment on a table-top setup using a statically mounted Franka Panda arm. The scene is perceived via an Azure Kinect (RGB-D) camera statically mounted in a third-person view. We calibrate the robot-camera extrinsics and transform the perceived point clouds to the robot base frame before passing them into RVT. Given a target gripper pose from RVT, we use FrankaPy [66] to move the robot to the target with trajectory generation and feedback control.

Tasks. We adopt a total of 5 tasks similar to the ones in PerAct [6] (see Tab. 2 (right)): stack blocks, press sanitizer, put marker in mug/bowl, put object in drawer, and put object in shelf. Each task can be instantiated with different variations defined by the language description. For example, for stack blocks, some variations could be "put yellow block on blue block" and "put blue block on red block". Given a task and variation, we sample a scene by placing the task-related objects and a set of distractor objects on the table in a random configuration.

Data Collection. We first collect a dataset for training RVT through human demonstration. Given a sampled task and scene configuration, we ask the human demonstrator to specify a sequence of gripper target poses by kinesthetically moving the robot arm around. Once we have the target pose sequence, we reset the robot to the start pose and then control it to sequentially move to each target pose, following the specified order. We simultaneously record the RGB-D stream from the camera during the robot's motion to the targets. This provides us with a dataset of RGB-D frames paired with target pose annotations. In total, we collected 51 demonstration sequences over all 5 tasks.

Results. We train on the real-world data for 10K steps, with the same optimizer, batch size, and learning rate schedule as for the simulation data. We report the results in Tab. 2 (right). We find that RVT outperforms the prior method PerAct, achieving high success rates on the stack blocks task (100%) and the press sanitizer task (80%). Even on longer-horizon tasks such as putting objects in drawers and shelves (where the robot has to first open the drawer/shelf and then pick up the object), our model achieves 50% success rates (see Fig. 4). We found that RVT struggled with marker-related tasks, which is likely due to sparse and noisily sensed point clouds. We therefore further divide the results into two sets: "+ markers" (the full set) and "- markers". Our model overall achieves an 82.5% success rate on non-marker tasks. The marker issue can potentially be addressed by attaching the camera to the gripper to capture point clouds at higher quality. Another possibility is to use zoom-in views similar to C2F-ARM [5].
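The real-world setup above first expresses the sensed cloud in the robot base frame. A minimal sketch of that calibration step follows; `T_base_cam` is a hypothetical name for the calibrated 4×4 extrinsic transform.

```python
import numpy as np

def camera_to_base(points_cam, T_base_cam):
    """Transform an (N, 3) point cloud from camera frame to robot base frame.

    T_base_cam: 4x4 homogeneous transform obtained from extrinsic calibration.
    """
    # Append a homogeneous coordinate, apply the rigid transform, drop it again.
    ones = np.ones((points_cam.shape[0], 1))
    points_h = np.hstack([points_cam, ones])      # (N, 4)
    return (points_h @ T_base_cam.T)[:, :3]       # (N, 3)
```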
5 Conclusions and Limitations

We proposed RVT, a multi-view transformer model for 3D object manipulation. We found that RVT outperforms prior state-of-the-art models like PerAct and C2F-ARM on a variety of 3D manipulation tasks, while being more scalable and faster. We also found that RVT can work on real-world manipulation tasks with only a few demonstrations.

Although RVT achieves state-of-the-art results on RLBench (62.9% success rate), there is scope for improvement. Below, we identify some limitations that present exciting directions for future research. We briefly explored various options for virtual views and found that the orthogonal views around a cube work well across tasks, but it would be exciting if the virtual views could be further optimized or learned from data. Further, compared to prior view-based methods, RVT (as well as explicit voxel-based methods like PerAct and C2F-ARM) requires the calibration of extrinsics from the camera to the robot base. It would be exciting to explore extensions that remove this constraint.

References

[1] A. Goyal, A. Mousavian, C. Paxton, Y.-W. Chao, B. Okorn, J. Deng, and D. Fox. IFOR: Iterative flow minimization for robotic object rearrangement. arXiv:2202.00732, 2022.
[2] E. Jang, A. Irpan, M. Khansari, D. Kappler, F. Ebert, C. Lynch, S. Levine, and C. Finn. BC-Z: Zero-shot task generalization with robotic imitation learning. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=8kbp23tSGYv.
[3] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. In Proceedings of the 5th Conference on Robot Learning (CoRL), 2021.
[4] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, et al. Transporter networks: Rearranging the visual world for robotic manipulation. In Conference on Robot Learning, pages 726–747. PMLR, 2021.
[5] S. James, K. Wada, T. Laidlow, and A. J. Davison. Coarse-to-fine Q-attention: Efficient learning for visual robotic manipulation via discretisation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13739–13748, 2022.
[6] M. Shridhar, L. Manuelli, and D. Fox. Perceiver-Actor: A multi-task transformer for robotic manipulation. In CoRL, 2022.
[7] S. James, Z. Ma, D. R. Arrojo, and A. J. Davison. RLBench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020.
[8] A. Jaegle, F. Gimeno, A. Brock, O. Vinyals, A. Zisserman, and J. Carreira. Perceiver: General perception with iterative attention. In International Conference on Machine Learning, pages 4651–4664. PMLR, 2021.
[9] O. M. Andrychowicz, B. Baker, M. Chociej, R. Jozefowicz, B. McGrew, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, et al. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1):3–20, 2020.
[10] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. Science Robotics, 4(26):eaau5872, 2019.
[11] A. Kumar, Z. Fu, D. Pathak, and J. Malik. RMA: Rapid motor adaptation for legged robots. arXiv preprint arXiv:2107.04034, 2021.
[12] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[13] J. Xu, T. Du, M. Foshey, B. Li, B. Zhu, A. Schulz, and W. Matusik. Learning to fly: Computational controller design for hybrid UAVs with reinforcement learning. ACM Transactions on Graphics (TOG), 38(4):1–12, 2019.
[14] P. Anderson, A. Chang, D. S. Chaplot, A. Dosovitskiy, S. Gupta, V. Koltun, J. Kosecka, J. Malik, R. Mottaghi, M. Savva, et al. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757, 2018.
[15] A. Goyal and J. Deng. PackIt: A virtual environment for geometric planning. In International Conference on Machine Learning, pages 3700–3710. PMLR, 2020.
[16] D. Shah, B. Eysenbach, G. Kahn, N. Rhinehart, and S. Levine. ViNG: Learning open-world navigation with visual goals. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13215–13222. IEEE, 2021.
[17] R. Yang, M. Zhang, N. Hansen, H. Xu, and X. Wang. Learning vision-guided quadrupedal locomotion end-to-end with cross-modal transformers. arXiv preprint arXiv:2107.03996, 2021.
[18] W. Huang, I. Mordatch, P. Abbeel, and D. Pathak. Generalization in dexterous manipulation via geometry-aware multi-task learning. arXiv preprint arXiv:2111.03062, 2021.
[19] T. Chen, J. Xu, and P. Agrawal. A system for general in-hand object re-orientation. Conference on Robot Learning, 2021.
[20] W. Yu, D. Jain, A. Escontrela, A. Iscen, P. Xu, E. Coumans, S. Ha, J. Tan, and T. Zhang. Visual-locomotion: Learning to walk on complex terrains with vision. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=NDYbXf-DvwZ.
[21] D. Shah, A. Bhorkar, H. Leen, I. Kostrikov, N. Rhinehart, and S. Levine. Offline reinforcement learning for visual navigation. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=uhIfIEIiWm_.
[22] D. Hafner, T. Lillicrap, J. Ba, and M. Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019.
[23] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. Advances in Neural Information Processing Systems, 28, 2015.
[24] D. Yarats, I. Kostrikov, and R. Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=GY6-6sTvGaf.
[25] D. Yarats, R. Fergus, A. Lazaric, and L. Pinto. Mastering visual continuous control: Improved data-augmented reinforcement learning. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=_SJ-_yyes8.
[26] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, T. Jackson, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, I. Leal, K.-H. Lee, S. Levine, Y. Lu, U. Malla, D. Manjunath, I. Mordatch, O. Nachum, C. Parada, J. Peralta, E. Perez, K. Pertsch, J. Quiambao, K. Rao, M. Ryoo, G. Salazar, P. Sanketi, K. Sayed, J. Singh, S. Sontakke, A. Stone, C. Tan, H. Tran, V. Vanhoucke, S. Vega, Q. Vuong, F. Xia, T. Xiao, P. Xu, S. Xu, T. Yu, and B. Zitkovich. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[27] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[28] Y.-C. Lin, P. Florence, A. Zeng, J. T. Barron, Y. Du, W.-C. Ma, A. Simeonov, A. R. Garcia, and P. Isola. MIRA: Mental imagery for robotic affordances. In K. Liu, D. Kulic, and J. Ichnowski, editors, Proceedings of The 6th Conference on Robot Learning, volume 205 of Proceedings of Machine Learning Research, pages 1916–1927. PMLR, 14–18 Dec 2023.
[29] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. The International Journal of Robotics Research, 34(4-5):705–724, 2015.
[30] A. Saxena, J. Driemeyer, J. Kearns, and A. Ng. Robotic grasping of novel objects. Advances in Neural Information Processing Systems, 19, 2006.
[31] M. P. Deisenroth, P. Englert, J. Peters, and D. Fox. Multi-task policy search for robotics. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 3876–3881. IEEE, 2014.
[32] D. Kalashnikov, J. Varley, Y. Chebotar, B. Swanson, R. Jonschkowski, C. Finn, S. Levine, and K. Hausman. MT-Opt: Continuous multi-task robotic reinforcement learning at scale. arXiv preprint arXiv:2104.08212, 2021.
[33] C. Devin, A. Gupta, T. Darrell, P. Abbeel, and S. Levine. Learning modular neural network policies for multi-task and multi-robot transfer. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2169–2176. IEEE, 2017.
[34] Y. Teh, V. Bapst, W. M. Czarnecki, J. Quan, J. Kirkpatrick, R. Hadsell, N. Heess, and R. Pascanu. Distral: Robust multitask reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.
[35] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano, K. Jeffrey, S. Jesmonth, N. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, K.-H. Lee, S. Levine, Y. Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Sermanet, N. Sievers, C. Tan, A. Toshev, V. Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, M. Yan, and A. Zeng. Do as I can and not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[36] A. Goyal, K. Yang, D. Yang, and J. Deng. Rel3D: A minimally contrastive benchmark for grounding spatial relations in 3D. Advances in Neural Information Processing Systems, 33:10514–10525, 2020.
[37] C. Lynch and P. Sermanet. Language conditioned imitation learning over unstructured data. arXiv preprint arXiv:2005.07648, 2020.
[38] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022.
[39] S. Nair, E. Mitchell, K. Chen, S. Savarese, C. Finn, et al. Learning language-conditioned robot behavior from offline data and crowd-sourced annotation. In Conference on Robot Learning, pages 1303–1315. PMLR, 2022.
[40] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg, et al. A generalist agent. arXiv preprint arXiv:2205.06175, 2022.
[41] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating situated robot task plans using large language models. ICRA, 2022.
[42] D. S. Chaplot, D. Pathak, and J. Malik. Differentiable spatial planning using transformers. In International Conference on Machine Learning, pages 1484–1495. PMLR, 2021.
[43] H. M. Clever, A. Handa, H. Mazhar, K. Parker, O. Shapira, Q. Wan, Y. Narang, I. Akinola, M. Cakmak, and D. Fox. Assistive tele-op: Leveraging transformers to collect robotic task demonstrations. arXiv preprint arXiv:2112.05129, 2021.
[44] J. J. Johnson, L. Li, A. H. Qureshi, and M. C. Yip. Motion planning transformers: One model to plan them all. arXiv preprint arXiv:2106.02791, 2021.
[45] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.
[46] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023.
[47] T. Cachet, J. Perez, and S. Kim. Transformer-based meta-imitation learning for robotic manipulation. In Neural Information Processing Systems, Workshop on Robot Learning, 2020.
[48] H. Kim, Y. Ohmura, and Y. Kuniyoshi. Transformer-based deep imitation learning for dual-arm robot manipulation. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 8965–8972. IEEE, 2021.
[49] S. Dasari and A. Gupta. Transformers for one-shot visual imitation. In Conference on Robot Learning, pages 2071–2084. PMLR, 2021.
[50] R. Jangir, N. Hansen, S. Ghosal, M. Jain, and X. Wang. Look closer: Bridging egocentric and third-person views with transformers for robotic manipulation. IEEE Robotics and Automation Letters, 7(2):3046–3053, 2022.
[51] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning spatial structure for language-guided semantic rearrangement of novel objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 6322–6329. IEEE, 2022.
[52] A. Goyal, H. Law, B. Liu, A. Newell, and J. Deng. Revisiting point cloud shape classification with a simple and effective baseline. In International Conference on Machine Learning, pages 3809–3820. PMLR, 2021.
[53] A. Hamdi, S. Giancola, and B. Ghanem. MVTN: Multi-view transformation network for 3D shape recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1–11, 2021.
[54] A. Hamdi, S. Giancola, and B. Ghanem. Voint cloud: Multi-view point cloud representation for 3D understanding. arXiv preprint arXiv:2111.15363, 2021.
[55] S. Huang, Y. Chen, J. Jia, and L. Wang. Multi-view transformer for 3D visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15524–15533, 2022.
[56] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
[57] V. Guizilini, I. Vasiljevic, J. Fang, R. Ambrus, G. Shakhnarovich, M. R. Walter, and A. Gaidon. Depth field networks for generalizable multi-view scene representation. In European Conference on Computer Vision, pages 245–262. Springer, 2022.
[58] E. Johns. Coarse-to-fine imitation learning: Robot manipulation from a single demonstration. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4613–4619. IEEE, 2021.
[59] N. Ravi, J. Reizenstein, D. Novotny, T. Gordon, W.-Y. Lo, J. Johnson, and G. Gkioxari. Accelerating 3D deep learning with PyTorch3D. arXiv:2007.08501, 2020.
[60] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[61] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[62] E. Rohmer, S. P. Singh, and M. Freese. V-REP: A versatile and scalable robot simulation framework. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1321–1326. IEEE, 2013.
[63] G. Sánchez and J.-C. Latombe. A single-query bi-directional probabilistic roadmap planner with lazy collision checking. In Int. Symp. Robotics Research, pages 403–417, 2001.
[64] I. A. Sucan, M. Moll, and L. E. Kavraki. The open motion planning library. IEEE Robotics & Automation Magazine, 19(4):72–82, 2012.
[65] Y. You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer, and C.-J. Hsieh. Large batch optimization for deep learning: Training BERT in 76 minutes. arXiv preprint arXiv:1904.00962, 2019.
[66] K. Zhang, M. Sharma, J. Liang, and O. Kroemer. A modular robotic arm control stack for research: Franka-Interface and FrankaPy. arXiv preprint arXiv:2011.02398, 2020.

A Appendix

A.1 RLBench Tasks

We provide a brief summary of the RLBench tasks in Tab. A1. There are 18 tasks with 249 variations. For a more detailed description of each task, please refer to PerAct [6], Appendix A.

| Task | Language Template | # of Variations |
| open drawer | "open the drawer" | 3 |
| slide block | "slide the block to target" | 4 |
| sweep to dustpan | "sweep dirt to the dustpan" | 2 |
| meat off grill | "take the off the grill" | 2 |
| turn tap | "turn tap" | 2 |
| put in drawer | "put the item in the drawer" | 3 |
| close jar | "close the jar" | 20 |
| drag stick | "use the stick to drag the cube onto the target" | 20 |
| stack blocks | "stack blocks" | 60 |
| screw bulb | "screw in the light bulb" | 20 |
| put in safe | "put the money away in the safe on the shelf" | 3 |
| place wine | "stack the wine bottle to the of the rack" | 3 |
| put in cupboard | "put the in the cupboard" | 9 |
| sort shape | "put the in the shape sorter" | 5 |
| push buttons | "push the button, [then the button]" | 50 |
| insert peg | "put the peg in the spoke" | 20 |
| stack cups | "stack the other cups on top of the cup" | 20 |
| place cups | "place cups on the cup holder" | 3 |

Table A1: Tasks in RLBench. We evaluate on 18 RLBench tasks, the same as those used in PerAct [6]. For more details, see PerAct [6], Appendix A. For videos, visit https://robotic-view-transformer.github.io/

A.2 Additional Experiments

Experiments on a dataset with input cameras in an orthogonal configuration. We created a new image dataset for RLBench with input cameras arranged in an orthogonal configuration as in Fig. 3a. Note that this dataset is not provided in PerAct. With this new image dataset, we ran two experiments: one directly using the input camera images and one with our pipeline (re-rendering with orthographic projection, 3D augmentation, and correspondences). We find that the results on this new dataset are consistent with the results on the dataset provided by PerAct: our pipeline works better (60.0% vs. 27.2%), likely because it allows for orthographic projection, 3D augmentation, and point correspondence (Tab. 2 (left)).
How does the quality of the input point cloud affect the performance of RVT? To investigate how the quality of the point cloud affects performance, we run experiments in simulation. Specifically, we add Gaussian noise with varying standard deviation (2.5mm, 5mm, 1cm, 2cm, and 4cm) to the original point cloud. We add this noise both during training and evaluation to simulate sensor noise in both phases. The success rate is 62.9 for no noise, 62.0 for 2.5mm, 61.6 for 5mm, 56.4 for 1cm, 58.7 for 2cm, and 51.7 for 4cm standard deviation noise. We find that RVT is robust to noise with up to 2cm standard deviation in the point cloud, and its performance degrades gracefully with more noise. For reference, the depth measurements of the Intel RealSense D400 camera have an error of 2.5mm to 5mm for an object at 1m from the camera (source: https://www.intel.com/content/www/us/en/support/articles/000026260/emerging-technologies/intel-realsense-technology.html).

A.3 Additional Explanation

How are the heatmaps of multiple virtual views back-projected to 3D? To calculate the heatmap value at a 3D location, we map 3D points to 2D pixels in the virtual views. We consider not just the points in the point cloud, but all points in the 3D scene bounds, distributed at a resolution of (h × h × h), where (h × h) is the resolution of the virtual image. For each point, the heatmap values from the multiple views are averaged.

Why does using only the input images (without re-rendering) void the gains from orthographic projections, 3D augmentation, and added correspondences? As we see in Tab. 2 (left), orthographic projection, 3D augmentation, and adding the xyz (correspondence) image improve performance. However, these can only be added after re-rendering. First, real-world cameras generally provide only perspective projection, not orthographic projection; to obtain an orthographic projection, re-rendering is needed. Second, the effects of 3D augmentations like rotating the scene cannot be trivially simulated in the image without re-rendering: we first create a 3D representation, apply the augmentation, and re-render. Finally, adding the xyz image encoding correspondences between points in images first requires explicitly building the 3D point cloud from the images and then rendering the xyz image.

In the real robot experiments, only one camera view is used. How carefully does this view have to be selected for the method to perform well? Would the performance improve if more cameras were placed in the workspace? We used a standard third-person view in front of the robot. We ensured that the workspace is visible in the camera, but no particular effort was put into adjusting the view to make the method perform well. Potentially, having more cameras could improve the method.

PerAct extracts a set of keyframe actions from the demo by capturing bottleneck end-effector poses in the action sequence that have (1) near-zero joint velocities or (2) an unchanged gripper open state. It seems from Sec. 4.2 (Data Collection) that the keyframe actions are specified in a different way in RVT. We follow the same pipeline as PerAct to extract keyframes from demonstrations. Sec. 4.2 (Data Collection) describes our scheme for collecting target poses for real-world demonstrations. The difference is that PerAct uses a VR controller to specify target poses while we do so by kinesthetically moving the robot. Once a real-world demonstration is collected, the process of extracting keyframes is the same.
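For reference, here is a minimal sketch of a keyframe heuristic in this spirit. We implement the common ARM-style variant (a frame is a keyframe when joint velocities are near zero or the gripper open state flips); the threshold is illustrative, not a value from either paper.

```python
import numpy as np

def extract_keyframes(joint_vels, gripper_open, vel_eps=1e-2):
    """Return indices of keyframes in a demonstration.

    joint_vels: (T, J) array of joint velocities.
    gripper_open: length-T sequence of booleans.
    vel_eps is an illustrative threshold, not a value from the papers.
    """
    keyframes = []
    for t in range(1, len(gripper_open)):
        stopped = bool(np.all(np.abs(joint_vels[t]) < vel_eps))
        gripper_changed = gripper_open[t] != gripper_open[t - 1]
        if stopped or gripper_changed:
            keyframes.append(t)
    return keyframes
```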
A.4 RVT Overview

Figure A1: Overview of the transformer used in RVT (example instruction: "Insert peg in the blue spoke"). The input to the transformer is a language description of the task and virtual images of the scene point cloud. The text is converted into token embeddings using the pretrained CLIP [60] model, while the virtual images are converted into token embeddings via patchify and projection operations. For each virtual image, tokens belonging to the same image are processed via four attention layers. Finally, the processed image tokens as well as the language tokens are jointly processed using four attention layers. The 3D action (translation heatmaps, rotation, gripper state, and collision flag) is inferred using the resulting image tokens.

A.5 Ablations

We report the ablations mentioned in Tab. 2, along with the mean and standard deviation for each task, in Tab. A2.
| Row | Im. Res. | View Corr. | Dep. Ch. | Bi-Lev. | Proj. Type | Rot. Aug. | Cam. Loc. | # of View | Avg. Succ. |
| 1 | 220 | ✓ | ✓ | ✓ | Orth. | ✓ | Cube | 5 | 62.9 |
| 2 | 100 | ✓ | ✓ | ✓ | Orth. | ✓ | Cube | 5 | 51.1 |
| 3 | 220 | ✗ | ✓ | ✓ | Orth. | ✓ | Cube | 5 | 59.7 |
| 4 | 220 | ✓ | ✗ | ✓ | Orth. | ✓ | Cube | 5 | 60.3 |
| 5 | 220 | ✓ | ✓ | ✗ | Orth. | ✓ | Cube | 5 | 58.4 |
| 6 | 220 | ✓ | ✓ | ✓ | Pers. | ✓ | Cube | 5 | 40.2 |
| 7 | 220 | ✓ | ✓ | ✓ | Orth. | ✗ | Cube | 5 | 60.4 |
| 8 | 220 | ✓ | ✓ | ✓ | Orth. | ✓ | Cube | 3 | 60.2 |
| 9 | 220 | ✓ | ✓ | ✓ | Orth. | ✓ | Front | 1 | 35.8 |
| 10 | 220 | ✓ | ✓ | ✓ | Orth. | ✓ | Rot. 15 | 5 | 59.9 |
| 11 | 220 | ✓ | ✓ | ✓ | Pers. | ✗ | Real | 4 | 10.4 |
| 12 | 220 | ✓ | ✓ | ✓ | Orth. | ✗ | Real | 4 | 22.9 |

Per-task success rates (mean ± std) for each configuration row above:

| Row | Close Jar | Drag Stick | Insert Peg | Meat off Grill | Open Drawer | Place Cups |
| 1 | 52±2.5 | 99.2±1.6 | 11.2±3 | 88±2.5 | 71.2±6.9 | 4±2.5 |
| 2 | 60±0 | 83±1.7 | 4±2.8 | 91±3.3 | 67±5.2 | 1±1.7 |
| 3 | 44±0 | 100±0 | 17±4.4 | 90±6 | 71±9.1 | 7±5.9 |
| 4 | 37±3.3 | 96±0 | 11±3.3 | 97±1.7 | 57±8.2 | 3±3.3 |
| 5 | 32±7.5 | 96±0 | 11±3.3 | 90±2 | 68±2.8 | 2±2 |
| 6 | 20±2.5 | 90.4±2 | 4±0 | 84.8±4.7 | 13.6±4.8 | 2.4±2 |
| 7 | 52±0 | 92±0 | 12.8±1.6 | 97.6±4.8 | 85.6±5.4 | 0±0 |
| 8 | 44.8±1.6 | 75.2±4.7 | 15±3.3 | 89.6±4.1 | 68.8±9.3 | 3.2±1.6 |
| 9 | 36±4.9 | 87±1.7 | 2±2 | 90±6 | 58±6.6 | 0±0 |
| 10 | 48.8±1.6 | 99.2±1.6 | 12±4.4 | 80±2.5 | 71.2±9.3 | 0±0 |
| 11 | 14.4±6.5 | 14.4±5.4 | 0±0 | 0±0 | 22.4±5.4 | 0±0 |
| 12 | 43.2±4.7 | 54.4±3.2 | 0±0 | 0±0 | 15.2±5.3 | 0.8±1.6 |

| Row | Place Wine | Push Buttons | Put in Cupboard | Put in Drawer | Put in Safe | Screw Bulb |
| 1 | 91±5.2 | 100±0 | 49.6±3.2 | 88±5.7 | 91.2±3 | 48±5.7 |
| 2 | 38±8.7 | 100±0 | 49±4.4 | 86±2 | 77±1.7 | 22±4.5 |
| 3 | 96±2.8 | 99±1.7 | 48±6.9 | 50±6 | 79±5.9 | 36±0 |
| 4 | 71±1.7 | 99±1.7 | 56±0 | 92±4.9 | 77±3.3 | 39±4.4 |
| 5 | 65±5.2 | 100±0 | 54±2 | 94±4.5 | 78±3.5 | 48±6.3 |
| 6 | 28±5.7 | 91.2±1.6 | 26.4±2 | 64.8±3 | 51.2±3.9 | 20±4.4 |
| 7 | 84±3.6 | 96±2.5 | 40±2.5 | 88±7.2 | 90.4±4.1 | 48±8.4 |
| 8 | 84.8±8.9 | 97.6±2 | 40.8±4.7 | 94.4±4.1 | 82.4±7.8 | 43.2±3.9 |
| 9 | 82±4.5 | 46±2 | 14±4.5 | 29±7.1 | 57±5.9 | 6±2 |
| 10 | 74.4±5.4 | 99.2±1.6 | 46.4±4.1 | 81.6±2 | 80.8±4.7 | 45.6±4.8 |
| 11 | 11.2±3.9 | 26.4±4.1 | 0±0 | 0±0 | 0±0 | 0±0 |
| 12 | 67.2±5.9 | 76±5.7 | 0±0 | 0±0 | 0±0 | 0±0 |

| Row | Slide Block | Sort Shape | Stack Blocks | Stack Cups | Sweep to Dustpan | Turn Tap |
| 1 | 81.6±5.4 | 36±2.5 | 28.8±3.9 | 26.4±8.2 | 72±0 | 93.6±4.1 |
| 2 | 93±3.3 | 18±2 | 17±5.2 | 1±1.7 | 36±0 | 76±2.8 |
| 3 | 83±1.7 | 41±4.4 | 26.7±5 | 20±4.9 | 72±0 | 95±4.4 |
| 4 | 72±4 | 37±5.2 | 23±3.3 | 33±5.9 | 92±0 | 95±4.4 |
| 5 | 66±6 | 31±6.6 | 25±3.3 | 29±5.2 | 72±0 | 91±3.3 |
| 6 | 88±4.4 | 19.2±4.7 | 22.4±9 | 1.6±2 | 16±0 | 80.8±3 |
| 7 | 72.8±1.6 | 25.6±2 | 18.4±6 | 8.8±5.3 | 84±0 | 92±2.5 |
| 8 | 95.2±1.6 | 37.6±4.1 | 29.6±3.2 | 8.8±4.7 | 80±0 | 92.8±3 |
| 9 | 42±2 | 2±2 | 0±0 | 0±0 | 0±0 | 93±5.2 |
| 10 | 83±1.7 | 30.4±5.4 | 46.4±9.3 | 20.8±4.7 | 64±0 | 94.4±3.2 |
| 11 | 37.6±10.6 | 2.4±3.2 | 0.8±1.6 | 0±0 | 0±0 | 56.8±6.9 |
| 12 | 72.8±3 | 7.2±1.6 | 11.2±4.7 | 0±0 | 0±0 | 53±5.2 |

Table A2: Ablation results for RVT on RLBench, with per-task metrics (mean ± std) for each configuration.
cjEI5qXoT0

Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs

Haonan Chang1, Kowndinya Boyalakuntla1, Shiyang Lu1, Siwei Cai2, Eric Pu Jing1, Shreesh Keskar1, Shijie Geng1, Adeeb Abbas2, Lifeng Zhou2, Kostas Bekris1, Abdeslam Boularias1
1{hc856, kb1204, sl1642, epj25, skk139, sg1309, kb572, ab1544}@scarletmail.rutgers.edu
2sc3568@drexel.edu, adeebabbs@gmail.com, lz457@drexel.edu
1Rutgers University 2Drexel University

Abstract: We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for grounding a variety of entities, such as object instances, agents, and regions, with free-form text-based queries. Unlike conventional semantic-based object localization approaches, our system facilitates context-aware entity localization, allowing for queries such as "pick up a cup on a kitchen table" or "navigate to a sofa on which someone is sitting". In contrast to existing research on 3D scene graphs, OVSG supports free-form text input and open-vocabulary querying. Through a series of comparative experiments using the ScanNet [1] dataset and a self-collected dataset, we demonstrate that our proposed approach significantly surpasses the performance of previous semantic-based localization techniques. Moreover, we highlight the practical application of OVSG in real-world robot navigation and manipulation experiments. The code and dataset used for evaluation can be found at https://github.com/changhaonan/OVSG.

Keywords: Open-Vocabulary Semantics, Scene Graph, Object Grounding

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction

In this paper, we aim to address a fundamental problem in robotics: grounding semantic entities within the real world. Specifically, we explore how to unambiguously and accurately associate entities present in commands, such as object manipulation, navigation to a specific location, or communication with a particular user.

Currently, the prevailing method for grounding entities in the robotics domain is semantic detection [2]. Semantic detection methods are intuitive and stable. However, in scenes with multiple entities of the same category, semantic labels alone cannot provide a unique specification. In contrast, humans naturally overcome this grounding ambiguity by providing context-aware specifications, such as detailed descriptions and relative relations. For example, rather than simply designating "a cup", humans often specify "a blue cup on the shelf", "a coffee cup in the kitchen", or "Mary's favorite tea cup".

Inspired by this, a series of recent works introduce context relationships into the grounding problem [3, 4, 5, 6, 7]. These approaches employ 3D scene graphs as a scene representation that concurrently accounts for instance categories and inter-instance spatial context. In a 3D scene graph, concepts such as people, objects, and rooms are depicted as nodes, with attributes like color, material, and affordance assigned as node attributes; spatial relationships are represented as graph edges. Such a structure enables 3D scene graphs to seamlessly support context-aware object queries, such as "the red cup on the table in the dining room", provided that the attribute, the semantic category, and the relationship have been predefined in the graph.

However, this inevitably brings us to a more crucial question that this paper aims to answer: how do we cope with scenarios where the class category, relationship, and attribute are not available in the constructed 3D scene graph?
Tackling this question is vital if we wish to effectively integrate robots into real-world scenarios. To resolve the challenge, we present a novel framework in this paper: the Open-Vocabulary 3D Scene Graph (OVSG). To the best of our knowledge, OVSG is the first 3D scene graph representation that facilitates context-aware entity grounding, even with unseen semantic categories and relationships.

To evaluate the performance of our proposed system, we conduct a series of query experiments on ScanNet [1], ICL-NUIM [8], and a self-collected dataset, DOVE-G (Dataset for Open-Vocabulary Entity Grounding). We demonstrate that by combining open-vocabulary detection with 3D scene graphs, we can ground entities more accurately in real-world scenarios than by using the state-of-the-art open-vocabulary semantic localization method alone. Additionally, we designed two experiments to investigate the open-vocabulary capability of our framework. Finally, we showcase potential applications of OVSG through demonstrations of real-world robot navigation and manipulation.

Our contributions are threefold: 1) A new dataset containing eight unique scenarios and 4,000 language queries for context-aware entity grounding. 2) A novel 3D scene graph-based method to address context-aware entity grounding from open-vocabulary queries. 3) Demonstrations of real-world applications of OVSG, such as context-aware object navigation and manipulation.

2 Related Work

Open-Vocabulary Semantic Detection and Segmentation. The development of foundation vision-language pre-trained models, such as CLIP [9], ALIGN [10], and LiT [11], has facilitated the progress of 2D open-vocabulary object detection and segmentation techniques [12, 13, 14, 15, 16, 17, 18]. Among these approaches, Detic [16] stands out by providing open-vocabulary instance-level detection and segmentation simultaneously. However, even state-of-the-art single-frame methods like Detic suffer from perception inconsistency due to factors such as view angle, image quality, and motion blur. To address these limitations, Lu et al. proposed OVIR-3D [19], a method that fuses the detection results from Detic into an existing 3D model using 3D global data association. After fusion, the 3D scene is segmented into multiple instances, each with a unique Detic feature attached. Owing to its stable performance, we choose OVIR-3D as our semantic backbone.

Vision-Language Object Grounding. In contrast with object detection and segmentation, object grounding focuses on pinpointing an object within a 2D image or a 3D scene based on textual input. In the realm of 2D grounding, various studies, such as [20, 21, 22, 23], leverage vision-language alignment techniques to correlate visual and linguistic features. In the 3D context, object grounding is inherently linked to the challenges of robot navigation, and has thus gained significant attention from the robotics community. For instance, CoWs [24] integrates a CLIP gradient detector with a navigation policy for effective zero-shot object grounding. More recently, NLMap [25], ConceptFusion [26], and CLIP-Fields [27] opt to incorporate pixel-level open-vocabulary features into a 3D scene reconstruction, resulting in a queryable scene representation. While NLMap overlooks intricate relationships in its framework, ConceptFusion claims to be able to query objects from long text input with an understanding of object context. Thus, we include ConceptFusion as one of our baselines for 3D vision-language grounding.
3D Scene Graph. 3D scene graphs provide an elegant representation of objects and their relationships, encapsulating them as nodes and edges, respectively. The term "3D" denotes that each node within the scene possesses a three-dimensional position. Such graph structures have been widely researched in the robotics community [3, 4, 5, 7, 6, 28, 29]. In [3], Fisher et al. first introduced the concept of 3D scene graphs, where graph nodes are categorized by geometric shapes. Armeni et al. [4] and Kim et al. [5] then revisited this idea by incorporating semantic labels into graph nodes. These works establish a good foundation for semantic-aware 3D scene graphs, demonstrating that objects, rooms, and buildings can be effectively represented as graph nodes. Recently, Wald et al. [7] showed that 3D feature extraction and graph neural networks (GNNs) can directly infer semantic categories and object relationships from raw 3D point clouds. Rosinol et al. [6] further included dynamic entities, such as users, within the scope of 3D scene graph representations. While 3D scene graphs exhibit great potential in object retrieval and long-term motion planning, none of the existing methods support open-vocabulary queries and direct natural language interaction. Addressing these limitations is crucial for real-world deployment, especially for enabling seamless interaction with users.

Figure 1: An illustration of the proposed pipeline. The system inputs are the positional input Pu, user input Lu, an RGB-D scan I, and a query language Lq. The top section depicts the construction of Gs. Both Pu and Lu are directly fed into Gs. The RGB-D scan I is input into the open-vocabulary fusion system OVIR-3D, which outputs a position and a Detic feature for each object. Subsequently, the language descriptions for the agent and region are converted into features via different encoders. A dedicated Spatial Relationship Encoder is employed to encode spatial relationship features from pose pairs. The bottom section shows the building of Gq. The query language Lq used in this example is "I want to find Tom's bottle in the laboratory." An LLM is used to parse it into various elements, each with a description and type. These descriptions are then encoded into features by different encoders, forming Gq. Finally, grounding the query language Lq within the scene S becomes a problem of locating Gq within Gs. A candidate proposal and ranking algorithm are introduced for this purpose. The entity we wish to locate is represented by the central node of the selected candidate.

3 Open-Vocabulary 3D Scene Graph

3.1 Open-Vocabulary 3D Scene Graph Representation

An Open-Vocabulary 3D Scene Graph (OVSG) is denoted as $G = (V, E)$, where $V$ signifies the graph nodes and $E$ stands for the graph edges. Each node $v_i \in V$, $v_i = \{t_i, f_i, l_i, p_i\}$, consists of a node type $t_i$, an open-vocabulary feature $f_i$, a language description $l_i$ (optional), and a 3D position $p_i$ (optional); $i$ is the node index. In this study, we identify three primary node types $t_i$: object, agent, and region. The open-vocabulary feature $f_i$ associated with each node $v_i$ is contingent on the node type $t_i$, and the encoder utilized for $f_i$ accordingly depends on $t_i$. The 3D position $p_i = \{x_c, y_c, z_c, x_{min}, y_{min}, z_{min}, x_{max}, y_{max}, z_{max}\}$ of each entity is defined by a 3D bounding box and its center position. Edges in the graph are represented by $E = \{e_{i,j} \mid v_i, v_j \in V\}$, with $e_{i,j} = \{r_{i,j,k} = \{t_{i,j,k}, f_{i,j,k}, l_{i,j,k}\} \mid k = 0, \dots\}$.
Each edge $e_{i,j}$ encapsulates all relationships $r_{i,j,k}$ between the nodes $v_i$ and $v_j$. The triplet notation $(i, j, k)$ refers to the $k$-th relationship between nodes $v_i$ and $v_j$, and $t_{i,j,k}$ indicates the type of this relationship. We primarily categorize two kinds of relationships in this study: spatial relationships and abstract relationships. A short sentence $l_{i,j,k}$ is optionally provided to describe the relationship. The feature $f_{i,j,k}$ encodes the semantic meaning of the relationship, and its encoder depends on $t_{i,j,k}$. For a more detailed definition of these types, please refer to Section 3.3.

The primary distinction of OVSG from conventional 3D scene graph work is its utilization of semantic features, instead of discrete labels, to characterize nodes and relationships. These features are either directly trained within the language domain, like Sentence-BERT [30] and GloVe [31], or aligned to it, as with CLIP [9] and Detic [16]. The versatility of language features enables OVSG to handle diverse queries. The degree of similarity among nodes and edges is measured with a distance metric applied to their features:

$$
\mathrm{dist}(v_i, v_j) = \begin{cases} \infty & \text{if } t_i \neq t_j \\ 1 - \mathrm{dot}(f_i, f_j) & \text{otherwise} \end{cases} \qquad
\mathrm{dist}(e_{i,j}, e_{u,v}) = \min_{\forall k \in |e_{i,j}|,\ \forall w \in |e_{u,v}|} \mathrm{dist}(r_{i,j,k}, r_{u,v,w})
$$
$$
\mathrm{dist}(r_{i,j,k}, r_{u,v,w}) = \begin{cases} \infty & \text{if } t_{i,j,k} \neq t_{u,v,w} \\ 1 - \mathrm{dot}(f_{i,j,k}, f_{u,v,w}) & \text{if } t_{i,j,k} = t_{u,v,w} \neq \text{spatial} \\ \mathrm{SRP}(f_{i,j,k}, f_{u,v,w}) & \text{if } t_{i,j,k} = t_{u,v,w} = \text{spatial} \end{cases} \tag{1}
$$

where $|e_{i,j}|$ and $|e_{u,v}|$ are the numbers of relationships inside $e_{i,j}$ and $e_{u,v}$, and SRP refers to a Spatial Relationship Predictor (see Section 3.3 and Appendix B for more details). Notably, distances across different types are never directly compared; these distances are used to compute the type-free indices in Section 3.4.

3.2 Context-Aware Open-Vocabulary Entity Grounding

The problem we address can be formally defined using the open-vocabulary scene graph concept as follows: given a scene, represented as $S$, our objective is to localize an entity $s$, specified in natural language $L_q$, within the context of the scene $S$. Essentially, we seek to establish a mapping $\pi$ such that $s = \pi(L_q \mid S)$. An RGB-D scan of the scene $I$, user linguistic input $L_u$, and positional input $P_u$ are provided to facilitate this process. Significantly, the query language $L_q$ may encompass entity types and relationship descriptions not previously included in the scene graph construction phase.

Our proposed procedure can be separated into two main stages. The first stage involves the construction of the scene graph. From the user input $L_u$ and the RGB-D scan $I$, we construct an open-vocabulary scene graph for the entire scene, denoted as $G_s$. This is a one-time process that can be reused for every subsequent query. When a new query is introduced, we also construct an OVSG from the query $L_q$, denoted as $G_q$. Once we have both graphs $G_s$ and $G_q$, we proceed to the second stage, graph matching. Here, we match the query graph $G_q$ with a sub-graph of the whole scene graph $G_s$; the queried entity is situated within the matched sub-graph.

3.3 3D Scene Graph Building

Type definition. Before delving into the scene graph construction procedure, we first delineate the categories of node types and edge types this paper pertains to. The term Object signifies static elements within a scene, such as sofas, tables, and so forth. The term Agent is attributed to dynamic, interactive entities in the scene, ranging from humans to robots. Region indicates a specific area, varying in scale from the surface of a tabletop to an entire room or building.
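To make the notation concrete, here is a minimal Python sketch (our own, with hypothetical field names) of the node and relationship records just defined and the distance metric of Eq. 1; `srp` stands in for the learned Spatial Relationship Predictor described in Section 3.3.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    node_type: str                 # "object", "agent", or "region" (t_i)
    feature: np.ndarray            # open-vocabulary feature f_i, unit-normalized
    description: str = ""          # optional language description l_i
    position: Optional[np.ndarray] = None  # optional 9-dim center + box p_i

@dataclass
class Relationship:
    rel_type: str                  # "spatial" or "abstract" (t_{i,j,k})
    feature: np.ndarray            # relationship feature f_{i,j,k}

def node_dist(a: Node, b: Node) -> float:
    # Eq. 1: infinite distance across types, cosine distance within a type.
    if a.node_type != b.node_type:
        return float("inf")
    return 1.0 - float(np.dot(a.feature, b.feature))

def rel_dist(r: Relationship, s: Relationship, srp) -> float:
    if r.rel_type != s.rel_type:
        return float("inf")
    if r.rel_type == "spatial":
        return srp(r.feature, s.feature)   # learned Spatial Relationship Predictor
    return 1.0 - float(np.dot(r.feature, s.feature))

def edge_dist(edge_a, edge_b, srp) -> float:
    # An edge holds several relationships; Eq. 1 keeps the best-matching pair.
    return min(rel_dist(r, s, srp) for r in edge_a for s in edge_b)
```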
Regarding relationships, spatial describes positional relationships between two entities, such as Tom being in the kitchen. Conversely, abstract relationships are highly adaptable, enabling us to elucidate relationships between an agent and an object (for instance, a cup belonging to Mary) or the affordance relationship between two objects, such as a key being paired with a door.

Input process. The inputs for $G_s$ consist of an RGB-D scan set $I$, a user language input $L_u$, and a user position input $P_u$. The $L_u$ input assigns names to agents and regions and provides descriptions of abstract relationships. $P_u$ provides the locations of the agents and regions (not the object positions), and it can be autonomously generated using existing algorithms like DSGS [6]. Since this process is not the focus of our study, we assume $P_u$ is pre-determined in this paper. The input $I$ is an RGB-D scan of the entire scene, which is fed into the Open-Vocabulary 3D Instance Retrieval (OVIR-3D) [19] system, a fusion system operating at the instance level. OVIR-3D returns a set of objects, each denoted by a position $p_i$ and a Detic feature $f_i^{Detic}$.

$G_q$ accepts a language query $L_q$ as its input. An exemplary query, as depicted in Figure 1, is "I want to find Tom's bottle in the laboratory". To parse this language, we utilize a large language model (LLM), such as GPT-3.5 or LLaMA. Utilizing a meticulously engineered prompt (refer to Appendix A for more details), we can interpret the different entities within the query.

Feature encoding. As specified in Eq. 1, the calculation of the similarity between nodes and edges relies heavily on their features. The operation of computing these features is termed the feature encoding process.

Instead of using a unified encoder as in previous works [25, 26], we choose different encoders for the various node and relationship types. Since the inputs of $G_s$ and $G_q$ differ, the selection of encoders for each graph also varies. Object features in $G_s$ are generated by deploying OVIR-3D on the 3D scan of the scene; these features are Detic features. Meanwhile, objects in $G_q$ are encoded from their names $l$ (parsed by the LLM during the input process) using the CLIP text encoder. Because the Detic feature is directly trained to align with the CLIP text feature, we can compute distances for object nodes between $G_s$ and $G_q$ using Eq. 1. Agent and region nodes in $G_s$ are identified by their names in the user input $L_u$, whereas in $G_q$ they are specified by names $l$; for both, we employ Sentence-BERT [30] to encode the language features. As for relationships, we differentiate between spatial and abstract relationships. In $G_s$, the input for spatial relationships comes from the positions of the corresponding nodes; in $G_q$, it comes from language descriptions $l$ parsed from $L_q$ by the LLM. Given the absence of a standardized approach for spatial-language encoding, we trained a spatial encoder for this purpose (see Appendix B). Finally, for abstract relationship features, the input in $G_s$ is language $l$ from the user input $L_u$, and in $G_q$ the input is also textual; we use GloVe to encode these texts on both sides.

Multiple distinct encoders are utilized during the feature encoding step. Different encoders have varied emphases, and using a combination improves the robustness of OVSG. For instance, GloVe is trained to be sensitive to nuances like sentiment, while Sentence-BERT is not; therefore, we use GloVe for abstract relationships to better distinguish relationships such as "like" and "dislike". Conversely, while GloVe has a predefined vocabulary list, Sentence-BERT does not; hence, for encoding the names of agents and regions, we prefer Sentence-BERT. Moreover, OVSG is designed with a modularized structure, allowing future developers to easily introduce new types and feature encoders into OVSG.
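A rough sketch of the per-type encoder dispatch described above follows. The specific checkpoints (`ViT-B/32`, `all-MiniLM-L6-v2`, `glove-wiki-gigaword-300`) are common public models chosen by us for illustration; the paper does not specify which ones it uses.

```python
import numpy as np
import torch
import clip                                  # OpenAI CLIP package
import gensim.downloader as gensim_api
from sentence_transformers import SentenceTransformer

clip_model, _ = clip.load("ViT-B/32")        # text encoder aligned with Detic features
sbert = SentenceTransformer("all-MiniLM-L6-v2")
glove = gensim_api.load("glove-wiki-gigaword-300")

def encode(text: str, kind: str) -> np.ndarray:
    """Dispatch a piece of text to the encoder used for its node/edge type."""
    if kind == "object":                     # query-side object names -> CLIP text space
        with torch.no_grad():
            feat = clip_model.encode_text(clip.tokenize([text]))[0]
        return feat.cpu().numpy()
    if kind in ("agent", "region"):          # names -> Sentence-BERT
        return sbert.encode(text)
    if kind == "abstract":                   # e.g., "likes" vs. "dislikes" -> GloVe average
        words = [w for w in text.lower().split() if w in glove]
        return np.mean([glove[w] for w in words], axis=0)
    raise ValueError(f"unknown kind: {kind}")
```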
3.4 Sub-graph Matching

Following the input processing and feature encoding phases, two OVSG representations are constructed: one for the scene and another for the query, denoted by $G_s$ and $G_q$ respectively. The problem of grounding $L_q$ within the scene $S$ now effectively translates to locating $G_q$ within $G_s$. In general, the subgraph-matching problem is NP-hard, prompting us to make several assumptions to simplify it. In this study, we assume that $G_q$ is a star graph, meaning that a central node exists and all other nodes are exclusively linked to this central node. (If $G_q$ is not a star graph, we extract a sub-star-graph from it and use that as our query graph.)

The pipeline of sub-graph matching is illustrated on the right side of Figure 1. It is a two-step procedure: candidate proposal and re-ranking. Let us denote the center of $G_q$ as $v_q^c$. Initially, we traverse all nodes $v_i \in V_s$, ranking them based on their distance to $v_q^c$, computed with Eq. 1. Subsequently, we extract the local subgraph $G_s^i$ surrounding each candidate $v_s^i$; these extracted subgraphs serve as our candidate subgraphs. In the second phase, we re-rank these candidates using a graph-similarity metric $\tau(G_q, G_s^i)$. To evaluate graph similarity, we examine three distinct methodologies: the likelihood, the Jaccard coefficient, and the Szymkiewicz-Simpson index.

Likelihood. Assuming the features of nodes and edges all originate from a normal distribution, we can define the likelihood of nodes and edges being identical as $L(v_i, v_j) = \exp(-\mathrm{dist}(f_i, f_j)/\sigma_v)$ for nodes and $L(e_{i,j}, e_{u,v}) = \exp(-\mathrm{dist}(f_{i,j}, f_{u,v})/\sigma_e)$ for edges, where $\sigma_v$ and $\sigma_e$ are balancing parameters. From this, we can derive the graph-level likelihood $\tau_L$ as:

$$
\tau_L(G_q, G_s^i) = L(v_q^c, v_{s_i}^c) \times \prod_{k \in |V_q|} \max_{j \in |V_{s_i}|} \left[ L(v_q^k, v_j) \cdot L(e_q^{c,k}, e_{s_i}^{c,j}) \right] \tag{2}
$$

where $v_{s_i}^c$ is the center node of $G_s^i$. The insight behind this formula is to iterate over all possible node-level associations and select the one that maximizes the overall likelihood that $G_q$ matches $G_s^i$. Notably, we use $\sigma_v$ and $\sigma_e$ to balance the node-wise and edge-wise likelihoods; in practice, we use $\sigma_v = 1.0$ and $\sigma_e = 2.0$ to make the matching more sensitive to node-level semantics.

Jaccard coefficient & Szymkiewicz-Simpson index. In addition to the likelihood index, we also consider other widely used graph similarity indices, namely the Jaccard and Szymkiewicz-Simpson indices. Both measure the similarity between two sets. We adopt a similar method as in [7], generating a set $S(G)$ for each graph $G$ by combining nodes and edges, such that $|S(G)| = |V| + |E|$. The Jaccard coefficient $\tau_J$ and Szymkiewicz-Simpson index $\tau_S$ are then defined as follows:

$$
\tau_J(G_q, G_s^i) = \frac{|S(G_q) \cap S(G_s^i)|}{|S(G_q)| + |S(G_s^i)| - |S(G_q) \cap S(G_s^i)|}, \qquad
\tau_S(G_q, G_s^i) = \frac{|S(G_q) \cap S(G_s^i)|}{\min(|S(G_q)|, |S(G_s^i)|)} \tag{3}
$$

Given that we already know $|S(G_q)|$ and $|S(G_s^i)|$, we simply need to compute $|S(G_q) \cap S(G_s^i)|$, which consists of the nodes and edges that belong to both $G_q$ and $G_s^i$. We define this overlap by applying distance thresholds $\varepsilon_v$ and $\varepsilon_e$ to nodes and edges separately:

$$
S(G_q) \cap S(G_s^i) = \{(v_q^k, v_{s_i}^{\pi(k)}) \mid \mathrm{dist}(f_q^k, f_{s_i}^{\pi(k)}) < \varepsilon_v\} \cup \{(e_q^k, e_{s_i}^{\pi(k)}) \mid \mathrm{dist}(e_q^k, e_{s_i}^{\pi(k)}) < \varepsilon_e\} \tag{4}
$$

Here, $\pi$ is a data association between $G_q$ and $G_s^i$, with $\pi(k) = \mathrm{argmin}_{\pi(k)} \mathrm{dist}(s_k, s_{\pi(k)})$, and $\varepsilon_v$ and $\varepsilon_e$ are threshold parameters.
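A compact sketch of the three candidate-ranking scores (Eqs. 2-4), reusing the distance helpers from the earlier sketch. The `center`, `leaves`, `edge(k)`, `nodes`, and `edges` accessors on the star graphs are hypothetical interfaces of ours; σ_v = 1.0 and σ_e = 2.0 follow the text, while the thresholded match count is assumed precomputed with ε_v, ε_e.

```python
import math

def likelihood(dist: float, sigma: float) -> float:
    # L = exp(-dist / sigma); an infinite distance yields zero likelihood.
    return math.exp(-dist / sigma) if math.isfinite(dist) else 0.0

def tau_likelihood(Gq, Gsi, node_dist, edge_dist, sigma_v=1.0, sigma_e=2.0):
    """Eq. 2: likelihood that the star query graph Gq matches candidate Gsi."""
    score = likelihood(node_dist(Gq.center, Gsi.center), sigma_v)
    for k, vq in enumerate(Gq.leaves):
        # Best node-and-edge association for leaf k among the candidate's leaves.
        score *= max(
            likelihood(node_dist(vq, vj), sigma_v)
            * likelihood(edge_dist(Gq.edge(k), Gsi.edge(j)), sigma_e)
            for j, vj in enumerate(Gsi.leaves)
        )
    return score

def tau_sets(Gq, Gsi, n_matches: int, use_simpson: bool = False) -> float:
    """Eqs. 3-4: set-overlap scores, with n_matches = |S(Gq) ∩ S(Gsi)|
    already computed by thresholding distances with eps_v and eps_e."""
    size_q = len(Gq.nodes) + len(Gq.edges)
    size_s = len(Gsi.nodes) + len(Gsi.edges)
    if use_simpson:
        return n_matches / min(size_q, size_s)          # Szymkiewicz-Simpson
    return n_matches / (size_q + size_s - n_matches)    # Jaccard
```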
The differences between $\tau_L$, $\tau_J$, and $\tau_S$ can be understood as follows: $\tau_L$ describes the maximum likelihood among all possible matches between $G_q$ and $G_s^i$, while $\tau_J$ and $\tau_S$ use the thresholds $\varepsilon_v$, $\varepsilon_e$ to convert the node and edge matches to binary decisions and then measure the overall match rate with different normalizations.

4 System Evaluation

Our experiments with the OVSG framework address the following research questions: 1) How does our context-aware grounding method compare to prevailing approaches, including the state-of-the-art semantic method and ConceptFusion [32], a recent work in the landscape of 3D semantic/spatial mapping? 2) How well does OVSG handle open-vocabulary queries? 3) What differences do our graph similarity-based methods show? 4) How well does OVSG perform in a real robot environment?

These questions are imperative as they test not only the robustness of the OVSG framework but also its comparative efficacy against notable methods like ConceptFusion in handling the intricacies of context-aware open-vocabulary queries.

4.1 Queries, Datasets, Metrics & Baselines

Queries. We use two categories of queries for evaluation:

• Object-only Queries. These queries are devoid of any specific agent or region preference. They are less generic and assess the system's grounding ability based purely on objects. An example might be: "Can you identify a monitor with a keyboard positioned behind it?"

• Whole Queries. These queries inherently contain a mix of agent, region, and object preferences; for instance, they may include agents and other entity types. An example would be: "Locate the shower jet that Nami loves, with a mirror to its right."

ScanNet. We employed ScanNet's validation set (312 scenes) for evaluation. Since ScanNet only includes objects, we emulated agents and their abstract relationships to objects, captured spatial relationships between objects, and extracted object features via OVIR-3D before integrating the dataset into our evaluation pipeline. Resource limitations prevented manual labeling of the scenes; hence, we synthetically generated approximately 62,000 queries for evaluation (details in Appendix E.1).

DOVE-G. We created DOVE-G to support open-vocabulary queries within scenes using natural language. Each scene includes manually labeled ground truth and 50 original natural language queries ($L_q$). Using LLMs, we expanded this by generating four extra sets of queries, totaling 250 queries per scene and 4,000 overall, to test OVSG's capabilities with diverse language expressions.

ICL-NUIM. To thoroughly compare our method, notably with ConceptFusion, we utilized the ICL-NUIM dataset [8]. We created 359 natural language queries for the 'Whole Query' category and 190 natural language queries for the 'Object-only Query' category. It should be noted that our approach is not merely a superficial addition of another dataset; rather, we adapted and generated natural language queries for each scene within ICL-NUIM, emulating our methodology with DOVE-G. To adapt it to our framework, we performed preprocessing steps similar to those for DOVE-G, most importantly manually labeling ground-truth annotations and leveraging OVIR-3D for feature extraction.
Using this dataset, we demonstrate the superiority of our proposed method over ConceptFusion, especially on complex natural language queries that hinge on multiple relationships as context.

Evaluation Metrics. For each query, we evaluate the system's performance using three distinct metrics:

• IoU_BB: For each query, this measures the 3D bounding-box IoU between the ground truth and the top-k candidates yielded by our system.

• IoU_3D: For each query, this measures the IoU between the point cloud indices of the ground-truth instance and the predicted instance.

• Grounding Success Rate: For each scene, this measures the fraction of queries where the system's predictions accurately match the ground truth, given that the overlap is significant (IoU_BB ≥ 0.5 or IoU_3D > 0.5). The overlap threshold can be adjusted to alter the strictness of the success criteria.

We report the Top1 and Top3 Grounding Success Rates and average IoU scores for each scene, reflecting the performance of our system in the top-k results returned for each query.
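For concreteness, here is a minimal implementation of the IoU_BB metric for axis-aligned boxes, taking only the (min, max) corners of the 9-dimensional position p_i defined in Section 3.1:

```python
import numpy as np

def iou_bb(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """Axis-aligned 3D bounding-box IoU for boxes given as
    (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])       # intersection lower corner
    hi = np.minimum(box_a[3:], box_b[3:])       # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return float(inter / (vol_a + vol_b - inter))
```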
| Query Type | # Queries | Metric | Avg. Top1 per query | Avg. Top3 per query |
| Object-only | 18,683 | IoU_BB | 0.38 / 0.15 / 0.4 / 0.51 | 0.52 / 0.4 / 0.52 / 0.55 |
| | | IoU_3D | 0.38 / 0.22 / 0.44 / 0.55 | - |
| | | Grounding Success Rate_BB | 38.52 / 15.29 / 40.99 / 52.18 | 52.95 / 41.25 / 53.6 / 56.25 |
| | | Grounding Success Rate_3D | 45.13 / 17.22 / 47.79 / 60.35 | - |
| Whole | 20,173 | IoU_BB | 0.37 / 0.22 / 0.44 / 0.55 | 0.53 / 0.45 / 0.55 / 0.57 |
| | | IoU_3D | 0.39 / 0.16 / 0.41 / 0.53 | - |
| | | Grounding Success Rate_BB | 38.56 / 24.33 / 47.77 / 58.85 | 56.28 / 47.84 / 59.87 / 61.6 |
| | | Grounding Success Rate_3D | 43.86 / 24.83 / 51.22 / 63.88 | - |

Table 1: Performance of OVSG on ScanNet. Each cell lists OVIR-3D / OVSG-J / OVSG-S / OVSG-L (ours).

Baselines. We assess five methods in our study. The state-of-the-art open-vocabulary grounding method, OVIR-3D, is our primary baseline; it does not leverage any inter-notion relations, providing a comparative measure for the effectiveness of contextual information integration in the other methods. Unlike OVIR-3D, ConceptFusion integrates spatial relationships implicitly. The other three methods, namely OVSG-J, OVSG-S, and OVSG-L (for the Jaccard coefficient, the Szymkiewicz-Simpson index, and the likelihood, respectively), implement context-aware entity grounding using the different sub-graph matching techniques detailed in Section 3.4.

| Query Type | # Queries | Metric | Avg. Top1 per query | Avg. Top3 per query |
| Object-only | 320 | IoU_BB | 0.37 / 0.14 / 0.39 / 0.49 | 0.57 / 0.36 / 0.56 / 0.56 |
| | | IoU_3D | 0.41 / 0.14 / 0.43 / 0.54 | - |
| | | Grounding Success Rate_BB | 36.56 / 13.75 / 39.06 / 48.44 | 58.12 / 34.06 / 56.25 / 56.56 |
| | | Grounding Success Rate_3D | 49.69 / 18.44 / 53.13 / 67.82 | - |
| Whole | 400 | IoU_BB | 0.35 / 0.2 / 0.41 / 0.51 | 0.55 / 0.41 / 0.55 / 0.56 |
| | | IoU_3D | 0.37 / 0.21 / 0.43 / 0.55 | - |
| | | Grounding Success Rate_BB | 35.5 / 23.0 / 44.75 / 54.25 | 56.0 / 41.0 / 56.75 / 57.0 |
| | | Grounding Success Rate_3D | 41.5 / 25.25 / 50.25 / 65.75 | - |

Table 2: Performance of OVSG on DOVE-G. Each cell lists OVIR-3D / OVSG-J / OVSG-S / OVSG-L (ours).

| Query Type | # Queries | Metric | Avg. Top1 per query | Avg. Top3 per query |
| Object-only | 190 | IoU_BB | - / - / 0.32 / 0.18 / 0.37 / 0.5 | 0.55 / 0.49 / 0.55 / 0.56 |
| | | IoU_3D | 0.13 (0.3) / 0.06 (0.15) / 0.37 / 0.19 / 0.41 / 0.56 | - |
| | | Grounding Success Rate_BB | - / - / 35.26 / 16.84 / 38.95 / 51.6 | 54.74 / 48.95 / 54.74 / 55.79 |
| | | Grounding Success Rate_3D | 7.37 (41.18) / 2.64 (14.71) / 48.95 / 22.64 / 51.58 / 68.95 | - |
| Whole | 359 | IoU_BB | - / - / 0.33 / 0.34 / 0.47 / 0.61 | 0.62 / 0.56 / 0.64 / 0.64 |
| | | IoU_3D | - / - / 0.35 / 0.35 / 0.49 / 0.64 | - |
| | | Grounding Success Rate_BB | - / - / 39.28 / 44.29 / 59.61 / 74.09 | 72.42 / 66.3 / 74.09 / 74.65 |
| | | Grounding Success Rate_3D | - / - / 45.97 / 44.29 / 61.84 / 78.56 | - |

Table 3: Performance of OVSG & ConceptFusion on ICL-NUIM. Top1 cells list ConceptFusion (w/o rel) / ConceptFusion / OVIR-3D / OVSG-J / OVSG-S / OVSG-L (ours); Top3 cells list OVIR-3D / OVSG-J / OVSG-S / OVSG-L (ours).

4.2 Performance

ScanNet. Table 1 averages results across the 312 ScanNet scenes. Contextual data greatly improved entity grounding, with the graph-similarity variants (OVSG-S, OVSG-L) surpassing OVIR-3D, especially in scenes with repetitive entities such as bookstores. More details are in Appendix E.

DOVE-G. Table 2 averages performance over the DOVE-G scenes for five query sets. OVSG-L consistently led, as further detailed in Appendix F.3. While OVSG-J and OVSG-S were competitive in some scenes, OVSG-L was generally superior. OVIR-3D shone in the Top3 category, particularly since DOVE-G scenes have fewer repetitive entities. Additional insights are in Appendix F.

ICL-NUIM. Table 3 shows the ICL-NUIM results, with OVSG-L outperforming the other methods, especially in the 'Whole Query' segment, in contrast with the ScanNet and DOVE-G performances. ConceptFusion's performance was inconsistent across the ICL-NUIM scenes (see Appendix G.3), with notable success in one scene (the parenthesized values in Table 3). Simplified queries improved ConceptFusion's results, as shown in the 'ConceptFusion (w/o rel)' column. Due to its point-level fusion approach, we evaluated different point thresholds and found optimal results with the top 1500 points. Metrics like IoU_BB are not applicable to ConceptFusion. Further details on ICL-NUIM are in Appendix G. Despite ConceptFusion's strategy of avoiding motion-blurred ScanNet scenes [32], its efficacy was still suboptimal in certain clear scenes.

Apart from these results, we also provide a vocabulary analysis of OVSG as well as two robot experiments; due to space limits, these are deferred to Appendices C and D.

5 Conclusion & Limitation

Although we have demonstrated the effectiveness of the proposed OVSG in a series of experiments, three major limitations remain in our current implementation. First, OVSG relies heavily on an open-vocabulary fusion system like OVIR-3D, which may lead to missed queries if that system fails to identify an instance. Second, the current language processing system's strong dependence on LLMs exposes it to inaccuracies, as any failure in parsing the query language may yield incorrect output. Third, as discussed in Section 3.4, calculating the graph likelihood by multiplying node and edge likelihoods may not be optimal, as likelihoods from distinct types might carry varying levels of importance and follow different distributions. Accurately balancing these factors remains a challenge for future research, as our efforts with a GNN have not yielded satisfactory results.

Despite the aforementioned areas for improvement, we observe that OVSG significantly improves context-aware entity grounding compared to existing open-vocabulary semantic methods. Since OVSG only requires natural language as the query input, we believe it holds great potential for seamless integration into numerous existing robotics systems.
Since OVSG only requires natural language as the query input, we believe it holds great potential for seamless integration into numerous existing robotics systems.

Acknowledgments

This work is supported by NSF awards 1846043 and 2132972.

References

[1] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
[2] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
[3] M. Fisher, M. Savva, and P. Hanrahan. Characterizing structural relationships in scenes using graph kernels.
[4] I. Armeni, Z.-Y. He, J. Gwak, A. R. Zamir, M. Fischer, J. Malik, and S. Savarese. 3D scene graph: A structure for unified semantics, 3D space, and camera. October 2019. URL http://arxiv.org/abs/1910.02527.
[5] U. H. Kim, J. M. Park, T. J. Song, and J. H. Kim. 3-D scene graph: A sparse and semantic representation of physical environments for intelligent agents. IEEE Transactions on Cybernetics, 50:4921–4933, December 2020. ISSN 2168-2275. doi:10.1109/TCYB.2019.2931042.
[6] A. Rosinol, A. Gupta, M. Abate, J. Shi, and L. Carlone. 3D dynamic scene graphs: Actionable spatial perception with places, objects, and humans. February 2020. URL http://arxiv.org/abs/2002.06289.
[7] J. Wald, H. Dhamo, N. Navab, and F. Tombari. Learning 3D semantic scene graphs from 3D indoor reconstructions. April 2020. URL http://arxiv.org/abs/2004.03967.
[8] A. Handa, T. Whelan, J. McDonald, and A. J. Davison. A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 1524–1531. IEEE, 2014.
[9] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[10] C. Jia, Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. Le, Y.-H. Sung, Z. Li, and T. Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.
[11] X. Zhai, X. Wang, B. Mustafa, A. Steiner, D. Keysers, A. Kolesnikov, and L. Beyer. LiT: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133, 2022.
[12] B. Li, K. Q. Weinberger, S. Belongie, V. Koltun, and R. Ranftl. Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546, 2022.
[13] G. Ghiasi, X. Gu, Y. Cui, and T.-Y. Lin. Scaling open-vocabulary image segmentation with image-level labels. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVI, pages 540–557. Springer, 2022.
[14] J. Xu, S. De Mello, S. Liu, W. Byeon, T. Breuel, J. Kautz, and X. Wang. GroupViT: Semantic segmentation emerges from text supervision. arXiv preprint arXiv:2202.11094, 2022.
[15] X. Gu, T.-Y. Lin, W. Kuo, and Y. Cui. Open-vocabulary object detection via vision and language knowledge distillation. In International Conference on Learning Representations.
[16] X. Zhou, R. Girdhar, A. Joulin, P. Krähenbühl, and I. Misra. Detecting twenty-thousand classes using image-level supervision.
In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part IX, pages 350–368. Springer, 2022.
[17] M. Minderer, A. Gritsenko, A. Stone, M. Neumann, D. Weissenborn, A. Dosovitskiy, A. Mahendran, A. Arnab, M. Dehghani, Z. Shen, et al. Simple open-vocabulary object detection with vision transformers. arXiv preprint arXiv:2205.06230, 2022.
[18] L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y. Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965–10975, 2022.
[19] S. Lu, H. Chang, E. P. Jing, A. Boularias, and K. Bekris. OVIR-3D: Open-vocabulary 3D instance retrieval without training on 3D data. In 7th Annual Conference on Robot Learning, 2023.
[20] J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy. Generation and comprehension of unambiguous object descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11–20, 2016.
[21] V. K. Nagaraja, V. I. Morariu, and L. S. Davis. Modeling context between objects for referring expression understanding. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV, pages 792–807. Springer, 2016.
[22] L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg. Modeling context in referring expressions. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part II, pages 69–85. Springer, 2016.
[23] W. Kuo, F. Bertsch, W. Li, A. Piergiovanni, M. Saffar, and A. Angelova. FindIt: Generalized localization with natural language queries. March 2022. URL http://arxiv.org/abs/2203.17273.
[24] S. Y. Gadre, M. Wortsman, G. Ilharco, L. Schmidt, and S. Song. CoWs on pasture: Baselines and benchmarks for language-driven zero-shot object navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23171–23181, 2023.
[25] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler. Open-vocabulary queryable scene representations for real world planning. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11509–11522. IEEE, 2023.
[26] K. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi, N. Keetha, A. Tewari, J. Tenenbaum, C. de Melo, M. Krishna, L. Paull, F. Shkurti, and A. Torralba. ConceptFusion: Open-set multimodal 3D mapping. arXiv, 2023.
[27] N. M. M. Shafiullah, C. Paxton, L. Pinto, S. Chintala, and A. Szlam. CLIP-Fields: Weakly supervised semantic fields for robotic memory. arXiv preprint arXiv:2210.05663, 2022.
[28] H. Chang, D. M. Ramesh, S. Geng, Y. Gan, and A. Boularias. Mono-STAR: Mono-camera scene-level tracking and reconstruction. arXiv preprint arXiv:2301.13244, 2023.
[29] H. Chang and A. Boularias. Scene-level tracking and reconstruction without object priors. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3785–3792. IEEE, 2022.
[30] N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, November 2019. URL https://arxiv.org/abs/1908.10084.
[31] J. Pennington, R. Socher, and C. D. Manning.
GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014.
[32] K. M. Jatavallabhula, A. Kuwajerwala, Q. Gu, M. Omama, T. Chen, S. Li, G. Iyer, S. Saryazdi, N. Keetha, A. Tewari, et al. ConceptFusion: Open-set multimodal 3D mapping. arXiv preprint arXiv:2302.07241, 2023.
[33] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Transactions on Robotics, 37(6):1874–1890, 2021.

A Prompt Engineering for Query Parse

As Chain of Thought (CoT) prompting has demonstrated, by providing a series of detailed examples we can guide a large language model to generate our desired output while maintaining a required format. The design of these examples is also known as prompt engineering.

A.1 Prompt Example Illustration

Consider this natural language query as an example: "Could you point out Zoro's go-to cup, which we usually keep to the right of our espresso machine, on the left of the trash can, and in front of the coffee kettle?"

In this query, the user is asking about the location of a cup, which has three different spatial relationships with other reference entities and one abstract relationship with a user named Zoro.

The desired output we provided is shown below:

There are three notions here: zoro, cup, espresso machine, trash can, coffee kettle. I can only use the relation provided.
The query target is cup. The relationship between zoro and cup is like. This relationship is an abstract relationship.
The relationship between cup and espresso machine is right to. This relationship is a spatial relationship.
The relationship between cup and trash can is left to. This relationship is a spatial relationship.
The relationship between coffee kettle and cup is behind. This relationship is a spatial relationship.
The notion, target, and relationship are:
'''
target @ cup {object}
zoro {user} – like [abstract] – cup {object}
cup {object} – right to [spatial] – espresso machine {object}
cup {object} – left to [spatial] – trash can {object}
coffee kettle {object} – behind [spatial] – cup {object}
'''

This example starts with some reasoning in natural language and ends with a structured output which can be parsed by code. A breakdown of the structure is as follows:

target @ cup {object}: This line specifies the target object, which is a cup.
zoro {user} – like [abstract] – cup {object}: This line represents a relationship between a user named Zoro and the cup (an object): Zoro likes the cup (it is Zoro's favorite). In our current implementation, like is a relation of type abstract.
cup {object} – right to [spatial] – espresso machine {object}: This line represents a spatial relationship between the cup (an object) and the espresso machine (an object). The cup is positioned to the right of the espresso machine.
cup {object} – left to [spatial] – trash can {object}: This line represents a spatial relationship between the cup (an object) and the trash can (an object). The cup is positioned to the left of the trash can.
coffee kettle {object} – behind [spatial] – cup {object}: This line describes a spatial relationship between the coffee kettle (an object) and the cup (an object). The coffee kettle is positioned behind the cup.
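The structured block above is designed to be machine-parsable. The following is a minimal, hypothetical parser sketch (not the authors' released code); the regular expression and helper name are our own:

```python
import re

# Matches lines of the form:  A {type} – relation [rel_type] – B {type}
REL_RE = re.compile(
    r"(?P<src>.+?)\s*\{(?P<src_type>\w+)\}\s*[–-]+\s*"
    r"(?P<rel>.+?)\s*\[(?P<rel_type>\w+)\]\s*[–-]+\s*"
    r"(?P<dst>.+?)\s*\{(?P<dst_type>\w+)\}"
)

def parse_llm_output(block: str):
    """Extract the query target and a list of relation triples."""
    target, triples = None, []
    for line in block.strip().splitlines():
        line = line.strip().strip("“”‘’'\"")
        if line.startswith("target @"):
            # Drop the trailing "{object}" marker to recover the target name.
            target = re.sub(r"\{.*?\}", "", line[len("target @"):]).strip()
        else:
            m = REL_RE.match(line)
            if m:
                triples.append((m["src"].strip(), m["src_type"],
                                m["rel"].strip(), m["rel_type"],
                                m["dst"].strip(), m["dst_type"]))
    return target, triples
```

Applied to the cup example above, this would return "cup" as the target together with four (entity, type, relation, relation type, entity, type) triples, which is exactly the information needed to build the graph-based query.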
A.2 More prompt examples

Before asking the LLM to process the real user input, we first provide around ten examples as a prompt to control the output format. We select a few of these examples to show here.

Question: I want to get the cracker box around the table in the kitchen.
There are three notions here: cracker box, table, and kitchen. I can only use the relation provided.
The query target is the cracker box.
This is a query for an object of the known category: cracker box.
The relationship between the cracker box and the table is 'near'. This relationship is a spatial relationship.
The relationship between the table and the kitchen is 'in'. This relationship is a spatial relationship.
The notion, target, and relationship are:
'''
target @ cracker box {object}
cracker box {object} – near [spatial] – table {object}
table {object} – in [spatial] – kitchen {region}
'''

Question: Bring Tom his favorite drink.
There are two notions here: Tom and drink. I can only use the relation provided.
This is a query for an object of a known category: drink.
The relationship between me and drink is 'like'. This relationship is a spatial relationship.
The query target is 'drink'.
The notion, target, and relationship are:
'''
target @ drink {object}
Tom {user} – like [spatial] – drink {object}
'''

Question: Can you find Mary's favourite coffee cup? It might be at the kitchen.
There are three notions here: Mary, coffee cup, and kitchen.
This is a query for an object of known category: coffee cup.
The relationship between Mary and coffee cup is like. This relationship is a user relationship.
The relationship between coffee cup and kitchen is in. This relationship is a spatial relationship.
The query target is coffee cup.
The notion, target, and relationship are:
'''
target @ coffee cup {object}
Mary {user} – like [user] – coffee cup {object}
coffee cup {object} – in [spatial] – kitchen {region}
'''

Figure 3: The illustration highlights the non-linear nature of the spatial language feature. Assume both the spatial pose feature and the spatial text feature can be represented within a single linear space. For instance, consider A being to the left and in front of B, while C is to the left but behind B. The pose feature for A relative to B should align closely with the text features "left" and "front". Conversely, the pose feature for C relative to B should be close to the text feature "left" but distant from "front". If all these features were mapped onto a linear space, the pose feature f_pose(A, B) would paradoxically be both near and far from f_pose(C, B).

B Spatial Relationship Prediction Pipeline

Figure 2: Architecture of the spatial-language encoder and predictor. The blue block is the spatial pose encoder, and the yellow block is the spatial relationship predictor.

The Spatial Relationship Predictor module aims to estimate the likelihood between pose pairs and language descriptions. Given that there is no standard solution to this spatial-language alignment challenge, we have developed our own encoder-predictor structure.

Network Structure The input to the spatial pose encoder (depicted as a blue block in Figure 2) is a pose pair of shape (N, 18). An entity's pose in the OVSG is characterized by the boundaries and center of its bounding box, specifically (x_min, y_min, z_min, x_max, y_max, z_max, x_center, y_center, z_center). We employ a five-layer MLP to encode this pose pair into a spatial pose feature. For the encoding of the spatial relationship description, we utilize the CLIP text encoder, converting it into a 512-dimensional vector. A minimal sketch of this encoder-predictor pair is given below.
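The following PyTorch sketch illustrates the structure just described. The 18-dimensional pose-pair input and the 512-dimensional CLIP text feature come from the paper; the hidden width and the concatenation-based predictor head are our assumptions:

```python
import torch
import torch.nn as nn

class SpatialPoseEncoder(nn.Module):
    """Five-layer MLP mapping a pose pair (2 x 9 values) to a feature."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        layers, dim = [], 18
        for _ in range(4):                      # four hidden layers ...
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers += [nn.Linear(dim, feat_dim)]    # ... plus one output layer
        self.net = nn.Sequential(*layers)

    def forward(self, pose_pair):               # pose_pair: (N, 18)
        return self.net(pose_pair)              # (N, feat_dim)

class SpatialRelationPredictor(nn.Module):
    """Scores how well a CLIP text feature matches a pose-pair feature."""
    def __init__(self, feat_dim=512, hidden=256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, pose_feat, text_feat):
        logit = self.head(torch.cat([pose_feat, text_feat], dim=-1))
        return logit.squeeze(-1)                 # one match logit per pair
```

Consistent with the training process described below, the scalar logit can be trained with torch.nn.BCEWithLogitsLoss against binary match labels for synthetic (pose pair, description) examples.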
Distance Design These encoders serve as the foundation for constructing the OVSG. When performing sub-graph matching, the predictor head estimates the distance between the spatial pose feature and the spatial text feature. We do not use cosine distance because the spatial relationship is highly non-linear; Figure 3 illustrates why cosine distance is not sufficiently discriminative for spatial-language alignment.

Training process We train this encoder and predictor module using supervised learning on synthetically generated data. We manually defined 8 different single spatial relationships, i.e., left, right, in front of, behind, in, on, above, under. From these 8 basic spatial relationships, we generated more than 20 different meaningful combinations, e.g., "on the right side", "at the left front part". Each combination can also have more than one description; in total we collected 90 descriptions. The training loss is a binary cross-entropy loss.

C Robot application

Manipulation To exemplify the utility of OVSG in real-world manipulation scenarios, we devised a complex pick-and-place experiment. In this task, the robot is instructed to select one building block and position it on another. The complexity of the task stems from the multitude of blocks that are identical in both shape and color, necessitating the use of spatial context for differentiation. Each task consists of a picking action and a placing action. We formulated nine distinct tasks for this purpose (please refer to Appendix C.1 for the detailed setup). The effectiveness of the manipulation task was evaluated by comparing the success rates achieved by OVIR-3D and our newly proposed OVSG-L, shown in Table 5. The results demonstrate that OVSG-L significantly enhances object grounding accuracy in manipulation tasks involving a high prevalence of identical objects. This improvement highlights the potential of OVSG-L in complex manipulation scenarios, paving the way for further exploration in the field of robotics.

Table 4: Success rate of object navigation task

| object | shoe | bottle | chair | trash can#1 | trash can#2 | drawer | cloth |
| success rate (%) | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 | 0.0 |

Table 5: Success rate of manipulation task

| Method | scene1 | scene2 | scene3 |
| OVIR-3D (%) | 0.0 | 0.75 | 0.33 |
| OVSG (%) | 0.88 | 0.75 | 0.75 |

Navigation We conducted a system test on a ROSMASTER R2 Ackermann steering robot for an object navigation task; the detailed setup can be found in Appendix C.2. We provided queries for seven different objects within a lab scene, using three slightly different phrasings to specify each object. These queries were then input to OVSG, and the grounded positions of the entities were returned to the robot. We considered the task successful if the robot's final position was within 1 meter of the queried object. The results are presented in Table 4. From the table, it is evident that the proposed method successfully located the majority of user queries. However, one query was not successfully located: "The cloth on a chair in the office." In this case, we found that OVIR-3D incorrectly recognized the cloth as part of a chair, resulting in the failure to locate it.

C.1 Manipulation Experiment Setup

Robot Setup All evaluations were conducted using a Kuka IIWA 14 robot arm equipped with a Robotiq 3-finger adaptive gripper. The arm was augmented with an Intel RealSense D435 camera, which was utilized to capture the depth and color information of the scene in RGB-D format at a resolution of 1280 x 720.
The gripper operated in "Pinch Mode," whereby the two fingers on the same side of the gripper bend inward.

To initiate the process, the robot arm positions the camera above the table, oriented downward. Subsequently, the RGB-D data, along with a query specifying the object to be picked and a target object for placement, were input to the OVSG system. Upon acquiring the bounding box of the query object, the robot gripper was directed to move to the center coordinates of the target box using the ROS interface of the robot arm.

Block building task To evaluate the application of the proposed method in real-world manipulation tasks, we designed a block-building task: pick one building block from a set of building blocks and place it on another building block. The picking block and the placing block are each specified by a different natural language query. The difficulty of this task is that each building block has many identical copies around it, so spatial context must be used to specify the block, and we need to succeed twice in a row to complete a task.

Table 6: Queries for navigation task

| Object | Query Id | Query |
| shoe | 1 | A shoe that is in front of the monitor and in the office. |
| | 2 | A shoe positioned both before the monitor and within the office. |
| | 3 | A shoe rests in front of the monitor. The shoe is inside the office. |
| bottle | 4 | The bottle that is right to Tom's keyboard. |
| | 5 | The bottle to the right of Tom's keyboard. |
| | 6 | The bottle positioned to the right of a keyboard which belongs to Tom. |
| chair#1 | 7 | The chair that is behind the TV. |
| | 8 | The chair is situated behind the TV. |
| | 9 | The chair with a TV in front of it. |
| chair#2 | 10 | The chair that is in front of the car. |
| | 11 | The chair rests before a car. |
| | 12 | The chair with a car behind it. |
| trash can#1 | 13 | The trash can that is behind the refrigerator. |
| | 14 | The trash can which can be found behind the refrigerator. |
| | 15 | The trash can that is situated at the rear of the refrigerator. |
| trash can#2 | 16 | The trash can that is in the lab and under a table. |
| | 17 | The trash can is located in the lab and beneath a table. |
| | 18 | The trash can is situated within the lab, positioned under a table. |
| drawer | 19 | The drawer that is behind a box in the office. |
| | 20 | The drawer is positioned behind a box in the office. |
| | 21 | The drawer is situated at the rear of a box within the office. |
| cloth | 22 | The cloth that is on a chair. |
| | 23 | The cloth is resting on a chair. |
| | 24 | The cloth is positioned atop a chair. |

Figure 4: The left image shows the robot for our navigation task, a ROSMASTER R2 Ackermann steering robot. The right image shows the robot for our manipulation task, a KUKA IIWA 14.

C.2 Navigation Experiment Setup

Robot Setup All evaluations were conducted using a ROSMASTER R2 Ackermann steering robot. For perception, we utilized an Astra Pro Plus depth camera and a YDLidar TG 2D lidar sensor, both mounted directly onto the robot. The robot is equipped with a built-in inertial measurement unit (IMU) and a wheel encoder. The Astra camera provides a video stream at 720p and 30 frames per second, and the lidar operates with a sampling frequency of 2000 Hz and a scanning radius of approximately 30 meters. The overall setup is depicted in Figure 4.

Demonstrations and Execution Prior to the evaluation process, we employed an Intel RealSense D455 camera and ORB-SLAM3 [33] to generate a comprehensive map of the environment. This produced both the RGB-D and pose data, which could subsequently be fed into the open-vocabulary pipeline. For the demonstration of locating with the Open-Vocabulary 3D Scene Graph (OVSG), we developed a 3D-to-2D conversion tool. This tool takes the point cloud from the comprehensive 3D map and converts it into a 2D map by selecting a layer of points at the height of the lidar; a minimal sketch of this step is given below.
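The following is our illustration of that slicing step (not the released tool), assuming the map is an (N, 3) point cloud with z up; the rasterization into an occupancy grid and its cell values are our assumptions:

```python
import numpy as np

def slice_to_occupancy(points, lidar_height, slab=0.05, resolution=0.05):
    """Keep points in a thin slab at the lidar height, then rasterize
    them into a 2D occupancy grid for the ROS navigation stack.

    points: (N, 3) map point cloud; returns (grid, xy_origin)."""
    layer = points[np.abs(points[:, 2] - lidar_height) < slab]
    xy_min = layer[:, :2].min(axis=0)
    cells = np.floor((layer[:, :2] - xy_min) / resolution).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)
    grid[cells[:, 0], cells[:, 1]] = 100   # occupied, OccupancyGrid-style
    return grid, xy_min
```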
The resultant 2D map can then be utilized by the ROSMASTER R2 for navigation. To achieve goal-oriented navigation, we incorporated the Robot Operating System (ROS) navigation stack and integrated it with the Timed Elastic Band (TEB) planner. The initial step involved establishing a pose within the environment. Subsequently, Adaptive Monte Carlo Localization (AMCL) leveraged lidar scans and IMU data to provide a robust estimate of the robot's pose within the map. The move_base node, a key component of the ROS navigation stack, used the converted map and the item's position provided by OVSG and the conversion tool to formulate a global plan targeting the goal position. Concurrently, the TEB local planner consolidated information about the ROSMASTER R2's kinematics and lidar input to generate a short-term trajectory. The result was a locally optimized, time-efficient plan that adhered to the robot's preset velocity and acceleration limits. The plan also included obstacle avoidance capabilities, enabling the robot to identify and circumvent barriers detected by the lidar.

Object navigation task To evaluate the application of OVSG to real-world navigation problems, we propose a language-based object navigation task. We selected seven different objects inside a laboratory. Each object is paired with three different queries; all queries are listed in Table 6.

Table 7: Performance comparison across five different open-vocabulary sets. In each block, the left four columns are Top1 scores and the right four are Top3 scores.

Grounding Success Rate (BB):
| Vocab Set | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| #1 | 35.14 | 23.71 | 42.29 | 47.14 | 53.14 | 40.86 | 55.71 | 57.14 |
| #2 | 35.71 | 27.43 | 45.43 | 51.71 | 58.86 | 46.86 | 62.57 | 62.57 |
| #3 | 32.29 | 26.00 | 43.43 | 48.86 | 58.86 | 47.71 | 60.57 | 62.00 |
| #4 | 33.14 | 20.29 | 42.29 | 44.86 | 55.71 | 42.86 | 57.14 | 56.57 |
| #5 | 42.86 | 29.71 | 52.57 | 57.43 | 62.57 | 52.29 | 65.71 | 65.43 |
| Overall | 35.83 | 25.43 | 45.20 | 50.00 | 57.83 | 46.12 | 60.34 | 60.74 |

IoU (BB):
| Vocab Set | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| #1 | 0.31 | 0.18 | 0.36 | 0.42 | 0.50 | 0.39 | 0.51 | 0.53 |
| #2 | 0.31 | 0.22 | 0.38 | 0.44 | 0.53 | 0.43 | 0.55 | 0.56 |
| #3 | 0.29 | 0.21 | 0.37 | 0.43 | 0.52 | 0.43 | 0.54 | 0.56 |
| #4 | 0.30 | 0.17 | 0.36 | 0.40 | 0.51 | 0.39 | 0.52 | 0.52 |
| #5 | 0.38 | 0.24 | 0.44 | 0.50 | 0.57 | 0.47 | 0.59 | 0.60 |
| Overall | 0.32 | 0.20 | 0.38 | 0.44 | 0.53 | 0.42 | 0.54 | 0.56 |

Table 8: Performance comparison across five different relationship sets. In each block, the left four columns are Top1 scores and the right four are Top3 scores.

Grounding Success Rate (BB):
| Relationship Set | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| #1 | 35.14 | 19.50 | 36.00 | 41.38 | 54.12 | 40.88 | 53.88 | 58.12 |
| #2 | 35.71 | 21.50 | 36.75 | 44.62 | 59.12 | 47.62 | 62.38 | 62.88 |
| #3 | 32.29 | 23.25 | 38.25 | 42.12 | 58.88 | 49.88 | 61.12 | 59.88 |
| #4 | 33.14 | 18.75 | 36.50 | 40.38 | 56.38 | 43.12 | 55.38 | 56.38 |
| #5 | 42.86 | 22.75 | 43.25 | 47.88 | 62.38 | 52.38 | 65.12 | 65.38 |
| Overall | 35.83 | 21.15 | 38.15 | 43.28 | 58.18 | 46.76 | 59.58 | 60.53 |

IoU (BB):
| Relationship Set | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| #1 | 0.30 | 0.17 | 0.31 | 0.36 | 0.50 | 0.38 | 0.48 | 0.53 |
| #2 | 0.30 | 0.19 | 0.32 | 0.39 | 0.54 | 0.43 | 0.55 | 0.56 |
| #3 | 0.29 | 0.20 | 0.33 | 0.38 | 0.53 | 0.45 | 0.54 | 0.54 |
| #4 | 0.30 | 0.16 | 0.30 | 0.36 | 0.52 | 0.40 | 0.50 | 0.52 |
| #5 | 0.38 | 0.20 | 0.37 | 0.42 | 0.57 | 0.48 | 0.58 | 0.60 |
| Overall | 0.32 | 0.18 | 0.33 | 0.38 | 0.53 | 0.43 | 0.53 | 0.55 |

D Open-Vocabulary Analysis
Having presented insights on our system's performance on natural language queries for DOVE-G (as shown in Table 2), we proceed to deepen our investigation into the system's resilience across diverse query sets. To accomplish this, we instead average the results from all scenes for each of the five vocabulary sets (refer to Table 7). By doing so, we aim to provide a robust evaluation of our system's performance across a variety of query structures and word choices, simulating the varied ways in which users may interact with our system. In addition to experimenting with object vocabulary variations (e.g., a 'coffee maker' becoming an 'espresso machine' or 'coffee brewer') and altering the order of entity referencing in the query, we also studied the impact of changing the relationship vocabulary. In this experimental setup, the LLM is not bound to map relationships to a pre-determined set as before; instead, the graph-based query contains a variety of relationship vocabulary. To illustrate, consider the queries "A is to the left back corner of B" and "A is behind and left to B". Previously, these relationships would map to a fixed relation like 'left and behind'. Now, 'front and left' as interpreted by the LLM can vary to 'leftward and ahead', 'northwest direction', or 'towards the front and left', offering a broader range of relationship descriptions. The evaluation results for these query sets are presented in Table 8.

Varying object names Across all evaluated vocabulary sets, OVSG-L demonstrates the highest Top1 and Top3 Grounding Success Rates (BB), outperforming the remaining methods. This pattern also persists for the IoU_BB scores. Notably, OVSG-L's Grounding Success Rates span from 44.86% to 57.43% for Top1 and from 56.57% to 65.43% for Top3. All in all, contextual understanding of the target again proves to improve results, from 35.83% (OVIR-3D) to 50% (OVSG-L) for the Top1 Grounding Success Rate (BB) and from 0.32 to 0.44 for the Top1 IoU_BB.

Varying relationships As shown in Table 8, we observe a noticeable decrease in performance for the methods under the OVSG framework (compared to Table 7). This is likely due to the increased complexity introduced by the varied word choices for edges (relationships) in the sub-graph being matched. Despite this, two of the OVSG methods still outperform OVIR-3D, with OVSG-L delivering the strongest results.

E More on ScanNet

E.1 Synthetic Query Generation for ScanNet

In the ScanNet dataset, each scene comes with ground-truth labels for its segmented instances or objects. We began by calculating the spatial relationships between these ground-truth objects or entities. Subsequently, agents were instantiated into the scene, and abstract relationships were randomly established between the agents and the entities present in the scene. After generating the OVSG for each scene, our next step involved the creation of graph-based queries (refer to the syntax and details in Appendix A) for evaluation purposes. For each of these queries, we randomly selected reference entities from the OVSG that shared a relationship with the target entity. This formed the basis of the synthetic generation of the graph-based queries for the ScanNet dataset.

E.2 Grounding Success Rate (BB)

Figure 5: Performance of OVSG w.r.t. Grounding Success Rate (BB) on ScanNet scenes

In this section, we provide the number of ScanNet scenes that correspond to various success rate thresholds (at 15%, 25%, 50%, and 75%).
We provide four-fold results containing Top1 and Top3 scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 5).

E.3 Grounding Success Rate (3D)

In this section, we provide the success rates for different IoU_3D thresholds (at 0.15, 0.25, 0.5, and 0.75). We provide two-fold results containing scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 6).

Figure 6: Performance of OVSG w.r.t. Grounding Success Rate (3D) on ScanNet queries

F More on DOVE-G

F.1 Grounding Success Rate (BB)

Figure 7: Performance of OVSG w.r.t. Grounding Success Rate (BB) on DOVE-G scenes

In this section, we provide the number of DOVE-G scenes that correspond to various success rate thresholds (at 15%, 25%, 50%, and 75%). We provide four-fold results containing Top1 and Top3 scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 7).

F.2 Grounding Success Rate (3D)

In this section, we provide the success rates for different IoU_3D thresholds (at 0.15, 0.25, 0.5, and 0.75). We provide two-fold results containing scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 8).

Figure 8: Performance of OVSG w.r.t. Grounding Success Rate (3D) on DOVE-G queries

F.3 Performance of the OVSG Framework on Various Scenes in DOVE-G

In Table 9, we present the performance of our OVSG framework on natural language scene queries in DOVE-G.

Table 9: Performance of the OVSG framework on natural language scene queries in DOVE-G. In each block, the left four columns are Top1 scores and the right four are Top3 scores.

Grounding Success Rate (BB):
| Scene | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| Room #1 | 44.4 | 36.8 | 61.2 | 79.6 | 88.4 | 72.8 | 98.4 | 99.6 |
| Kitchenette | 31.0 | 24.0 | 36.0 | 49.0 | 65.0 | 51.0 | 71.0 | 71.0 |
| Bathroom | 44.8 | 23.2 | 57.2 | 58.0 | 59.6 | 41.2 | 66.0 | 69.6 |
| Kitchen | 44.8 | 36.0 | 51.2 | 55.6 | 53.6 | 50.8 | 55.2 | 56.8 |
| Room #2 | 18.0 | 26.0 | 40.0 | 41.6 | 62.0 | 48.0 | 65.2 | 62.8 |
| Computer Lab | 38.0 | 30.8 | 38.0 | 41.6 | 54.0 | 41.2 | 50.4 | 49.2 |
| Room #3 | 32.8 | 16.8 | 40.4 | 40.4 | 47.2 | 34.4 | 47.2 | 47.2 |
| Hallway | 28.0 | 8.4 | 28.4 | 33.2 | 40.0 | 34.4 | 40.0 | 40.0 |
| Overall | 35.2 | 25.2 | 45.4 | 51.1 | 63.6 | 47.1 | 64.8 | 65.7 |

IoU (BB):
| Scene | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) | OVIR-3D | OVSG-J | OVSG-S | OVSG-L (Ours) |
| Room #1 | 0.26 | 0.18 | 0.36 | 0.47 | 0.55 | 0.39 | 0.60 | 0.61 |
| Kitchenette | 0.28 | 0.23 | 0.34 | 0.42 | 0.59 | 0.47 | 0.62 | 0.62 |
| Bathroom | 0.30 | 0.13 | 0.36 | 0.36 | 0.41 | 0.30 | 0.44 | 0.46 |
| Kitchen | 0.35 | 0.28 | 0.38 | 0.42 | 0.47 | 0.42 | 0.46 | 0.48 |
| Room #2 | 0.15 | 0.21 | 0.31 | 0.34 | 0.51 | 0.41 | 0.53 | 0.52 |
| Computer Lab | 0.32 | 0.22 | 0.33 | 0.39 | 0.49 | 0.39 | 0.45 | 0.46 |
| Room #3 | 0.34 | 0.13 | 0.37 | 0.39 | 0.50 | 0.36 | 0.49 | 0.50 |
| Hallway | 0.27 | 0.10 | 0.24 | 0.31 | 0.37 | 0.34 | 0.38 | 0.41 |
| Overall | 0.28 | 0.18 | 0.34 | 0.39 | 0.49 | 0.39 | 0.49 | 0.50 |

F.4 50 Sample Natural Language Queries for Scenes in DOVE-G

In Table 10, we provide a list of 50 sample queries for scenes in DOVE-G.

F.5 More on Scenes in DOVE-G

In Figures 9 and 10, we display eight different scenes included in our DOVE-G dataset.
Table 10 (Query No. | Natural Language Query):
1. Locate the vanity sink, which is positioned to the right side of a door latch.
2. Identify the hand basin that has a face cleanser situated in front of it.
3. Is there a hand basin that has both a facial scrub and hand soap placed in front of it?
4. I'm looking for a wash-hand basin with a facial scrub directly in front of it and a door latch to its right.
5. Locate the shower jet that Nami loves, with a mirrored surface to its right and a hair cleanser in front of it.
6. Can you find the shower sprayer with a face cleanser positioned behind it?
7. Search for the shower sprayer that has a facial scrub behind it and a vanity mirror to its right.
8. Identify the travel suitcase located to the right of a backpack and ahead of a water bottle.
9. Look for a travel suitcase Zoro dislikes, it should be to the right of a book and a water bottle.
10. Can you find a book positioned to the left of a backpack?
11. Nami's preferred book should be positioned in front of a chair, can you find it?
12. Can you identify Zoro's liked book that's situated ahead of a water bottle and another book?
13. Locate a carry-on luggage for me, please. It should have both a water bottle and a rucksack in front.
14. Is there a trolley bag with a water bottle and a backpack up front, and also a desk chair in its rear?
15. Find an ergonomic chair for me, but it has to have a textbook situated to its left.
16. Can you find the workbook that Luffy dislikes and is right of a desk chair?
17. Is there a headrest that has a reference book positioned in front of it?
18. Where's the cushion with a coursebook and an ergonomic chair up front?
19. I'm searching for a backpack with a coursebook on its left.
20. Where's the table fan with a desk chair behind it?
21. Can you find a table fan with a computer chair and a reference book behind it?
22. Where's the reading lamp that's to the left of a travel bag?
23. Can you spot the reading lamp that's to the left of a travel bag and behind a computer chair?
24. Is there a reading lamp that's behind a reference book?
25. Can you find a pedestal fan with a desk chair and a coursebook behind it?
26. Locate the headrest that's disliked by Nami, but Luffy is indifferent to.
27. How about a cushion that Nami dislikes, Luffy is neutral to, and Zoro takes a liking to?
28. Where's the reading lamp that's to the left of a knapsack?
29. Can you spot the reading lamp that's to the left of a knapsack and behind an ergonomic chair?
30. Is there a desk lamp that's behind a workbook?
31. Where's the reading lamp that Nami is fond of?
32. Can you locate a backpack with a table fan to its left?
33. Where's the knapsack with a tower fan on its left and Luffy behind?
34. Can you find a travel bag that has a water bottle on its left?
35. I'm searching for a backpack with a textbook on its left.
36. Where's the pedestal fan with a computer chair behind it?
37. Where is the cup that's nestled to the right of the coffee maker, to the left of the coffee kettle, and in front of the poster?
38. Identify the toy that's to the right of the espresso machine and to the left of the trash can.
39. Where's the doll with a cup positioned behind it?
40. Can you show me the water bottle that Luffy loves and has a coffee cup behind it?
41. Locate the water bottle that's to the left of the checkerboard.
42. I want to know about the coffee cup that Zoro loves, which is also behind the keyboard that Nami is behind.
43. Can you identify the chair that the CPU machine is behind?
44. Locate the chair that Nami likes and is also behind the CPU machine.
45. Can you identify the teacup that Luffy loves and is behind the CPU machine?
46. Is there a computer chair that Zoro doesn't prefer?
47. Identify a coursebook located ahead of a water bottle.
48. Is there a workbook sandwiched between two water sipper bottles?
49. Nami's preferred coursebook should be positioned in front of a desk chair, can you find it?
50. Zoro's liked reading book, is it placed in front of a water beverage bottle?

Table 10: List of 50 sample queries for scenes in DOVE-G

Figure 9: Four of the scenes in DOVE-G

G More on ICL-NUIM

G.1 Grounding Success Rate (BB)

In this section, we provide the number of ICL-NUIM scenes that correspond to various success rate thresholds (at 15%, 25%, 50%, and 75%). We provide four-fold results containing Top1 and Top3 scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 11).

G.2 Grounding Success Rate (3D) in comparison to ConceptFusion

In this section, we provide the success rates for different IoU_3D thresholds (at 0.15, 0.25, 0.5, and 0.75). We provide two-fold results containing scores for the 'Object-only' and 'Whole Query' categories (as shown in Figure 12).

G.3 Scene-by-Scene Grounding Success Rate (3D) of OVSG & ConceptFusion on ICL-NUIM

Table 11 showcases the 3D Grounding Success Rate of the various methods on different scenes in the ICL-NUIM dataset, highlighting the performance metrics across different IoU_3D thresholds.

G.4 Qualitative Performance Comparison between ConceptFusion and OVSG-L

In this section, we provide qualitative results on sample queries for ConceptFusion and OVSG-L in Figures 13 and 14, respectively.

Figure 10: Four of the other scenes in DOVE-G

Figure 11: Performance of OVSG w.r.t. Grounding Success Rate (BB) on ICL-NUIM scenes

Figure 12: Performance of OVSG & ConceptFusion w.r.t. Grounding Success Rate (3D) on ICL-NUIM queries

Table 11: Grounding Success Rate (3D) of OVSG & ConceptFusion on ICL-NUIM

| Scene | # Queries | Method | IoU_3D > 0.15 | > 0.25 | > 0.50 | > 0.75 |
| living_room_traj0_frei_png | 18 | ConceptFusion (w/o rel) | 88.89 | 88.89 | 0 | 0 |
| | | ConceptFusion | 16.67 | 5.56 | 0 | 0 |
| | | OVIR-3D | 83.34 | 83.34 | 50 | 22.23 |
| | | OVSG-J | 5.56 | 5.56 | 5.56 | 0 |
| | | OVSG-S | 94.45 | 94.45 | 61.12 | 22.23 |
| | | OVSG-L (Ours) | 100 | 100 | 66.67 | 22.23 |
| living_room_traj1_frei_png | 34 | ConceptFusion (w/o rel) | 61.77 | 50 | 41.18 | 0 |
| | | ConceptFusion | 26.48 | 26.48 | 14.71 | 0 |
| | | OVIR-3D | 70.59 | 70.59 | 67.65 | 0 |
| | | OVSG-J | 38.24 | 38.24 | 29.42 | 0 |
| | | OVSG-S | 58.83 | 58.83 | 55.89 | 11.77 |
| | | OVSG-L (Ours) | 79.42 | 79.42 | 70.59 | 14.71 |
| living_room_traj2_frei_png | 28 | ConceptFusion (w/o rel) | 46.43 | 14.29 | 0 | 0 |
| | | ConceptFusion | 3.58 | 0 | 0 | 0 |
| | | OVIR-3D | 50 | 50 | 50 | 28.58 |
| | | OVSG-J | 42.86 | 35.72 | 3.58 | 0 |
| | | OVSG-S | 82.15 | 82.15 | 50 | 28.58 |
| | | OVSG-L (Ours) | 92.86 | 92.86 | 53.58 | 28.58 |
| living_room_traj3_frei_png | 17 | ConceptFusion (w/o rel) | 0 | 0 | 0 | 0 |
| | | ConceptFusion | 0 | 0 | 0 | 0 |
| | | OVIR-3D | 11.77 | 11.77 | 0 | 0 |
| | | OVSG-J | 23.53 | 23.53 | 23.53 | 0 |
| | | OVSG-S | 41.18 | 23.53 | 11.77 | 11.77 |
| | | OVSG-L (Ours) | 82.36 | 64.71 | 52.95 | 29.42 |
| office_room_traj0_frei_png | 29 | ConceptFusion (w/o rel) | 0 | 0 | 0 | 0 |
| | | ConceptFusion | 0 | 0 | 0 | 0 |
| | | OVIR-3D | 65.52 | 65.52 | 65.52 | 0 |
| | | OVSG-J | 44.83 | 44.83 | 41.38 | 0 |
| | | OVSG-S | 65.52 | 65.52 | 65.52 | 0 |
| | | OVSG-L (Ours) | 100 | 100 | 96.56 | 0 |
| office_room_traj1_frei_png | 19 | ConceptFusion (w/o rel) | 0 | 0 | 0 | 0 |
| | | ConceptFusion | 0 | 0 | 0 | 0 |
| | | OVIR-3D | 68.43 | 68.43 | 68.43 | 31.58 |
| | | OVSG-J | 42.11 | 42.11 | 42.11 | 21.06 |
| | | OVSG-S | 73.69 | 73.69 | 73.69 | 36.85 |
| | | OVSG-L (Ours) | 100 | 100 | 94.74 | 57.90 |
| office_room_traj2_frei_png | 12 | ConceptFusion (w/o rel) | 0 | 0 | 0 | 0 |
| | | ConceptFusion | 0 | 0 | 0 | 0 |
| | | OVIR-3D | 83.34 | 83.34 | 33.34 | 8.34 |
| | | OVSG-J | 0 | 0 | 0 | 0 |
| | | OVSG-S | 83.34 | 83.34 | 33.34 | 8.34 |
| | | OVSG-L (Ours) | 100 | 100 | 41.67 | 16.67 |
| office_room_traj3_frei_png | 25 | ConceptFusion (w/o rel) | 12 | 0 | 0 | 0 |
| | | ConceptFusion | 0 | 0 | 0 | 0 |
| | | OVIR-3D | 44 | 44 | 44 | 0 |
| | | OVSG-J | 48 | 48 | 28 | 8 |
| | | OVSG-S | 60 | 60 | 60 | 16 |
| | | OVSG-L (Ours) | 100 | 100 | 80 | 32 |
Figure 13: Performance of ConceptFusion on sample ICL-NUIM queries

Figure 14: Performance of OVSG-L (our method) on sample ICL-NUIM queries |
TWgoGdubPN | Enabling Efficient, Reliable Real-World Reinforcement Learning with Approximate Physics-Based Models

Tyler Westenbroek (Oden Institute, University of Texas at Austin, westenbroekt@gmail.com), Jacob Levy (Aerospace Engineering, University of Texas at Austin, jake.levy@utexas.edu), David Fridovich-Keil (Aerospace Engineering, University of Texas at Austin, dfk@utexas.edu)

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

Abstract: We focus on developing efficient and reliable policy optimization strategies for robot learning with real-world data. In recent years, policy gradient methods have emerged as a promising paradigm for training control policies in simulation. However, these approaches often remain too data-inefficient or unreliable to train on real robotic hardware. In this paper we introduce a novel policy gradient-based policy optimization framework which systematically leverages a (possibly highly simplified) first-principles model and enables learning precise control policies with limited amounts of real-world data. Our approach 1) uses the derivatives of the model to produce sample-efficient estimates of the policy gradient and 2) uses the model to design a low-level tracking controller, which is embedded in the policy class. Theoretical analysis provides insight into how the presence of this feedback controller overcomes key limitations of stand-alone policy gradient methods, while hardware experiments with a small car and a quadruped demonstrate that our approach can learn precise control strategies reliably and with only minutes of real-world data. Code is available at https://github.com/CLeARoboticsLab/LearningWithSimpleModels.jl

Figure 1: (Left) Schematic of the proposed policy structure, the crucial element of which is a low-level stabilizing controller which improves the smoothness properties of the underlying problem, improving learning. (Middle) Still frames depicting the approximate paths taken by a car and quadruped during test time. (Overlaid) Top-down view of the car executing two laps around a figure-8 before and after training.

1 Introduction

Reliable, high-performance robot decision making revolves around the robot's ability to learn a control policy which effectively leverages complex real-world dynamics over long time horizons. This presents a challenge, as constructing a highly accurate physics-based model for the system using first principles is often impractical. In recent years, reinforcement learning methods built around policy gradient estimators have emerged as a promising general paradigm for learning an effective policy using data collected from the system. However, in current practice these approaches are often too data-inefficient or unreliable to train with real hardware data, leading many approaches to train in high-fidelity simulation environments [1, 2, 3]. However, there inevitably exists a gap between simulated and physical reality, leaving room to improve policy performance in the real world. In this paper, we demonstrate how to systematically leverage a physics-based model to yield highly efficient and reliable policy optimization techniques capable of learning with real-world data.

Modern techniques for policy learning generally fall into two categories: model-free [4, 5, 6, 7] and model-based [8, 9, 10, 11, 12]. Model-free approaches learn a mapping from states to inputs directly from data. These approaches are fully general and can synthesize high-performance policies, but are extremely data-inefficient.
In contrast, model-based approaches use the collected data to fit a predictive model to estimate how the system will behave at points not contained in the training set. While these approaches are more data-efficient, model inaccuracies introduce bias into policy gradient estimators [13, 14], limiting the performance of the learned policy.

However, due to the unstable nature of many robotic systems, both of these paradigms suffer from a more fundamental challenge: minute changes to the control policy can greatly impact performance over long time horizons. This "exploding gradients" phenomenon [15, 16, 17] leads the variance of policy gradient algorithms to grow exponentially with the time horizon and renders the underlying policy learning problem ill-conditioned, making gradient-based methods slow to converge [18]. Moreover, model bias also compounds rapidly over time, limiting the effectiveness of otherwise efficient model-based approaches [13].

As shown in Fig. 1, this paper systematically exploits an approximate physics-based model and low-level feedback control to overcome these challenges in policy learning. Concretely, the contributions are:

• We introduce a novel framework which uses the approximate model to simultaneously design 1) a policy gradient estimator and 2) low-level tracking controllers which we then embed into the learned policy class. Using the model to construct the gradient estimator removes the need to learn about the real-world dynamics from scratch, while the low-level feedback controller prevents gradient estimation error from "exploding".
• Theoretical analysis and illustrative examples demonstrate how we overcome exponential dependencies in the model bias, variance, and smoothness of policy gradient estimators.
• We validate our theoretical findings with a variety of simulated and physical experiments, ultimately demonstrating our method's data efficiency, run-time performance, and, most importantly, ability to overcome substantial model mismatch. Overall, this paper suggests a new holistic paradigm for rapidly fine-tuning controllers using real-world data.

2 Related Work

Broadly speaking, there are two possible sources of bias when using a model for policy gradient estimation. The first source of error can arise if the model is used to simulate or 'hallucinate' trajectories for the system, which are then added to the data set [13, 19, 20, 21]. While this approach yields a larger training set, it also introduces bias, as the trajectories generated by the model can rapidly diverge from the corresponding real-world trajectories. To overcome this source of error, a number of works [14, 22, 23] have proposed policy gradient estimators which 1) collect real-world trajectories and 2) use the derivatives of a (possibly learned) model to propagate approximate policy gradient information along these trajectories. We adopt this form of estimator in this work, and note strong connections to the updates used in the Iterative Learning Control literature [24].

Evaluating the gradient along real trajectories removes the first source of error. However, inaccuracies in the derivatives of the model lead to a second source of error and, as we demonstrate in Section 5, these errors can grow exponentially over long time horizons for unstable robotic systems. Moreover, prior works have demonstrated that exploding gradients lead to large variance for policy gradient estimators and ill-conditioning in the underlying policy optimization problem [15, 16, 17].
We demonstrate how low-level feedback control can overcome this second source of error, while reducing variance and improving conditioning. While the use of hierarchical control architectures with embedded low-level feedback has been a key ingredient in many sim-to-real reinforcement learning frameworks [25], [26], [1], we argue that the combination of the aforementioned pieces opens the door to a new real-world training paradigm that fully leverages our approximate physics-based models.

3 Problem Formulation

We assume access to a simplified, physics-based model of the environment dynamics of the form:
$$x_{t+1} = \hat{F}(x_t, u_t), \tag{1}$$
where $x_t \in \mathcal{X} \subset \mathbb{R}^n$ is the state, $u_t \in \mathcal{U} \subset \mathbb{R}^m$ is the input, and the (potentially nonlinear) map $\hat{F}\colon \mathcal{X} \times \mathcal{U} \to \mathcal{X}$ determines how the state evolves over discrete time steps $t \in \mathbb{N}$. To make the modelling process and downstream controller synthesis tractable, such models are necessarily built on simplifying assumptions. For example, the model we use to control the RC car in Fig. 1 neglects physical quantities such as drag and motor time-delays. Nonetheless, such models capture the basic structure of how controller inputs affect desired quantities (such as position) over time, and are highly useful for designing effective control architectures.

Although many reinforcement learning frameworks model the environment as a stochastic process, to aid in our analysis we will assume that the real-world dynamics evolve deterministically, according to the (possibly nonlinear) relation:
$$x_{t+1} = F(x_t, u_t). \tag{2}$$

To control the real-world system, we will optimize over a controller architecture of the form $u_t = \pi_t^\theta(x_t)$, where $\pi^\theta = \{\pi_t^\theta\}_{t=0}^{T-1}$ represents the overall policy, $T < \infty$ is the finite horizon for the task we wish to solve, $\theta \in \Theta \subseteq \mathbb{R}^p$ is the policy parameter, and each map $\pi_t^\theta\colon \mathcal{X} \to \mathcal{U}$ is assumed to be differentiable in both $x$ and $\theta$. Thus equipped, we pose the following policy optimization problem:
$$\max_{\theta \in \Theta} J(\theta) := \mathbb{E}_{x_0 \sim \mathcal{D}}[J_T(\theta; x_0)] \quad \text{where} \quad J_T(\theta; x_0) := \sum_{t=0}^{T} R(x_t). \tag{3}$$
Here, $\mathcal{D}$ is the probability density of the initial state $x_0$ and $R$ is the (differentiable) reward function.

4 Approximating the Policy Gradient with an Imprecise Dynamics Model

In this section we demonstrate how to calculate the policy gradient by differentiating the real-world dynamics map $F$ along trajectories generated by the current policy. We then introduce the estimator used in this paper, which replaces the derivatives of $F$ with the derivatives of the first-principles model $\hat{F}$. We will initially focus on the gradient $\nabla J_T(\theta; x_0)$ of the reward experienced when unrolling the policy from a single initial condition $x_0 \in \mathcal{X}$, and then discuss how to approximate the total policy gradient $\nabla J(\theta)$ using a batch estimator. To ease notation, for each $x_0 \in \mathcal{X}$ and $\theta \in \Theta$ we capture the resulting real-world trajectory generated by $\pi^\theta$ via the sequence of maps defined by:
$$\phi_{t+1}^\theta(x_0) = F\big(\phi_t^\theta(x_0),\, \pi_t^\theta(\phi_t^\theta(x_0))\big), \qquad \phi_0^\theta(x_0) = x_0.$$

Structure of the True Policy Gradient: Fix an initial condition $x_0 \in \mathcal{D}$ and policy parameter $\theta \in \Theta$. We let $\{x_t\}_{t=0}^{T}$ and $\{u_t\}_{t=0}^{T-1}$ (with $x_t = \phi_t^\theta(x_0)$ and $u_t = \pi_t^\theta(x_t)$) denote the corresponding sequences of states and inputs generated by the policy $\pi^\theta$. The policy gradient captures how changes to the controller parameters affect the resulting trajectory and the accumulation of future rewards. The following shorthand captures the closed-loop sensitivity of the state and input to changes in the policy parameters:
$$\frac{\partial x_t}{\partial \theta} := \frac{\partial}{\partial \theta}\phi_t^\theta(x_0), \qquad \frac{\partial u_t}{\partial \theta} := \frac{\partial}{\partial \theta}\pi_t^\theta(\phi_t^\theta(x_0)).$$
These terms depend on the derivatives of the dynamics, which we denote with:
$$A_t = \frac{\partial}{\partial x}F(x_t, u_t), \qquad B_t = \frac{\partial}{\partial u}F(x_t, u_t), \qquad K_t = \frac{\partial}{\partial x}\pi_t^\theta(x_t). \tag{4}$$
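In practice, the derivatives in (4), and their model counterparts defined in (6) below, can be obtained by automatic differentiation of the dynamics function; the following minimal sketch (a hypothetical helper, not the authors' implementation, which is written in Julia) instead uses central finite differences:

```python
import numpy as np

def jacobians(F, x, u, eps=1e-5):
    """Return A = dF/dx and B = dF/du at (x, u) via central differences."""
    n, m = x.size, u.size
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (F(x + dx, u) - F(x - dx, u)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (F(x, u + du) - F(x, u - du)) / (2 * eps)
    return A, B
```

Evaluating this helper on the model $\hat{F}$ along a recorded real-world trajectory yields exactly the quantities $\hat{A}_t, \hat{B}_t$ used by the estimator below.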
Proposition 1. The policy gradient is given by the following expression:
$$\nabla J_T(\theta; x_0) = \sum_{t=0}^{T} \nabla R(x_t) \cdot \frac{\partial x_t}{\partial \theta}, \quad \text{where} \tag{5}$$
$$\frac{\partial x_t}{\partial \theta} = \sum_{t'=0}^{t-1} \Phi_{t,t'} B_{t'} \frac{\partial \pi_{t'}^\theta}{\partial \theta}, \qquad \Phi_{t,t'} := \prod_{s=t'+1}^{t-1} A_s^{cl}, \quad \text{and} \quad A_t^{cl} = A_t + B_t K_t.$$

For proof of the result see the supplementary material. The first expression in (5) calculates the gradient in terms of the sensitivities $\partial x_t / \partial \theta$, while the latter expressions demonstrate how to compute this term using the derivatives of the dynamics and policy. In (5), the term $\Phi_{t,t'} B_{t'}$ captures how a perturbation to the policy at time $t'$ and state $x_{t'}$ propagates through the closed-loop dynamics to affect the future state at time $t > t'$. As we investigate below, when the robotic system is unstable these terms can grow exponentially large over long time horizons, leading to the exploding gradients phenomenon and the core algorithmic challenges we seek to overcome.

Approximating the Policy Gradient Using the Model: We approximate the policy gradient $\nabla J_T(\theta; x_0)$ using the approximate physics-based model $\hat{F}$ in (1). Holding $x_0 \in \mathcal{X}$, $\theta \in \Theta$, and the resulting real-world trajectory $\{x_t\}_{t=0}^{T}$, $\{u_t\}_{t=0}^{T-1}$ fixed as above, we denote the derivatives of the model along this trajectory as:
$$\hat{A}_t = \frac{\partial}{\partial x}\hat{F}(x_t, u_t), \qquad \hat{B}_t = \frac{\partial}{\partial u}\hat{F}(x_t, u_t). \tag{6}$$
We can then construct an estimate for $\nabla J_T(\theta; x_0)$ of the form:
$$\widehat{\nabla J_T}(\theta; x_0) = \sum_{t=0}^{T} \nabla R(x_t) \cdot \widehat{\frac{\partial x_t}{\partial \theta}}, \quad \text{where} \tag{7}$$
$$\widehat{\frac{\partial x_t}{\partial \theta}} = \sum_{t'=0}^{t-1} \hat{\Phi}_{t,t'} \hat{B}_{t'} \frac{\partial \pi_{t'}^\theta}{\partial \theta}, \qquad \hat{\Phi}_{t,t'} := \prod_{s=t'+1}^{t-1} \hat{A}_s^{cl}, \quad \text{and} \quad \hat{A}_t^{cl} = \hat{A}_t + \hat{B}_t K_t.$$

Remark 1. Note that this estimator can be evaluated by 1) recording the real-world trajectory which arises when policy $\pi^\theta$ is applied starting from initial state $x_0$, and then 2) using the derivatives of the model $\hat{F}$ to approximate the derivatives of the real-world system along that trajectory. Effectively, the only approximation here is of the form $\Phi_{t,t'} B_{t'} \approx \hat{\Phi}_{t,t'} \hat{B}_{t'}$ when calculating the estimate of the system sensitivity $\partial x_t/\partial \theta \approx \widehat{\partial x_t/\partial \theta}$. In Sections 5 and 6, we study what causes this approximation to break down over long time horizons, and how properly structured feedback controllers can help.

Remark 2. While the policy gradient approximation given by (7) will prove convenient for analysis, this formula requires numerous 'forward passes' to propagate derivatives forward in time along the trajectory. As we demonstrate in the supplementary material, in practice this approximation can be computed more efficiently by 'back-propagating through time'.

Batch Estimation: To approximate the gradient of the overall objective $\nabla J(\theta)$, we draw $N$ initial conditions $\{x_0^i\}_{i=1}^{N}$ independently from the initial state distribution $\mathcal{D}$, compute each approximate gradient $\widehat{\nabla J_T}(\theta; x_0^i)$ as in (7), and finally compute:
$$\nabla J(\theta) \approx \hat{g}_T^N(\theta; \{x_0^i\}_{i=1}^{N}) := \frac{1}{N}\sum_{i=1}^{N} \widehat{\nabla J_T}(\theta; x_0^i). \tag{8}$$
We use this estimator in our overall policy gradient algorithm, which is outlined in Algorithm 1.

5 Exploding Gradients: Key Challenges for Unstable Robotic Systems

We now dig deeper into the structure of the policy gradient and our model-based approximation. We repeatedly appeal to the following scalar linear system to illustrate how key challenges arise:

Running Example: Consider the case with true and modeled dynamics given respectively by:
$$x_{t+1} = F(x_t, u_t) = a x_t + b u_t \quad \text{and} \quad x_{t+1} = \hat{F}(x_t, u_t) = \hat{a} x_t + \hat{b} u_t, \tag{9}$$
where $a, \hat{a}, b, \hat{b} > 0$ and $x_t, u_t \in \mathbb{R}$.

Algorithm 1 Policy Learning with Approximate Physical Models
1: Initialize: time horizon $T \in \mathbb{N}$, number of samples per update $N \in \mathbb{N}$, number of iterations $K \in \mathbb{N}$, step sizes $\{\alpha_k\}$ and initial policy parameters $\theta_1 \in \Theta$
2: for iterations $k = 1, 2, \ldots, K$ do
3:   Sample $N$ initial conditions $\{x_0^i\}_{i=1}^{N} \sim \mathcal{D}^N$
4:   for $i = 1, 2, \ldots, N$ do
5:     Unroll $x^i = \{\phi_t^{\theta_k}(x_0^i)\}_{t=0}^{T}$ on (2) with $\pi^{\theta_k}$
6:   Estimate $\hat{g}_T^N(\theta_k)$ using (8) and trajectories $\{x^i\}_{i=1}^{N}$
7:   Update $\theta_{k+1} = \theta_k + \alpha_k \hat{g}_T^N(\theta_k)$
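To make Algorithm 1 and the estimator (7)-(8) concrete, the following minimal sketch instantiates them for the scalar running example (9), using the open-loop policy $u_t = \theta_t$ analyzed in the text below; all constants are illustrative and the authors' released implementation is in Julia:

```python
import numpy as np

a, b = 1.1, 1.0            # true dynamics (2), unknown to the learner
a_hat, b_hat = 1.05, 0.9   # approximate physics-based model (1)
T, N, K, step = 20, 16, 200, 5e-4

def rollout(theta, x0):
    """Unroll the real system (2) under open-loop inputs theta."""
    xs = [x0]
    for t in range(T):
        xs.append(a * xs[-1] + b * theta[t])
    return np.array(xs)

def grad_estimate(theta, x0):
    """Estimator (7): real rollout, derivatives taken from the model."""
    xs = rollout(theta, x0)
    dx_dtheta = np.zeros((T + 1, T))       # row t holds d x_t / d theta
    for t in range(T):                     # propagate with a_hat, b_hat
        dx_dtheta[t + 1] = a_hat * dx_dtheta[t]
        dx_dtheta[t + 1, t] += b_hat
    return -(xs[:, None] * dx_dtheta).sum(axis=0)   # since dR/dx = -x_t

theta = np.zeros(T)
for k in range(K):                                  # Algorithm 1, lines 2-7
    x0s = np.random.uniform(-1.0, 1.0, size=N)      # line 3: x0 ~ D
    grads = [grad_estimate(theta, x0) for x0 in x0s]
    theta += step * np.mean(grads, axis=0)          # lines 6-7, estimator (8)
```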
Suppose we optimize over policies of the form $u_t = \pi_t^\theta(x_t) = \bar{u}_t$, where $\theta = (\bar{u}_0, \bar{u}_1, \ldots, \bar{u}_{T-1}) \in \mathbb{R}^T$ are the policy parameters. In this case, the policy parameters $\{\bar{u}_t\}_{t=0}^{T-1}$ specify a sequence of open-loop control inputs applied to the system. Retaining the conventions developed above, along every choice of $\{\bar{u}_t\}_{t=0}^{T-1}$ and the resulting trajectory $\{x_t\}_{t=0}^{T}$ we have $A_t = a$, $B_t = b$, $\hat{A}_t = \hat{a}$, $\hat{B}_t = \hat{b}$ and $K_t = 0$, and thus $\Phi_{t,t'} = a^{t-t'-1}$ and $\hat{\Phi}_{t,t'} = \hat{a}^{t-t'-1}$. When $a, \hat{a} > 1$, the system (and model) are passively unstable [27, Chapter 5], and small changes to the policy compound over time, as captured by $\|\Phi_{t,t'}\|$ and $\|\hat{\Phi}_{t,t'}\|$ growing exponentially with the difference $t - t'$, along with the formula for the gradients (5).

5.1 Exploding Model-Bias

Recall that the aforementioned estimator for $\nabla J_T(\theta; x_0)$ only introduces error in the term $\partial x_t/\partial\theta \approx \widehat{\partial x_t/\partial\theta}$, and in particular $\Phi_{t,t'}B_{t'} \approx \hat{\Phi}_{t,t'}\hat{B}_{t'}$, along the resulting trajectory. We seek to understand how the point-wise errors in the derivatives of the model, $\Delta A_t^{cl} := \hat{A}_t^{cl} - A_t^{cl}$ and $\Delta B_t := \hat{B}_t - B_t$, propagate over time. Towards this end we manipulate the following difference:
$$\hat{\Phi}_{t,t'}\hat{B}_{t'} - \Phi_{t,t'}B_{t'} = \Phi_{t,t'}\hat{B}_{t'} + \Delta\Phi_{t,t'}\hat{B}_{t'} - \Phi_{t,t'}B_{t'} = \Phi_{t,t'}\Delta B_{t'} + \Delta\Phi_{t,t'}\hat{B}_{t'} \tag{10}$$
$$= \Phi_{t,t'}\Delta B_{t'} + \sum_{s=t'+1}^{t-1} \Phi_{t,s}\,\Delta A_s^{cl}\,\hat{\Phi}_{s-1,t'}\hat{B}_{t'}.$$
The last equality in (10) provides a clear picture of how inaccuracies in the derivatives of the model are propagated over time. For example, when approximating $\hat{\Phi}_{t,t'}\hat{B}_{t'} \approx \Phi_{t,t'}B_{t'}$, the error $\Delta B_{t'}$ is magnified by $\Phi_{t,t'}$, while the error $\Delta A_{t'+1}^{cl}$ is magnified by $\Phi_{t,t'+1}$.

Running Example: Continuing with the scalar example, in this case we have $\Delta B_t = \hat{b} - b$ and $\Delta A_t^{cl} = \hat{a} - a$. Moreover, using the preceding calculations, we have
$$\hat{\Phi}_{t,t'}\hat{B}_{t'} - \Phi_{t,t'}B_{t'} = a^{t-t'}(\hat{b} - b) + \sum_{s=t'+1}^{t-1} a^{t-s-1}\,\hat{a}^{s-t'-1}\,b\,(\hat{a} - a).$$
Thus, when $a, \hat{a} > 1$ and the system is unstable, the errors in the derivatives of the model are magnified exponentially over long time horizons when computing the sensitivity estimate $\partial x_t/\partial\theta \approx \widehat{\partial x_t/\partial\theta}$, and ultimately the gradient estimate $\nabla J_T(\theta; x_0) \approx \widehat{\nabla J_T}(\theta; x_0)$.

5.2 Exploding Variance

We next illustrate how unstable dynamics can lead our batch estimator $\hat{g}_T^N$ to explode over long time horizons $T$ unless a large number of samples $N$ is used.

Running Example: Consider the case where $R(x_t) = -\frac{1}{2}\|x_t\|_2^2$ and the initial state distribution $\mathcal{D}$ is uniform over the interval $[-1, 1]$. Suppose we apply $\theta = (\bar{u}_1, \ldots, \bar{u}_{T-1}) = (0, \ldots, 0)$, so that no control effort is applied. In this case, for every initial condition $x_0$ the resulting state trajectory is given by $x_t = a^t x_0$, and thus our estimate for the gradient is $\widehat{\nabla J_T}(\theta; x_0) = \sum_{t=0}^{T-1}(a^t x_0)\cdot\sum_{t'=0}^{t-1}\hat{a}^{t-t'}\hat{b}$. Moreover, by inspection we see that the mean of the estimator is $\mathbb{E}[\hat{g}_T^N(\theta; \{x_0^i\}_{i=1}^N)] = 0$, and thus the variance of the estimator is
$$\frac{1}{N}\mathbb{E}\big[\|\hat{g}_T^N - \mathbb{E}\,\hat{g}_T^N\|^2\big] = \frac{1}{N}\mathbb{E}\big[\|\hat{g}_T^N\|^2\big] = \frac{1}{N}\mathbb{E}\Big\|\sum_{t=0}^{T-1}(a^t x_0)\cdot\sum_{t'=0}^{t-1}\hat{a}^{t-t'}\hat{b}\Big\|^2,$$
a quantity which grows exponentially with the horizon $T > 0$.
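A quick numeric check of this effect (our illustration, with illustrative constants) reuses the per-initial-condition estimator from the sketch above and prints how the estimator's second moment grows with the horizon:

```python
import numpy as np

a, a_hat, b_hat = 1.1, 1.05, 0.9
for T in (10, 20, 40):
    # Model-based sensitivity d x_t / d theta_s for the open-loop policy.
    dx = np.array([[a_hat ** (t - s - 1) * b_hat if s < t else 0.0
                    for s in range(T)] for t in range(T + 1)])
    grads = []
    for x0 in np.random.uniform(-1, 1, 1000):
        xs = x0 * a ** np.arange(T + 1)          # rollout with no control
        grads.append(-(xs[:, None] * dx).sum(axis=0))
    # The mean is ~0 by symmetry of D, so the variance is ~ E ||g||^2.
    print(T, np.mean(np.sum(np.square(grads), axis=1)))
```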
5.3 Rapidly Fluctuating Gradients

Let $f\colon \mathbb{R}^q \to \mathbb{R}$ be a potentially non-convex and twice-differentiable objective, such as the ones considered in this paper. In this general setting, well-established results for gradient-based methods characterize the rate of convergence to approximate stationary points of the underlying objective, namely points $z \in \mathbb{R}^q$ such that $\|\nabla f(z)\|^2 < \varepsilon$ for some desired tolerance $\varepsilon > 0$. A key quantity which controls this rate is the smoothness of the underlying objective, typically characterized by assuming the existence of a constant $L > 0$ such that $\|\nabla f(z_1) - \nabla f(z_2)\| < L\|z_1 - z_2\|$ for each $z_1, z_2 \in \mathbb{R}^q$. When the constant $L$ is very large, the gradient can fluctuate rapidly, and small step sizes may be required to maintain the stability of gradient-based methods [18], slowing the rate of convergence of these approaches. Many analyses control these fluctuations using the Hessian of the objective by setting $L := \sup_{z \in \mathbb{R}^q}\|\nabla^2 f(z)\|_{i,2}$, where $\|\cdot\|_{i,2}$ is the induced 2-norm.

Below, our main theoretical results bound the magnitude of $\nabla^2 J(\theta)$, characterizing the smoothness of the underlying policy optimization problem and illustrating the benefits of embedded low-level controllers. We demonstrate how to derive an expression for the Hessian in the Appendix, but provide here a concrete illustration of how it can grow exponentially for unstable systems:

Running Example: Consider the case where the quadratic reward $R(x_t) = -\frac{1}{2}\|x_t\|_2^2$ is applied to our example scalar system. For every initial condition $x_0$ and choice of policy parameters $\theta = (\bar{u}_1, \ldots, \bar{u}_{T-1})$, by inspection we have $x_t = a^t x_0 + \sum_{s=0}^{t} a^{t-s} b \bar{u}_s$, so that the overall objective is concave and given by $J_T(\theta; x_0) = \sum_{t=0}^{T} -\frac{1}{2}\big\|a^t x_0 + \sum_{s=0}^{t} a^{t-s} b \bar{u}_s\big\|^2$. The Hessian of the objective can be calculated directly; in particular, the magnitude of its diagonal entries satisfies $\big|\frac{\partial^2 J_T}{\partial \bar{u}_t^2}\big| = \sum_{s=t+1}^{T} (a^{s-t} b)^2$, which demonstrates that $\|\nabla^2 J_T(\theta; x_0)\|$ grows exponentially with the time horizon. From the discussion above, this implies that policy gradient methods will be very slow to converge to optimal policies.

6 Embedding Low-Level Feedback into the Policy Class

We now demonstrate how we can overcome these pathologies by using the model to design stabilizing low-level feedback controllers which are then embedded into the policy class.

Running Example: Let us again consider the simple scalar system and model we have studied thus far, but now suppose we use the model to design a proportional tracking controller of the form $u_t = k(\bar{x}_t - x_t)$, where $\{\bar{x}_t\}_{t=0}^{T}$ represents a desired trajectory we wish to track and $k > 0$ is the feedback gain. We then embed this controller into the overall policy class by choosing the parameters to be $\theta = (\bar{x}_0, \bar{x}_1, \ldots, \bar{x}_T)$, so that $u_t = \pi_t^\theta(x_t) = k(\bar{x}_t - x_t)$. Here, the parameters of the control policy specify the desired trajectory the low-level controller is tasked with tracking. In this case, along each trajectory of the system we will now have $A_t^{cl} = a - bk$, $\hat{A}_t^{cl} = \hat{a} - \hat{b}k$, $B_t = b$ and $\hat{B}_t = \hat{b}$. If the gain $k > 0$ is chosen such that $|a - bk| < 1$ and $|\hat{a} - \hat{b}k| < 1$, then the transition matrices $\hat{\Phi}_{t,t'} = (\hat{A}^{cl})^{t-t'-1}$ and $\Phi_{t,t'} = (A^{cl})^{t-t'-1}$ both decay exponentially with the difference $t - t'$. Thus, by optimizing through a low-level tracking controller designed with the model, we have reduced the sensitivity of trajectories to changes in the controller parameters.

Remark 3. In practice, we may select a control architecture as in Fig. 1, where our parameters are those of a neural network which corrects a desired trajectory and low-level controller. The natural generalization of the damping behavior displayed by the proportional controller above is that the low-level controller is incrementally stabilizing, which means that for every initial condition $x_0$ and $\theta \in \Theta$ we will have $\|\Phi_{t,t'}\| \leq M\alpha^{t-t'}$ for constants $M > 0$ and $\alpha \in (0, 1)$. There are many systematic techniques for synthesizing incrementally stabilizing controllers using a dynamical model from the control literature [27, 28].
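A two-line numeric check of this damping effect (illustrative values) compares the sensitivity products with and without the embedded proportional controller:

```python
import numpy as np

a, b, k, T = 1.1, 1.0, 0.5, 40
print(a ** (T - 1),                    # |Phi_{T,0}| open loop: ~41
      abs(a - b * k) ** (T - 1))       # with u = k(xbar - x): ~2e-9
```

The same gain also shrinks the factors multiplying the model errors in (10), which is the mechanism behind the bias bounds in Theorem 1.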
We are now ready to state our main result, which demonstrates the benefits of using the model to design the policy gradient estimator and the embedded feedback controller:

Theorem 1. Assume that 1) the first and second partial derivatives of $R_t$, $\pi^{\theta}_t$, $F$ and $\hat{F}$ are bounded, 2) there exists a constant $\Delta > 0$ such that for each $x_0 \in \mathcal{X}$ and $u \in \mathcal{U}$ the error in the model derivatives is bounded by $\max\big\{\|\frac{\partial}{\partial x}F(x,u) - \frac{\partial}{\partial x}\hat{F}(x,u)\|, \|\frac{\partial}{\partial u}F(x,u) - \frac{\partial}{\partial u}\hat{F}(x,u)\|\big\} < \Delta$, and 3) the policy class $\{\pi^{\theta}_t\}_{\theta\in\Theta}$ has been designed such that there exist constants $M, \alpha > 0$ such that for each $x_0 \in \mathcal{X}$, $\theta \in \Theta$, and $t > t'$ we have $\max\{\|\Phi_{t,t'}\|, \|\hat{\Phi}_{t,t'}\|\} < M\alpha^{t-t'}$. Letting $\bar{g}_T(\theta) = \mathbb{E}[\hat{g}^N_T(\theta; \{x^i_0\}_{i=1}^N)]$ denote the mean of our gradient estimator, there exist scalars $C, W, K > 0$ such that the bias and variance of our policy gradient estimator are bounded as follows:
\[
\|\nabla J_T(\theta) - \bar{g}_T(\theta)\| \le \begin{cases} CT^2\alpha^T\Delta & \text{if } \alpha > 1\\ CT^2\Delta & \text{if } \alpha = 1\\ CT\Delta & \text{if } \alpha < 1,\end{cases}
\qquad
\mathbb{E}\|\hat{g}^N_T(\theta) - \bar{g}_T(\theta)\|^2 \le \begin{cases} \frac{WT^4\alpha^{2T}}{N} & \text{if } \alpha > 1\\ \frac{WT^4}{N} & \text{if } \alpha = 1\\ \frac{WT^2}{N} & \text{if } \alpha < 1.\end{cases}
\]
Moreover, the smoothness of the underlying policy optimization problem is characterized via:
\[
\|\nabla^2 J_T(\theta)\|_2 \le \begin{cases} KT^4\alpha^{3T} & \text{if } \alpha > 1\\ KT^4 & \text{if } \alpha = 1\\ KT & \text{if } \alpha < 1.\end{cases}
\]
Proof of the result can be found in the supplementary material. The result formalizes the intuition built with our example: when the system is passively unstable (and we can have $\alpha > 1$), the core algorithmic challenges introduced above can arise. However, embedding an (incrementally stabilizing) low-level tracking controller into the policy class can overcome these pathologies ($\alpha \le 1$). Note that the condition $\max\{\|\Phi_{t,t'}\|, \|\hat{\Phi}_{t,t'}\|\} < M\alpha^{t-t'}$ in the statement of Theorem 1 requires that the stabilizing controller (which has been designed for the model) also stabilizes the real-world system. This is a reasonable condition, as under mild conditions stabilizing controllers are known to be robust to moderate amounts of model uncertainty [27]. However, it is an interesting matter for future work to characterize the amount of model-mismatch our approach can handle without model-bias exploding over long time horizons.

7 Experimental Validation

For each experiment we use a policy structure per Fig. 1, which embeds low-level feedback that aims to stably track reference trajectories; a formal definition of this structure is given in Appendix B.1.

NVIDIA JetRacer: We begin by hardware-testing our approach on an NVIDIA JetRacer 1/10th-scale high-speed car using the following simplified dynamics model:
\[
\begin{bmatrix} x_{t+1}\\ y_{t+1}\\ v_{t+1}\\ \varphi_{t+1}\end{bmatrix} = \begin{bmatrix} x_t + v_t\cos(\varphi_t)\,\Delta t\\ y_t + v_t\sin(\varphi_t)\,\Delta t\\ v_t + a_t\,\Delta t\\ \varphi_t + v_t\omega_t\,\Delta t\end{bmatrix}, \tag{11}
\]
where $\Delta t > 0$ is the discrete time-step, $(x_t, y_t, \varphi_t) \in SE(2)$ are the Cartesian coordinates and heading angle of the car, $v_t > 0$ is the forward velocity of the car in its local frame, and $(a_t, \omega_t) \in \mathcal{U} = [0,1]\times[-1,1]$ are the control inputs, where $a_t$ is the throttle input percentage and $\omega_t$ is the steering position of the wheels. We note that this model makes several important simplifications: (i) drag is significant on the actual car, but is missing from (11); (ii) proper scaling of the control inputs $(a_t, \omega_t)$ has been omitted; (iii) the actual car has a noticeable steering bias, and does not follow a straight line when $\omega_t = 0$; and (iv) physical quantities such as time-delays in the motor are ignored.

The task consists of tracking a figure-8 made up of two circles, 3 meters in diameter, with a nominal lap time of 5.5 s. We implement a backstepping-based tracking controller [27, Ch. 6] for low-level control. As shown in Fig. 1, this controller alone does not ensure accurate tracking, due to inaccuracies in the model used to design it. We train a policy with 2.2 min of real-world data over 8 iterations, each 16.5 s long, and see a clear improvement in tracking performance (Fig. 1).
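For concreteness, the simplified model (11) can be transcribed directly into code. The sketch below is our own rendering (the time-step value is an arbitrary choice; input clipping to $\mathcal{U}$ is made explicit).

```python
import math

DT = 0.05  # discrete time-step Delta t > 0 (illustrative value)

def jetracer_model_step(state, control, dt=DT):
    """One step of the simplified car model (11).

    state   = (x, y, v, phi): planar position, forward speed, heading.
    control = (a, omega): throttle percentage in [0, 1], steering in [-1, 1].
    """
    x, y, v, phi = state
    a = min(max(control[0], 0.0), 1.0)        # clip inputs to U
    omega = min(max(control[1], -1.0), 1.0)
    return (x + v * math.cos(phi) * dt,
            y + v * math.sin(phi) * dt,
            v + a * dt,
            phi + v * omega * dt)
```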
Next, we use a high-fidelity simulator of the car to benchmark our approach against state-of-the-art reinforcement learning algorithms in Figure 2. All methods optimize over the feedback control architecture described above and therefore were trained using the same action space as our approach. We compare to the model-based approaches MBPO [13] and SVG [22] and the model-free approaches SAC [8] and PPO [9]. Each of these approaches learns about the dynamics of the system from scratch; thus, it is unsurprising that our approach converges more rapidly, as it exploits known physics represented by the model. The use of feedback enables us to take this approach and obtain a high-performing controller, even though the model we use is highly inaccurate, overcoming model-bias. Additional details for the benchmark experiment can be found in Appendix B.

Figure 2: (Left) Training curves for different algorithms applied to a high-fidelity simulation model of an RC car. (Right) One lap of the quadruped around the figure-8 task with corrected waypoints from a neural network.

Go1 Quadrupedal Robot: We also replicate the figure-8 tracking experiment on a Unitree Go1 Edu quadrupedal robot to demonstrate the effectiveness of our approach when using a highly simplified model. The Go1 is an 18-degree-of-freedom system which we control in a hierarchical manner. At the lowest level, a joint control module generates individual motor torques to actuate the robot's limbs to desired angles and velocities. At the next layer, a kinematic solver converts desired foot placements to joint angles. A gait generation module determines trajectories of foot placements from high-level linear and angular velocity commands issued to the robot. We provide these high-level commands to the Go1 via Unitree's ROS interface [29], as outputs from a backstepping-based controller that was formulated using the following simplified dynamical model of the system:
\[
\begin{bmatrix} x_{t+1}\\ y_{t+1}\\ \varphi_{t+1}\end{bmatrix} = \begin{bmatrix} x_t + v_t\cos(\varphi_t)\,\Delta t\\ y_t + v_t\sin(\varphi_t)\,\Delta t\\ \varphi_t + \omega_t\,\Delta t\end{bmatrix}, \tag{12}
\]
where $(x_t, y_t)$ are the Cartesian coordinates of the base of the robot on the ground plane and $\varphi_t$ is its heading. The two inputs to the model are the desired forward velocity $v_t$ and the desired turning rate $\omega_t$. Note that this is an extremely simplified model for the system, with a dynamic structure similar to the model for the car used in the previous example. Setting a nominal lap time of 37.7 s, we trained the policy using 5.9 min of real-world data over 7 iterations, each 50.9 s long. Even though we used a highly simplified model for the dynamics, we again see a clear improvement in performance after training (cf. Fig. 2).

8 Limitations

Our approach successfully learns high-performance control policies using limited data acquired on physical systems. A key enabler to this end is the embedding of stabilizing low-level feedback within the policy class and the use of an a priori physics-based model. However, there are several key limitations. First, for situations such as contact-rich manipulation, it may not be clear how to design a controller with the required (incremental) stability property or one that can incorporate necessary perceptual observations. Future work may address these limitations by incorporating techniques for learning stabilizing controllers (e.g., the Lyapunov methods of [30, 31]) or by working with latent state representations learned from vision modules. Additionally, while our method is highly sample-efficient, it does not take advantage of many established techniques from the reinforcement learning literature, such as value function learning and off-policy training, leaving many directions for algorithmic advances.
One particularly interesting direction is to combine the proposed approach with emerging model-based reward shaping techniques [32, 33].

References

[1] Z. Li, X. Cheng, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath. Reinforcement learning for robust parameterized locomotion control of bipedal robots. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 2811–2817. IEEE, 2021.
[2] J. Dao, K. Green, H. Duan, A. Fern, and J. Hurst. Sim-to-real learning for bipedal locomotion under unsensed dynamic loads. In 2022 International Conference on Robotics and Automation (ICRA), pages 10449–10455. IEEE, 2022.
[3] J. Tan, T. Zhang, E. Coumans, A. Iscen, Y. Bai, D. Hafner, S. Bohez, and V. Vanhoucke. Sim-to-real: Learning agile locomotion for quadruped robots. arXiv preprint arXiv:1804.10332, 2018.
[4] A. Nagabandi, I. Clavera, S. Liu, R. S. Fearing, P. Abbeel, S. Levine, and C. Finn. Learning to adapt in dynamic, real-world environments through meta-reinforcement learning. arXiv preprint arXiv:1803.11347, 2018.
[5] A. Nagabandi, G. Kahn, R. S. Fearing, and S. Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 7559–7566. IEEE, 2018.
[6] K. Chua, R. Calandra, R. McAllister, and S. Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models, 2018.
[7] M. Deisenroth and C. E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465–472. Citeseer, 2011.
[8] T. Haarnoja, S. Ha, A. Zhou, J. Tan, G. Tucker, and S. Levine. Learning to walk via deep reinforcement learning. arXiv preprint arXiv:1812.11103, 2018.
[9] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
[10] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, pages 387–395. PMLR, 2014.
[11] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897, 2015.
[12] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[13] M. Janner, J. Fu, M. Zhang, and S. Levine. When to trust your model: Model-based policy optimization. Advances in Neural Information Processing Systems, 32, 2019.
[14] P. Abbeel, M. Quigley, and A. Y. Ng. Using inaccurate models in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 1–8, 2006.
[15] L. Metz, C. D. Freeman, S. S. Schoenholz, and T. Kachman. Gradients are not all you need. arXiv preprint arXiv:2111.05803, 2021.
[16] H. J. Suh, M. Simchowitz, K. Zhang, and R. Tedrake. Do differentiable simulators give better policy gradients? In International Conference on Machine Learning, pages 20668–20696. PMLR, 2022.
[17] P. Parmas, C. E. Rasmussen, J. Peters, and K. Doya. PIPPS: Flexible model-based policy search robust to the curse of chaos. In International Conference on Machine Learning, pages 4065–4074. PMLR, 2018.
[18] D. P. Bertsekas and J. N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627–642, 2000.
[19] R. S. Sutton. Integrated modeling and control based on reinforcement learning and dynamic programming. Advances in Neural Information Processing Systems, 3, 1990.
[20] J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. Advances in Neural Information Processing Systems, 31, 2018.
[21] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In International Conference on Machine Learning, pages 2829–2838. PMLR, 2016.
[22] N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. Advances in Neural Information Processing Systems, 28, 2015.
[23] N. Amann, D. H. Owens, and E. Rogers. Iterative learning control using optimal feedback and feedforward actions. International Journal of Control, 65(2):277–293, 1996.
[24] H.-S. Ahn, Y. Chen, and K. L. Moore. Iterative learning control: Brief survey and categorization. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 37(6):1099–1121, 2007.
[25] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47):eabc5986, 2020.
[26] T. Miki, J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter. Learning robust perceptive locomotion for quadrupedal robots in the wild. Science Robotics, 7(62):eabk2822, 2022.
[27] S. Sastry. Nonlinear Systems: Analysis, Stability, and Control, volume 10. Springer Science & Business Media, 1999.
[28] I. R. Manchester and J.-J. E. Slotine. Control contraction metrics: Convex and intrinsic criteria for nonlinear feedback design. IEEE Transactions on Automatic Control, 62(6):3046–3053, 2017.
[29] U. Robotics. A1. URL https://www.unitree.com/products/a1/.
[30] J. Z. Kolter and G. Manek. Learning stable deep dynamics models. Advances in Neural Information Processing Systems, 32, 2019.
[31] H. Ravanbakhsh and S. Sankaranarayanan. Learning control Lyapunov functions from counterexamples and demonstrations. Autonomous Robots, 43:275–307, 2019.
[32] T. Westenbroek, F. Castañeda, A. Agrawal, S. S. Sastry, and K. Sreenath. Learning min-norm stabilizing control laws for systems with unknown dynamics. In 2020 59th IEEE Conference on Decision and Control (CDC), pages 737–744. IEEE, 2020.
[33] T. Westenbroek, F. Castaneda, A. Agrawal, S. Sastry, and K. Sreenath. Lyapunov design for robust and efficient robotic reinforcement learning. In 6th Annual Conference on Robot Learning, 2022. URL https://openreview.net/forum?id=Ef7xodOrgNW.
[34] K. M. Lynch and F. C. Park. Modern Robotics. Cambridge University Press, 2017.
[35] R. W. Beard. Quadrotor dynamics and control. Brigham Young University, 19(3):46–56, 2008.

A Proofs

This appendix contains proofs of claims that were omitted in the main document and several supportive lemmas. Section A.1 provides the derivation for Proposition 1, Section A.2 states and formally derives the reverse-time representation of the gradient, while Section A.3 builds on this calculation to derive a representation for the second variation, which is subsequently used to bound the Hessian. Finally, Section A.5 contains the auxiliary lemmas.

A.1 Proof of Proposition 1

The expression for $\nabla J_T(\theta; x_0)$ follows directly from the chain rule.
To obtain the expression for $\frac{\partial x_t}{\partial \theta}$ we differentiate the dynamics $x_{t+1} = F(x_t, u_t)$ to yield:
\[
\frac{\partial x_{t+1}}{\partial \theta} = \frac{\partial}{\partial x}F(x_t, u_t)\cdot\frac{\partial x_t}{\partial \theta} + \frac{\partial}{\partial u}F(x_t, u_t)\cdot\frac{\partial u_t}{\partial \theta} = A^{cl}_t\frac{\partial x_t}{\partial \theta} + B_t\frac{\partial \pi^{\theta}_t}{\partial \theta},
\]
where the second equality is obtained by noting that:
\[
\frac{\partial u_t}{\partial \theta} = \frac{\partial \pi^{\theta}_t}{\partial \theta} + \frac{\partial \pi^{\theta}_t}{\partial x}\cdot\frac{\partial x_t}{\partial \theta} = \frac{\partial \pi^{\theta}_t}{\partial \theta} + K_t\cdot\frac{\partial x_t}{\partial \theta}.
\]
The desired expression is then obtained by unrolling the recursion and noting that $\frac{\partial x_0}{\partial \theta} = 0$.

A.2 Efficient Backwards Pass for Policy Gradient Computation

While the form for the policy gradient (5) and our model-based approximation in (7) will prove convenient for analysis, computing the many approximate sensitivity terms $\frac{\partial x_t}{\partial \theta}$ — and in particular the $\Phi_{t,t'}$ terms — is highly complex and requires many forward passes along the trajectory. In practice, we can more efficiently compute the approximate gradient as follows:

Proposition 2. For each $x_0 \in \mathcal{X}$ and $\theta \in \Theta$ the policy gradient can be calculated via:
\[
\nabla J_T(\theta; x_0) = \sum_{t=0}^{T-1}\big(p_{t+1}B_t + \nabla R_t(x_t)\big)\cdot\frac{\partial \pi^{\theta}_t}{\partial \theta}, \tag{13}
\]
where
\[
p_t = p_{t+1}(\hat{A}_t + \hat{B}_t K_t) + \nabla R_t(x_t) \quad\text{and}\quad p_T = \nabla R_T(x_T). \tag{14}
\]
Here, the recursion with the variables $p_t \in \mathbb{R}^{1\times n}$ performs 'back propagation through time' along the real-world trajectory using the derivatives of the model.

Proof. As before, let $\{x_t\}_{t=0}^{T}$ and $\{u_t\}_{t=0}^{T-1}$ denote the state trajectory that results from applying the policy $\pi^{\theta}$ from $x_0$.

Permitting a slight abuse of notation, we can re-write the cost by moving the dynamics constraints into the cost and weighting them with Lagrange multipliers:
\[
J_T(\theta; x_0) = \sum_{t=0}^{T-1} R_t(x_t) + p_{t+1}\big(x_{t+1} - F(x_t, \pi^{\theta}_t(x_t))\big). \tag{15}
\]
Define the Hamiltonian
\[
H_t(x_t, p_{t+1}, \theta) = p_{t+1}F(x_t, \pi^{\theta}_t(x_t)) + R_t(x_t), \tag{16}
\]
and note that we may then re-write the cost as:
\[
J_T(\theta; x_0) = R_T(x_T) + \langle p_T, x_T\rangle + \langle p_0, x_0\rangle + \sum_{t=0}^{T-1}\big(p_t x_t - H_t(x_t, p_{t+1}, \theta)\big). \tag{17}
\]
To reduce clutter below we will frequently omit the arguments from $H_t$, since it is clear that the map is evaluated at $(x_t, p_{t+1}, \theta)$. Let $\delta\theta \in \mathbb{R}^p$ be a variation on the policy parameters and let $\delta x_t = \frac{\partial \phi^{\theta}_t}{\partial \theta}\delta\theta$ denote the corresponding first variation of the state. To first order, the change in the cost corresponding to these variations is:
\[
\delta J|_{\theta}(\delta\theta) = \langle \nabla R_T(x_T) + p_T, \delta x_T\rangle + \sum_{t=0}^{T-1}\langle p_t - \nabla_x H_t, \delta x_t\rangle - \langle \nabla_\theta H_t, \delta\theta\rangle. \tag{18}
\]
To simplify the expression, let us make the following choices for the multipliers:
\[
p_T = \nabla R_T(x_T), \tag{19}
\]
\[
p_t^{\top} = \nabla_x H_t(x_t, p_{t+1}, \theta) = p_{t+1}^{\top}\frac{\partial}{\partial x}F(x, \pi^{\theta}_t(x)) + \nabla R_t(x_t) = p_{t+1}^{\top} A_t + \nabla R_t(x_t), \tag{20–22}
\]
where we have applied the short-hand developed in Section 3 for the particular task. Plugging this choice for the multipliers into (18) causes the $\delta x_t$ terms to vanish and yields:
\[
\delta J|_{\theta}(\delta\theta) = \sum_{t=0}^{T-1}\langle \nabla_\theta H_t, \delta\theta\rangle = \sum_{t=0}^{T-1}\Big\langle p_{t+1}^{\top}\frac{\partial}{\partial u}F(x, \pi^{\theta}_t)\frac{\partial \pi^{\theta}_t}{\partial \theta} + \nabla R_t(x_t)\frac{\partial \pi^{\theta}_t}{\partial \theta}, \delta\theta\Big\rangle = \sum_{t=0}^{T-1}\Big\langle p_{t+1}^{\top} B_t + r_t, \frac{\partial \pi^{\theta}_t}{\partial \theta}\delta\theta\Big\rangle. \tag{23–25}
\]
Since this calculation holds for arbitrary $\delta\theta$, this demonstrates that the gradient of the objective is given by:
\[
\nabla_\theta J_T(\theta, x_0) = \sum_{t=0}^{T-1}\Big\langle p_{t+1}^{\top} B_t + r_t, \frac{\partial \pi^{\theta}_t}{\partial \theta}\Big\rangle. \tag{26}
\]
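A direct implementation of the backward pass in Proposition 2 is sketched below. This is our own rendering, not code from the paper: the callables passed in — the model Jacobians, the feedback Jacobian, the reward gradients, and the policy parameter Jacobian — are assumed to be supplied by the user, and we accumulate only the $p_{t+1}B_t$ term of (13); if the reward carries a direct dependence on the policy output, the corresponding $\nabla R_t$ term must be added.

```python
import numpy as np

def policy_gradient_backward(A_hat, B_hat, K, dR, dR_T, dpi_dtheta):
    """Gradient of J_T via the costate recursion (13)-(14).

    A_hat[t], B_hat[t]: model derivatives (n x n, n x m) along the real trajectory.
    K[t]:               policy state-feedback Jacobian (m x n).
    dR[t], dR_T:        reward gradients as length-n row vectors.
    dpi_dtheta[t]:      Jacobian of pi_t^theta w.r.t. theta (m x p).
    """
    T = len(A_hat)
    p = dR_T.copy()                      # p_T = grad R_T(x_T)
    grad = np.zeros(dpi_dtheta[0].shape[1])
    for t in reversed(range(T)):
        # Accumulate p_{t+1} B_t * dpi/dtheta, per (13); p currently holds p_{t+1}.
        grad += (p @ B_hat[t]) @ dpi_dtheta[t]
        # Costate recursion (14): back-propagation through time with the model.
        p = p @ (A_hat[t] + B_hat[t] @ K[t]) + dR[t]
    return grad
```

The single reverse sweep replaces the many forward sensitivity computations of the $\Phi_{t,t'}$ terms with one $O(T)$ pass, which is the point of the reverse-time representation.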
A.3 Calculating the Second Variation

To calculate the Hessian of the objective we continue the Lagrange multiplier approach discussed above. Now let $\delta^2 x_t$ denote the second-order variation in the state with respect to the perturbation $\delta\theta$. By collecting second-order terms in (17), the attendant second-order variation of the cost is given by:
\[
\delta^2 J|_{\theta}(\delta\theta) = \langle \delta x_T^{\top}\nabla^2 R_T(x_T), \delta x_T\rangle + \langle \nabla R_T(x_T) + p_T, \delta^2 x_T\rangle + \sum_{t=0}^{T-1}\Big(\langle p_t - \nabla_x H_t, \delta^2 x_t\rangle + \langle \delta x_t^{\top}\nabla^2_{xx}H_t, \delta x_t\rangle + 2\langle \delta x_t\nabla^2_{x\theta}H_t, \delta\theta\rangle + \langle \delta\theta^{\top}\nabla^2_{\theta\theta}H_t, \delta\theta\rangle\Big). \tag{27–28}
\]
By using the choice of costate introduced above, this time the second-order state variations $\delta^2 x_t$ vanish from this expression, so that we arrive at:
\[
\delta^2 J|_{\theta}(\delta\theta) = \sum_{t=0}^{T-1}\langle \delta x_t^{\top}\nabla^2_{xx}H_t, \delta x_t\rangle + 2\langle \delta x_t\nabla^2_{x\theta}H_t, \delta\theta\rangle + \langle \delta\theta^{\top}\nabla^2_{\theta\theta}H_t, \delta\theta\rangle. \tag{29}
\]
By unravelling this expression, we observe that:
\[
\nabla^2 J_T(\theta; x_0) = \frac{\partial x_T}{\partial \theta}^{\top}\nabla^2 R_T(x_T)\frac{\partial x_T}{\partial \theta} + \sum_{t=0}^{T-1}\frac{\partial x_t}{\partial \theta}^{\top}\frac{\partial^2}{\partial x^2}H_t(x_t, p_t, \theta)\frac{\partial x_t}{\partial \theta} + 2\sum_{t=0}^{T-1}\frac{\partial x_t}{\partial \theta}^{\top}\frac{\partial^2}{\partial x\,\partial\theta}H_t(x_t, p_{t+1}, \theta) + \sum_{t=0}^{T-1}\frac{\partial^2}{\partial \theta^2}H_t(x_t, p_{t+1}, \theta),
\]
which, for the purposes of our analysis, we note does not depend on second variations of the state.

A.4 Restatement of Main Result and Proof

Theorem 1. Assume that the first and second partial derivatives of $R_t$, $\pi^{\theta}_t$, $F$ and $\hat{F}$ are bounded. Further assume that there exists a constant $\Delta > 0$ such that for each $x_0 \in \mathcal{X}$ and $u \in \mathcal{U}$ the error in the model derivatives is bounded by $\max\big\{\|\frac{\partial}{\partial x}F(x,u) - \frac{\partial}{\partial x}\hat{F}(x,u)\|, \|\frac{\partial}{\partial u}F(x,u) - \frac{\partial}{\partial u}\hat{F}(x,u)\|\big\} < \Delta$. Finally, assume that the policy class has been designed such that there exist constants $M, \alpha > 0$ such that for each $x_0 \in \mathcal{X}$, $\theta \in \Theta$, and $t > t'$ we have $\max\{\|\Phi_{t,t'}\|, \|\hat{\Phi}_{t,t'}\|\} < M\alpha^{t-t'}$. Then we may bound the bias and variance of our policy gradient estimator as follows:
\[
\|\nabla J_T(\theta) - \bar{g}_T(\theta)\| \le \begin{cases} CT^2\alpha^T\Delta & \text{if } \alpha > 1\\ CT^2\Delta & \text{if } \alpha = 1\\ CT\Delta & \text{if } \alpha < 1,\end{cases}
\qquad
\mathbb{E}\|\hat{g}^N_T(\theta) - \bar{g}_T(\theta)\|^2 \le \begin{cases} \frac{WT^4\alpha^{2T}}{N} & \text{if } \alpha > 1\\ \frac{WT^4}{N} & \text{if } \alpha = 1\\ \frac{WT^2}{N} & \text{if } \alpha < 1.\end{cases}
\]
Moreover, the smoothness of the underlying policy optimization problem is characterized via:
\[
\|\nabla^2 J_T(\theta)\|_2 \le \begin{cases} KT^4\alpha^{3T} & \text{if } \alpha > 1\\ KT^4 & \text{if } \alpha = 1\\ KT & \text{if } \alpha < 1.\end{cases}
\]
Proof. We first bound the bias of the gradient:
\[
\|\nabla J_T(\theta) - \bar{g}_T(\theta)\| = \|\mathbb{E}[\nabla J_T(\theta; x_0) - \hat{g}_T(\theta; x_0)]\| \le \mathbb{E}[\|\nabla J_T(\theta; x_0) - \hat{g}_T(\theta; x_0)\|] \le \sup\|\nabla J_T(\theta; x_0) - \hat{g}_T(\theta; x_0)\|,
\]
where the preceding expectations are over $x_0 \sim \mathcal{D}$. The desired bound on the bias directly follows by applying the bound on gradient errors from Lemma 2 below.

Next, to bound the variance, note that:
\[
\mathbb{E}[\|\hat{g}^N_T(\theta) - \bar{g}_T(\theta)\|^2] = \frac{1}{N^2}\sum_{i=1}^{N}\mathbb{E}[\|\hat{g}_T(\theta; x_0) - \bar{g}_T(\theta)\|^2] \le \frac{1}{N}\sup\|\hat{g}_T(\theta; x_0) - \bar{g}_T(\theta)\|^2 \le \frac{4}{N}\sup\|\hat{g}_T(\theta; x_0)\|^2,
\]
where the first expectation is over $(x^i_0)_{i=1}^{N} \sim \mathcal{D}^N$ and the second is with respect to $x_0 \sim \mathcal{D}$. The desired bound on the variance follows via a direct application of Lemma 1, which provides a uniform upper bound on the gradient estimates.

Similarly, we have:
\[
\|\nabla^2 J_T(\theta)\| \le \mathbb{E}_{x_0\sim\mathcal{D}}[\|\nabla^2 J_T(\theta; x_0)\|] \le \sup_{x_0\in\mathcal{D}}\|\nabla^2 J_T(\theta; x_0)\|.
\]
The desired bound follows from Lemma 3, which uniformly bounds the task-specific Hessians.

A.5 Supportive Lemmas

Lemma 1. Let the assumptions of Theorem 1 hold. Then there exists $\beta > 0$ independent of the parameters $T \in \mathbb{N}$, $M$ and $\alpha \in \mathbb{R}$ such that for each $x_0 \in \mathcal{D}$ and $\theta \in \Theta$ we have:
\[
\|\nabla_\theta J_T(\theta; x_0)\| \le \begin{cases} \beta T^2\alpha^T & \text{if } \alpha > 1\\ \beta T^2 & \text{if } \alpha = 1\\ \beta T & \text{if } \alpha < 1.\end{cases}
\]
Proof. Let the constant $L > 0$ be large enough so that it upper-bounds the norm of the first and second partial derivatives of $R_t$, $\pi^{\theta}_t$, $F$ and $\hat{F}$. Fix a specific task $x_0$ and set of policy parameters $\theta$ and let $A_t$, $B_t$, $K_t$ be defined along the corresponding trajectory as usual.

Recall from Section 3 that
\[
\nabla J_T(\theta; x_0) = \sum_{t=0}^{T-1}\big(p_{t+1}B_t + \nabla R(x_t)\big)\cdot\frac{\partial \pi^{\theta}_t}{\partial \theta},
\]
where the co-state $p_t \in \mathbb{R}^{1\times n}$ is given by
\[
p_t = \sum_{s=t+1}^{T-1}\nabla R(x_s)\cdot\Phi_{s,t},
\]
by inspection. Thus, we may upper-bound the growth of the co-state as follows:
\[
\|p_t\| \le LM\alpha^{T-t} + \sum_{s=t+1}^{T-1}(L + L^2)M\alpha^{s-t}. \tag{30}
\]
By carrying out the summation, we observe that there exists $C_1 > 0$ sufficiently large such that
\[
\|p_t\| \le \begin{cases} C_1 T\alpha^T & \text{if } \alpha > 1\\ C_1 T & \text{if } \alpha = 1\\ C_1 & \text{if } \alpha < 1,\end{cases} \tag{31}
\]
where we have used the fact that $\sum_{s=t+1}^{T-1}M\alpha^{s-t} < M\frac{1}{1-\alpha}$ for the third case. We can bound the overall gradient as follows:
\[
\|\nabla J_T(\theta; x_0)\| \le \sum_{t=0}^{T-1}L\big(L\|p_{t+1}\| + L\big), \tag{32}
\]
which, when combined with the bound on the costate above, demonstrates the desired result for some constant $\beta > 0$ sufficiently large to cover all choices of $x_0$.

Lemma 2. Let the assumptions of Theorem 1 hold. Then there exists $C > 0$ independent of $T \in \mathbb{N}$, $M$, $\Delta_A, \Delta_B > 0$ and $\alpha > 0$ such that for each $x_0 \in \mathcal{D}$ and $\theta \in \Theta$ we have:
\[
\|\nabla_\theta J_T(\theta; x_0) - \hat{g}_T(\theta; x_0)\| \le \begin{cases} CT^3\alpha^T\Delta & \text{if } \alpha > 1\\ CT^3\Delta & \text{if } \alpha = 1\\ CT^2\Delta & \text{if } \alpha < 1,\end{cases}
\]
where $\Delta = \min\{\Delta_A, \Delta_B\}$.

Proof. Let the constant $L > 0$ be large enough so that it upper-bounds the norm of the first and second partial derivatives of $R_t$, $\pi^{\theta}_t$, $F$ and $\hat{F}$.
Fix a specific task $x_0$ and set of policy parameters $\theta$ and let $A_t$, $B_t$, $K_t$ be defined as usual.

Using equations (7) and (10) we obtain:
\[
\|\nabla J_T(\theta; x_0) - \hat{g}_T(\theta, x_0)\| = \Big\|\sum_{t=1}^{T}\nabla R(x_t)\cdot\sum_{t'=0}^{t}\big(\Phi_{t,t'}B_{t'} - \hat{\Phi}_{t,t'}\hat{B}_{t'}\big)\Big\| \le \sum_{t=1}^{T}\|\nabla R(x_t)\|\cdot\sum_{t'=0}^{t}\Big\|\Phi_{t,t'}\Delta B_{t'} + \sum_{s=t'+1}^{t-1}\Phi_{t,s}\Delta A^{cl}_s\hat{\Phi}_{s-1,t'}\hat{B}_{t'}\Big\| \le \sum_{t=1}^{T}L\sum_{t'=0}^{t}\Big(M\alpha^{t-t'}\Delta + \sum_{s=t'+1}^{t-1}M\alpha^{t-s}\Delta M\alpha^{s-t'}L\Big).
\]
Note that the preceding analysis holds for any choice of $\theta$ and $x_0$. Thus, noting that
\[
\sum_{s=t'+1}^{t-1}M\alpha^{t-s}\Delta M\alpha^{s-t'} < M^2\frac{1}{1-\alpha}\Delta
\]
in the case where $\alpha < 1$, and leveraging the preceding inequality, we can conclude that there exists $C > 0$ sufficiently large such that for each $\theta$ and $x_0$ we have:
\[
\|\nabla_\theta J_T(\theta; x_0) - \hat{g}_T(\theta; x_0)\| \le \begin{cases} CT^3\alpha^T\Delta & \text{if } \alpha > 1\\ CT^3\Delta & \text{if } \alpha = 1\\ CT^2\Delta & \text{if } \alpha < 1,\end{cases}
\]
which demonstrates the desired result.

Lemma 3. Let the assumptions of Theorem 1 hold. Then there exists $K > 0$ independent of $T \in \mathbb{N}$, $M$ and $\alpha \in \mathbb{R}$ such that for each $x_0 \in \mathcal{D}$ and $\theta \in \Theta$ we have:
\[
\|\nabla^2_\theta J_T(\theta; x_0)\| \le \begin{cases} KT^4\alpha^{3T} & \text{if } \alpha > 1\\ KT^4 & \text{if } \alpha = 1\\ KT & \text{if } \alpha < 1.\end{cases}
\]
Proof. Let the constant $L > 0$ be large enough so that it upper-bounds the norm of the first and second partial derivatives of $R_t$, $\pi^{\theta}_t$, $F$ and $\hat{F}$. Fix a specific $x_0$ and set of policy parameters $\theta$. Recall that the Hessian can be calculated as follows:
\[
\nabla^2 J_T(\theta; x_0) = \frac{\partial x_T}{\partial \theta}^{\top}\nabla^2 R_T(x_T)\frac{\partial x_T}{\partial \theta} + \sum_{t=0}^{T-1}\frac{\partial x_t}{\partial \theta}^{\top}\frac{\partial^2}{\partial x^2}H_t(x_t, p_t, \theta)\frac{\partial x_t}{\partial \theta} + 2\sum_{t=0}^{T-1}\frac{\partial x_t}{\partial \theta}^{\top}\frac{\partial^2}{\partial x\,\partial\theta}H_t(x_t, p_{t+1}, \theta) + \sum_{t=0}^{T-1}\frac{\partial^2}{\partial \theta^2}H_t(x_t, p_{t+1}, \theta).
\]
Using the assumptions of the theorem, we observe that there exists a constant $C_1 > 0$ sufficiently large such that
\[
\max\Big\{\Big\|\frac{\partial^2}{\partial x^2}H_t\Big\|, \Big\|\frac{\partial^2}{\partial x\,\partial\theta}H_t\Big\|, \Big\|\frac{\partial^2}{\partial \theta^2}H_t\Big\|\Big\} \le C_1(\|p_{t+1}\| + 1) \tag{33}
\]
and
\[
\|\nabla^2 J_T(\theta; x_0)\| \le L\Big\|\frac{\partial x_T}{\partial \theta}\Big\|^2 + \sum_{t=0}^{T-1}C_1(\|p_{t+1}\| + 1)\Big[\Big\|\frac{\partial x_t}{\partial \theta}\Big\|^2 + \Big\|\frac{\partial x_t}{\partial \theta}\Big\| + 1\Big] \tag{34}
\]
holds for all choices of $x_0$ and $\theta$. Using our preceding analysis, we can bound the derivative of the state trajectory as follows:
\[
\Big\|\frac{\partial x_t}{\partial \theta}\Big\| = \Big\|\sum_{t'=0}^{t-1}\Phi_{t,t'}B_{t'}\frac{\partial \pi^{\theta}_{t'}}{\partial \theta}\Big\| \le \sum_{t'=0}^{t-1}L^2 M\alpha^{t-t'}.
\]
This demonstrates that there exists $C_2 > 0$ sufficiently large such that:
\[
\Big\|\frac{\partial x_t}{\partial \theta}\Big\| \le \begin{cases} C_2 T\alpha^T & \text{if } \alpha > 1\\ C_2 T & \text{if } \alpha = 1\\ C_2 & \text{if } \alpha < 1,\end{cases} \tag{35}
\]
where in the case where $\alpha < 1$ we have used the fact that $\sum_{t'=0}^{t-1}M\alpha^{t-t'} < M\frac{1}{1-\alpha}$. Combining the previous bounds (33), (31) and (34) then demonstrates the desired result.

B Additional Experiments and Details

Here, we provide additional simulation experiments and details for experiments presented in the main paper.

B.1 Policy Structure for Experiments

Per Section 6, for each experiment we construct our policy (Fig. 1) around a low-level controller $\mu: \mathcal{X}\times\mathcal{X}\times\mathbb{R}^k \to \mathcal{U}$, which produces control inputs $u_t = \mu(x_t, x^{des}_t, G_t)$ to stably track reference trajectories $\{x^{des}_t\}_{t=0}^{T}$ with controller gains $G_t \in \mathbb{R}^k$. A task $\psi: \mathbb{R} \to \mathcal{X}$ produces the reference trajectory $x^{des}_t = \psi(t)$; however, tracking is poor due to mismatch in dynamic modeling when forming the controller. To this end, a neural network $NN_\theta: \mathbb{R}^p\times\mathcal{X} \to \mathbb{R}^k\times\mathbb{R}^n$ with parameters $\theta \in \Theta$ generates corrections to the controller gains and to the reference trajectory, $(\Delta G_t, \Delta x^{des}_t) = (NN^1_\theta(\xi(t), x_t), NN^2_\theta(\xi(t), x_t)) = NN_\theta(\xi(t), x_t)$, where $\xi: \mathbb{R} \to \mathbb{R}^p$ encodes task objectives. Our ultimate policy class is of the form $\pi_\theta(t, x_t; \bar{G}) = \mu\big(x_t, \psi(t) + NN^2_\theta(\xi(t), x_t), \bar{G} + NN^1_\theta(\xi(t), x_t)\big)$, where $\bar{G}$ is a nominal set of feedback gains. Unless otherwise specified, the neural network is a $64\times64$ multilayer perceptron with $\tanh(\cdot)$ activations. For each of the benchmarks in Appendices B.4 to B.6, all methods use the same low-level feedback controller (as described in their respective sections) and the policy structure described in this section. Therefore, all methods were trained using the same action space.
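In code, this policy class amounts to a thin wrapper around the low-level controller. The sketch below is our own illustration: mu, psi, xi, and nn are placeholders for whatever controller, task trajectory, task encoding, and correction network a given experiment uses.

```python
class TrackingPolicy:
    """pi_theta(t, x; G_bar) = mu(x, psi(t) + NN2, G_bar + NN1), per Appendix B.1.

    mu(x, x_des, G): low-level tracking controller returning u in U.
    psi(t):          nominal reference trajectory for the task.
    xi(t):           task-objective encoding fed to the network.
    nn(xi_t, x):     network returning corrections (dG, dx_des) = (NN1, NN2).
    """
    def __init__(self, mu, psi, xi, nn, G_bar):
        self.mu, self.psi, self.xi, self.nn, self.G_bar = mu, psi, xi, nn, G_bar

    def __call__(self, t, x):
        dG, dx_des = self.nn(self.xi(t), x)        # corrections from NN_theta
        x_des = self.psi(t) + dx_des               # corrected reference point
        return self.mu(x, x_des, self.G_bar + dG)  # track with corrected gains
```

Because only nn carries trainable parameters, the policy gradient flows through the corrections while the embedded controller mu supplies the incremental stability that Theorem 1 rewards.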
B.2 The Benefit of Low-Level Feedback

In this experiment, we compare the policy class of Fig. 1 against a policy class in which a neural network directly determines open-loop control inputs (as in Section 5, omitting a low-level stabilizing controller). We use the double pendulum model from [34], and the task requires moving the end effector to a desired location, using a reward function based on Euclidean distance. The following parameters were used for training: Random seeds: 64, Episodes: 50, Episode length: 300, Policy calls per episode: 30, Epochs: 50, Batch size: 5. First experiment: We provide the true dynamics to both approaches to observe the variance and conditioning, independent of model-mismatch. Training curves for the best learning rate for each approach are depicted in Fig. 3a, which supports our main theoretical findings. Second experiment: Next we feed Algorithm 1 an approximate model that contains pendulum masses and arm lengths that are 70% and 90% of the actual values, respectively. As shown in Fig. 3b, the unstable dynamics lead to significant model bias, which limited the asymptotic performance of the naive policy without an embedded feedback controller.

Figure 3: Training curves for the double pendulum experiment, (a) without and (b) with model mismatch. Embedding low-level feedback results in better performance both with and without model mismatch.

B.3 NVIDIA JetRacer Actor Network Outputs

We now examine neural network outputs during a single execution of the figure-eight task for the NVIDIA JetRacer hardware experiment, depicted in Fig. 4. We see that the neural network issues corrections on the outside of the track, which is reasonable considering the untrained car was tracking the inside of the track. We note the following controller gain adjustments from the neural network: (i) an overall negative value selected for the feedforward steering gain $\Delta K_\omega$ counteracts the car's inherent steering bias in the positive steering direction; (ii) lower values of the forward velocity gain $\Delta K_v$ were selected when crossing the origin, allowing the car to more closely track at this critical point; and (iii) elevated values of $\Delta K_v$ are selected to speed up the car for the rest of the track, increasing reward.

Figure 4: One lap of the car around the figure-8 task before/after training and neural network outputs.

B.4 Parameters for Simulated Car Benchmarks

Here, we report the details for the simulated car benchmark reported in Figure 2 in the main document. For each algorithm the episode length is 300 steps of the environment, and the simulation step was 0.1 seconds. For each method, a total of 10 random seeds was run and the actor network was a $64\times64$ feedforward network defining the splines tracked by the low-level controller. Further details for each tested method are:

Our Method: Learning rate: 0.1; Episodes per iteration: 5.
MBPO: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Models in ensemble: 5; Learning rate: $1\times10^{-3}$; Episodes per iteration: 10; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SVG: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Learning rate: $2.5\times10^{-3}$; Episodes per iteration: 20; Critic network: 2-layer feedforward tanh network $(256\times256)$.
PPO: Episodes per iteration: 25; Learning rate: $1\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SAC: Episodes per iteration: 5; Learning rate: $1\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.
B.5 Cartpole Simulation Benchmark

In this benchmark, presented in Figure 5, we attempt to track a desired end effector position for the classic simulated cart-pole environment. In particular, we use a linearizing controller [27] to approximately track desired positions for the end effector. For each algorithm the episode length was 100, and the simulation step was 0.1 seconds. For each method, a total of 10 random seeds were run and the actor network was a $64\times64$ feed-forward tanh network defining desired spline parameters for the desired trajectory tracked by the low-level controller.

Our Method: Learning rate: 0.05; Episodes per iteration: 5; Simplified model: constructed by decreasing the mass and friction parameters of the true model by 50 percent.
MBPO: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Models in ensemble: 5; Learning rate: $2.5\times10^{-3}$; Episodes per iteration: 10; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SVG: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Learning rate: $1\times10^{-3}$; Episodes per iteration: 20; Critic network: 2-layer feedforward tanh network $(256\times256)$.
PPO: Episodes per iteration: 25; Learning rate: $4\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SAC: Episodes per iteration: 5; Learning rate: $5\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.

Figure 5: Training curves for different algorithms applied to the cart-pole environment.

B.6 Quadrotor Benchmark

Next we conduct a simulation benchmark using the quadrotor dynamics model from [35] and present the results in Figure 6. The simulator timestep is 0.1 s and each episode is 400 timesteps. The task is to follow a figure-8 pattern in the air. A total of 10 random seeds were run for each method, and for each algorithm the actor network was a $128\times128$ feed-forward tanh network, which specified the splines and feedback gains for the tracking controller from [35].

Our Method: Learning rate: 0.1; Episodes per iteration: 10; Simplified model: constructed by decreasing the mass and friction parameters of the true model by 50 percent.
MBPO: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Models in ensemble: 5; Learning rate: $2.5\times10^{-4}$; Episodes per iteration: 10; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SVG: Dynamics model: 2-layer feedforward tanh network $(256\times256)$; Learning rate: $1\times10^{-4}$; Episodes per iteration: 20; Critic network: 2-layer feedforward tanh network $(256\times256)$.
PPO: Episodes per iteration: 25; Learning rate: $1\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.
SAC: Episodes per iteration: 5; Learning rate: $3\times10^{-5}$; Critic network: 2-layer feedforward tanh network $(256\times256)$.

Figure 6: Training curves for different algorithms applied to the quadrotor environment.

B.7 Dynamics Mismatch Study – Simulated Car

In simulation for the car experiment, we study the performance of our approach as model accuracy degrades. We use the following actual dynamics model:
\[
\begin{bmatrix} x_{t+1}\\ y_{t+1}\\ v_{t+1}\\ \varphi_{t+1}\end{bmatrix} = \begin{bmatrix} x_t + v_t\cos(\varphi_t)\,\Delta t\\ y_t + v_t\sin(\varphi_t)\,\Delta t\\ v_t + (\beta_a a_t - c_v v_t^2)\,\Delta t\\ \varphi_t + (\beta_\omega\omega_t - b_\omega)v_t\,\Delta t\end{bmatrix}, \tag{36}
\]
where $\beta_a$ and $\beta_\omega$ represent control input scaling for acceleration and turn rate, respectively; $c_v$ is the coefficient of drag; and $b_\omega$ represents bias in the car's steering. The set $A := \{\beta_a, \beta_\omega, c_v, b_\omega\}$ parameterizes the actual dynamics of the car. We define a mismatch coefficient $\gamma$, which scales these parameters to cause mismatch between the actual model and the model used for training. That is, we use the set $B := \{\gamma\beta_a, \gamma\beta_\omega, \gamma c_v, \gamma b_\omega\}$ with Eq. (36) to form our approximate dynamics model $\hat{F}$:
\[
\begin{bmatrix} x_{t+1}\\ y_{t+1}\\ v_{t+1}\\ \varphi_{t+1}\end{bmatrix} = \begin{bmatrix} x_t + v_t\cos(\varphi_t)\,\Delta t\\ y_t + v_t\sin(\varphi_t)\,\Delta t\\ v_t + (\gamma\beta_a a_t - \gamma c_v v_t^2)\,\Delta t\\ \varphi_t + (\gamma\beta_\omega\omega_t - \gamma b_\omega)v_t\,\Delta t\end{bmatrix}. \tag{37}
\]
Note that if $\gamma = 1$, then the two models match exactly.
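The mismatch construction is easy to reproduce from (36)–(37). The sketch below is our own code, with illustrative parameter values for the set A; it builds both the actual model F and the training model F_hat from a single step function by scaling the parameters with gamma.

```python
import math

def car_step(state, control, params, dt=0.1):
    """One step of model (36); params = (beta_a, beta_w, c_v, b_w)."""
    beta_a, beta_w, c_v, b_w = params
    x, y, v, phi = state
    a, w = control
    return (x + v * math.cos(phi) * dt,
            y + v * math.sin(phi) * dt,
            v + (beta_a * a - c_v * v**2) * dt,
            phi + (beta_w * w - b_w) * v * dt)

A = (1.0, 1.0, 0.1, 0.05)        # actual parameters (illustrative values)
gamma = 0.7                       # mismatch coefficient
B = tuple(gamma * p for p in A)   # parameters of the approximate model (37)

F = lambda s, u: car_step(s, u, A)       # actual dynamics
F_hat = lambda s, u: car_step(s, u, B)   # training model; gamma = 1 recovers F
```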
Training Details: We perform training with various values of $\gamma$, over 10 random seeds, each with 15 training iterations, and present the results in Fig. 7 and Fig. 8. We find that, even in cases of large model mismatch, a policy is learned which improves the performance of the car.

Figure 7: Training curves for the simulated car experiment for varying degrees of model mismatch.

Figure 8: One lap of the car around the figure-8 task where training was performed with varying degrees of model mismatch.
D0X97ODIYK | Structural Concept Learning via Graph Attention for Multi-Level Rearrangement Planning

Manav Kulshrestha, Ahmed H. Qureshi
Department of Computer Science, Purdue University
West Lafayette, IN 47907, United States
{mkulshre, ahqureshi}@purdue.edu

Abstract: Robotic manipulation tasks, such as object rearrangement, play a crucial role in enabling robots to interact with complex and arbitrary environments. Existing work focuses primarily on single-level rearrangement planning and, even if multiple levels exist, dependency relations among substructures are geometrically simpler, like tower stacking. We propose Structural Concept Learning (SCL), a deep learning approach that leverages graph attention networks to perform multi-level object rearrangement planning for scenes with structural dependency hierarchies. It is trained on a self-generated simulation data set with intuitive structures, works for unseen scenes with an arbitrary number of objects and higher complexity of structures, infers independent substructures to allow for task parallelization over multiple manipulators, and generalizes to the real world. We compare our method with a range of classical and model-based baselines to show that our method leverages its scene understanding to achieve better performance, flexibility, and efficiency. The dataset, demonstration videos, supplementary details, and code implementation are available at: https://manavkulshrestha.github.io/scl

Keywords: Rearrangement Planning, Robot Manipulation, Graph Attention

Figure 1: Our approach performing progressive pick-and-place (we only show some place steps) actions based on its multi-level rearrangement plan to achieve (middle left) a target arrangement based on a given goal (top left). A complete figure is available in the Appendix.

1 Introduction

Robots operating in the real world will often encounter a variety of simple objects in more complex arrangements and structures. To that end, recent years have seen rearrangement planning – which involves the reorganization of objects in a given environment to bring them into a goal state – emerging as a prominent area of research within robotics [1]. Simpler applications of this problem include tasks such as setting the table, rearranging furniture, loading a dishwasher, and many more. While manipulation for many real-world tasks performed by robots often reduces to pick-and-place, achieving more structured goals requires compositional interpretation of the target and the execution of long-horizon hierarchical plans. Autonomously solving more complex rearrangement problems, such as constructing a house or mechanical assembly, requires robots to interpret internal dependence among the different parts of the whole structure and plan accordingly.

Despite the importance of these problems, most solutions addressing rearrangement planning largely consider target configurations with simpler dependencies such as blocking obstacles [2, 3, 4] or stacking objects in towers or very simple structures [5, 6, 7, 8], or are formulated as bin placement where the supporting object is stationary [6, 9]; all of which are strict subsets of a more general construction task.
One of the major hurdles for this task is to decipher the order in which objects need to be placed so as to properly construct a given target structure, since some objects depend on others in that they geometrically support them.

In this paper, we explore a toy scenario of the construction problem with known object primitives which are to be arranged to create some target structure. Our approach takes a multi-view RGB-D observation of the target structure and constructs a dependency graph using graph attention networks. This graph captures the geometrical dependency among the different objects, identifying independent substructures for parallelization. Our structure planner then takes this graph and serializes task executions based on the current observations of the scene. The low-level controller further takes the task sequences and executes the tasks via robot control. Our results show better performance compared to other classical and model-based planners as well as generalizability to unseen structures of varying complexity. The main contributions of our approach are as follows:

• A generated data set and generation procedure of target scenes containing intuitive structures with diverse complexity due to the possibility of a variable number of objects, the inclusion of structures with varying levels, and the possibility of independent substructures.
• A scalable dependency graph generator that generalizes to structures with multiple levels and varying numbers of objects.
• A structured planner which uses the dependency graph to create a sequential plan for parallelized multi-level rearrangement planning, which is integrated into a complete pipeline from scene observations to control executions.

2 Related Work

Rearrangement. The most relevant area of research to our work would be that of rearrangement planning, which is a subset of task and motion planning (TAMP). TAMP [10] involves planning for a robotic agent to operate in an environment with several objects by taking actions to move and change the state of said objects. Rearrangement – specifically proposed as an important challenge for embodied AI – narrows this by defining itself as the act of bringing a given environment to a goal state [1]. Various strategies are utilized by systems aiming to address these problems. Some methods propose the combination of a high-level task planner with a low-level motion planner [11, 12, 13, 14]. Others tackle the problem by using sampling-based techniques in conjunction with search algorithms [15, 16, 17]. However, all of these focus on a yet narrower subset of rearrangement where objects are largely restricted to a 2D workspace and the interplay between objects is largely ignored or not present beyond dealing with clutter.

Deep Learning. More recently, advances have been made which utilize deep learning-based approaches toward solving these problems. These allow relaxing the assumptions on the objects involved by proposing more flexible collision detection [18, 9], better generalization to the real world [19, 20, 21, 4, 6, 2], and unseen environments [7, 2] through vision-based perception. Other methods utilize semantic information but use it to allow for more general goals like similarity to an inferred target distribution [22, 23, 24] or guidance using language [25, 26]. Perhaps most similar to our method, some approaches make use of graph neural networks to model object relations in the scene [8, 5].
However, in all the methods mentioned above, the target scenes are either restricted to a single level or are much simpler in terms of their construction, and the spatially weaker relations allow for less closely dependent substructures.

Construction and Assembly. Some works explore the task of assembly or construction [27, 28, 29], which allows objects to become fixed to one another and, unlike our case, does not require the creation of stable non-collapsing structures. Furthermore, in contrast to our approach, Blocks [27] and RoboAssembly [28] also assume perfect information of objects in simulation, making generalization to the real world very difficult and requiring much higher planning steps due to the use of reinforcement learning. Additionally, while the aforementioned approach to construction [29] also performs long-horizon planning like ours, they restrict their target to be one of 4 possible predefined structures, though their focus is on multi-robot planning.

3 Problem Definition

Let $X = \{x_1, x_2, \ldots, x_n\} \subseteq \mathbb{R}^{n\times3}$ be the set of all points that can be occupied by a scene, $\Xi$ be the powerset of $X$, and $O = \{o_1, o_2, \ldots, o_m\}$ be the set of all object instances in any given scene, where each $o \in \mathbb{R}^6\times\mathcal{C}$ with $\mathcal{C}$ being the set of all classes an object instance can take. Next, we define a scene as two sets of points $X, Y \in \Xi$ such that $X \subseteq Y$, where $X$ represents the observable scene, whereas $Y$ represents the complete scene. Furthermore, we can define a subset selection operator $S \in \mathcal{S}$ as $S: \Xi \mapsto \Xi$, where $S(X) \subseteq X$ for all $X \in \Xi$. The goal is to construct a high-level planner $\pi_H: \Xi\times\Xi \mapsto \mathcal{P}$ that acts on partially observable sets $X_I, X_T \in \Xi$ of the initial and target scene to produce a hierarchical plan $P = \{(S_0, \delta_0), \ldots, (S_M, \delta_M)\} \in \mathcal{P}$, where $S_i \in \mathcal{S}$ is a subset selection operator, $\delta_i \in \Delta \subseteq SE(3)$ is a valid spatial transformation, $M \in \mathbb{N}$ indicates the number of steps, and each $i$th step $p_i = (S_i, \delta_i) \in P$ contains the selection and transformation action. Hence, our objective is to determine a plan $P$ that selects subsets of point clouds, $\{S_0(X_I), \cdots, S_M(X_I)\}$, in the initial scene $X_I$ and transforms them using $\{\delta_0, \cdots, \delta_M\}$, in the minimum number of steps $M$, to reach an achieved state $X_A$ whose object-specific point clouds are subsets of those given by the complete target state $Y_T$. For application to robot rearrangement, we further define a low-level planner $\pi_L: \mathcal{Q} \mapsto \mathcal{A}$, where $\mathcal{Q}$ and $\mathcal{A}$ refer to the configuration and action spaces for the robot. Every step $p_i = (S_i, \delta_i) \in P$ from the plan output by $\pi_H$ will have a sequence of achievable configurations $q_{\delta_i}$ associated with it that the low-level planner will take and further produce a sequence of actions $A \in \mathcal{A}$ to achieve the intermediate state defined by the application of $p_i$ on the previous state.
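As a data structure, the hierarchical plan above is straightforward to encode. The sketch below is our own illustration (types and helper names are hypothetical): each step pairs a point-cloud selection operator with an SE(3) transformation, which a low-level planner can then consume.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

PointCloud = np.ndarray  # (n, 3) array of scene points

@dataclass
class PlanStep:
    select: Callable[[PointCloud], PointCloud]  # subset selection operator S_i
    delta: np.ndarray                           # 4x4 homogeneous SE(3) transform

def apply_step(step: PlanStep, scene: PointCloud) -> PointCloud:
    """Transform the selected points S_i(X) by delta_i (homogeneous coords)."""
    pts = step.select(scene)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ step.delta.T)[:, :3]

Plan = List[PlanStep]  # P = {(S_0, delta_0), ..., (S_M, delta_M)}
```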
For convenience and brevity, we introduce some more notation for the remainder of this paper. Any arbitrary set $u$ with $|u| = n$ is denoted as $u^{\{n\}}$. Also, for any arbitrary set $U$, we denote its association with a scene $\iota$ by a subscript, $U_\iota$, and a subset of $U$ associated with some object or characteristic $\omega$ by a superscript, $U^\omega \subseteq U$. And, for any graphs, superscripts denote different graph instances: $G^T, G^Z, G^D$.

4 Method

This section details our approach for multi-level rearrangement planning to generate a multi-step plan, the execution of which results in achieving the unseen structured target scene. Figure 2 shows an overview of the model architecture and the basic flow of the approach. Additional model and implementation details are available in Section 7.2 of the Appendix.

Point Cloud and Feature Extraction. Partial point clouds are generated from the target scene and initial scene images. First, we obtain the RGB-D images from multiple viewpoint cameras surrounding the scene area and calculate their corresponding point clouds in the world frame from the cameras' known world-frame positions. We do preliminary filtering to remove outlier points. This is done for both the target scene and the initial scene to obtain $X_T$ and $X_I$. A trained PointNet++ [30] based segmentation network then takes the scene point cloud and returns the segmented identity values for each point. Next, we extract the object-specific point clouds to obtain $X^o_T$ and $X^o_I$ for each object $o$. This makes up the Segmentation Module. As part of the Initial Graph Generation, we use another PointNet++ based network to extract object-level latent features for each object point cloud and concatenate them to get $w^{\{N\}}_I$ and $w^{\{N\}}_T$, which will act as part of the node features for the scene graphs. Note that $N$ is the number of objects in each scene.

Object Pose Alignment. Now comes the task of object alignment and correspondence creation. Between the target and initial scenes, we have multiple objects of the same type, so we want to select correspondences such that the orientation change for an object $o_i$ is minimized from the initial to the target scene. To that end, for each object $X^{o_i}_T$ in the target scene, we use its predicted identity $y^{o_i}_T$ and sample its known mesh to obtain a default surface point cloud $X^{o_i}_d$. These complete point clouds, $X^{o_i}_d$, and the partial point clouds, $X^{o_i}_T$, from the target scene are then used to predict a spatial transformation $T^{o_i}_{T\leftarrow d}$, using TEASER++ [31], which aligns $X^{o_i}_d$ with $X^{o_i}_T$. This process is repeated for each object point cloud $X^{o_j}_I$ in the initial scene whose identity $y^{o_j}_I$ matches $o_i$'s identity $y^{o_i}_T$ to get a set of transformations $\{T^{o_0}_{d\leftarrow I}, T^{o_1}_{d\leftarrow I}, T^{o_2}_{d\leftarrow I}, \ldots\}$. These are then right-multiplied with $T^{o_i}_{T\leftarrow d}$ to obtain $\{T^{o_0}_{T\leftarrow I}, T^{o_1}_{T\leftarrow I}, T^{o_2}_{T\leftarrow I}, \ldots\}$. From these, we choose the one which minimizes the magnitude of rotation from the initial to the target scene (to avoid reorienting with multiple pick-and-place actions). This gives us the canonical correspondence (and associated transformations $T^{o_i}_{T\leftarrow I}, T^{o_i}_{T\leftarrow d}$) for $o_i$ between the initial and target scenes. This makes up the Pose Alignment Module. Finally, as part of the Initial Graph Generation, we obtain centroids for $X^{o_i}_d\cdot(T^{o_i}_{T\leftarrow d})^\top$ and $X^{o_i}_d\cdot(T^{o_i}_{I\leftarrow d})^\top$, which are put into a positional encoder [32] to obtain positional features for each object in both the initial and target scenes: $b^{\{N\}}_I, b^{\{N\}}_T$. Together, the object-level features and positional features give us the node features $n^{\{N\}}_I = [w^{\{N\}}_I\,\|\,b^{\{N\}}_I]$ and $n^{\{N\}}_T = [w^{\{N\}}_T\,\|\,b^{\{N\}}_T]$ for the objects in the scenes for the initial graph, where $\|$ denotes concatenation.
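The minimal-rotation correspondence selection can be sketched as follows. This is our own illustration using scipy, not the paper's code: T_T_d stands for the TEASER++ alignment of the default point cloud to the target object, and cand_T_d_I for the candidate initial-scene alignments, all as 4x4 homogeneous matrices.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pick_correspondence(T_T_d, cand_T_d_I):
    """Choose the initial-scene object whose move to the target pose needs
    the least re-orientation.

    T_T_d:      4x4 transform aligning the default point cloud to the target pose.
    cand_T_d_I: list of 4x4 transforms aligning each same-class initial-scene
                object to the default point cloud.
    Returns (index, T_T_I) minimizing the rotation angle of T_T_I = T_T_d @ T_d_I.
    """
    best, best_angle = None, np.inf
    for j, T_d_I in enumerate(cand_T_d_I):
        T_T_I = T_T_d @ T_d_I  # initial -> target transform for candidate j
        # Rotation-vector norm equals the rotation angle in radians.
        angle = np.linalg.norm(Rotation.from_matrix(T_T_I[:3, :3]).as_rotvec())
        if angle < best_angle:
            best, best_angle = (j, T_T_I), angle
    return best
```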
Graph Node Encoder. The graph node encoder, $g_\Phi: \mathcal{G} \mapsto \mathcal{Z}$, is based on a graph attention neural network [33, 34], known as GAT, which takes in an initially fully connected scene graph $G^T = (V, E) \in \mathcal{G}$ of the target scene with the aforementioned $n^{\{N\}}_T$ – which contain both object-level features and positional features for each object in the target scene – serving as the initial node features. These initial node features are updated using a modified convolution that occurs for any node $i$ using its neighbor set $\mathcal{N}(i)$. The exact node feature update done by the convolutional layer is given by $n'_i = \alpha_{i,i}\Phi n_i + \sum_{j\in\mathcal{N}(i)}\alpha_{i,j}\Phi n_j$, where $\Phi$ is the learnable parameter for the update mechanism and the attention coefficients $\alpha$, which quantify the importance of neighboring nodes, are given by
\[
\alpha_{i,j} = \frac{\exp\big(a^\top\mathrm{LeakyReLU}(\Phi[n_i\,\|\,n_j])\big)}{\sum_{k\in\mathcal{N}(i)\cup\mathcal{N}(j)}\exp\big(a^\top\mathrm{LeakyReLU}(\Phi[n_i\,\|\,n_k])\big)}, \tag{1}
\]
where $\top$ represents transposition and $a$ is the learnable parameter for the attention mechanism (for the derivation and more details, please refer to [33, 34]). The output from the graph encoder is a latent scene graph $G^Z \in \mathcal{Z}$ for the target scene containing high-level features for each of the objects. In our problem setting, a GAT-based model with multi-headed attention being averaged outperformed vanilla GCNs.

MLP Edge Decoder. The MLP-based edge decoder $h_\Psi: \mathcal{Z} \to \mathcal{D}$ maps the latent scene graph $G^Z$ to the structural dependency graph $G^D$ for the target scene containing inter-object dependency information. Specifically, $h_\Psi$ can be queried with a pair of high-level features $z_i, z_j$ representing objects $o_i, o_j$ and will decode them into the structural relationship between them. This relationship of dependence is asymmetric, and the decoder is used to query every ordered pair of nodes in the graph to obtain the respective dependence probabilities $\rho^{\{N\times N\}}$, where $\rho_{i,j} = h([z_i\,\|\,z_j]; \Psi)$. Given $\rho^{\{N\times N\}}$, we construct an inferred adjacency matrix for the structural dependency graph $G^D$ of the target scene with values $G^D_{i,j} = \rho_{i,j} > t^*$, where $t^*$ is some threshold value. The directed edges of this graph are visualized in Figure 2 as green arrows.

Structured Plan Creation. Once we have a directed acyclic graph $G^D$ representing the inferred structural relationship between each pair of different objects in the target scene, we perform a topological sorting to obtain a valid sequencing with which the objects can be introduced so as to construct the structure in the target scene. Next, we make use of the aforementioned object correspondences between the initial and target scenes to pick the object from its current position (formulated as a point cloud selection $S$) in the initial scene and place it in the target position given by the associated spatial transformation $\delta = T$ obtained from pose alignment. All of this provides us with a plan $P = \{(S_0, \delta_0, k_0), \ldots, (S_N, \delta_N, k_N)\}$, where $N$ is the number of objects. For plan step $p_i = (S_i, \delta_i, k_i) \in P$, $k_i \le N \in \mathbb{N}$ denotes the dependence hierarchy identifier in the target structure that the object represented by the selection $S_i$ belongs to. Particularly, the plan is agnostic to any inversions involving plan steps $p_i, p_j \in P$ such that their associated hierarchy identifiers match: $k_i = k_j$. That is, objects belonging to the same class of dependence hierarchy can be placed in any order relative to each other. This makes up the Structural Planner Module.
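The serialization step can be sketched compactly in Python. This is our own illustration: edge_decoder stands in for $h_\Psi$ and z for the per-object latent features, and the standard library's graphlib supplies both the topological sort and the cycle (circular dependency) check.

```python
from graphlib import TopologicalSorter, CycleError

def build_plan_order(z, edge_decoder, t_star=0.5):
    """Threshold pairwise dependence probabilities into G^D and serialize it.

    z:            list of N per-object latent feature vectors (from g_Phi).
    edge_decoder: callable giving rho_ij = P(object i depends on object j).
    Returns object indices in a valid placement order, or None on a cycle
    (a circular dependency, i.e., an unrecoverable prediction error).
    """
    N = len(z)
    deps = {i: {j for j in range(N) if j != i
                and edge_decoder(z[i], z[j]) > t_star} for i in range(N)}
    try:
        # Supporting objects (predecessors) come before the objects they support.
        return list(TopologicalSorter(deps).static_order())
    except CycleError:
        return None
```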
Planning Algorithm. Algorithm 1 defines our planning algorithm, which yields a multi-step plan for performing efficient multi-level rearrangement. For given scenes, we extract the observable information and convert it into scene point clouds $X_I, X_T$. These are taken and segmented into a collection of object point clouds $X^{\{N\}}_I, X^{\{N\}}_T$ and their respective predicted identities $y^{\{N\}}_I, y^{\{N\}}_T$, where $N$ is the number of objects in our scenes (Lines 1-2). Using these, we calculate the correspondences for objects in the initial and target scenes to produce reordered versions of the object point clouds along with transformations $T^{\{N\}}_{T\leftarrow I}$ for each object from the initial to the target scene and transformations $T^{\{N\}}_{T\leftarrow d}$ from the objects' known default point clouds to their respective counterparts in the target scene (Line 3). Next, we use the target object point clouds $X^{\{N\}}_T$ with a PointNet++ [30] based model to extract object-level features $w^{\{N\}}_T$ and the default point cloud transformed to the target pose with a positional encoder [32] to get higher-level embeddings $b^{\{N\}}_T$ for the target position (Lines 4-6). Together these make up and construct the initial graph $G^T$, which is provided to the graph encoder network $g_\Phi$ to get a graph with higher-level node features $z^{\{N\}}$ for each object, $G^Z$ (Line 7). Using these, we perform ordered pairwise queries to the edge decoder $h_\Psi$ to populate the probabilities $\rho^{\{N\times N\}}$ of the $i$th object depending on the $j$th object, which are used along with a threshold $t^*$ to determine the canonical structural dependency graph $G^D$ for the target scene (Lines 8-11). Given $G^D$, we now do a preliminary check of the inferred dependence by seeing whether the dependence graph is directed acyclic, because if this is not the case, a circular dependency exists. Since all target scenes are known to be reachable with one manipulator, a cycle indicates a prediction error and we report failure (Lines 12-13). Finally, we do a topological sorting of the dependence graph $G^D$ and iterate over the result, using the predicted object transformations $T^{\{N\}}_{T\leftarrow I}$ from the initial scene to the target to create a rearrangement plan $P$ (Lines 14-18). For each element $p_i = (S_i, \delta_i, k_i) \in P$, the execution will pick the object represented by the set of points $S_i(X_I)$ and execute low-level actions to result in an effective transformation of $\delta_i = T_i$ on said points, in an order such that $k_i$ increases monotonically.

Algorithm 1: SCL-Planning(X)
 1: X_I, X_T ← PointCloudExtraction(X)  ▷ initial and target scene point clouds (pcds)
 2: X_I^{N}, X_T^{N}, y_I^{N}, y_T^{N} ← Segmentation(X_I, X_T)  ▷ object pcds and their identities
 3: X_{I,T,d}^{N}, T_{T←I}^{N}, T_{T←d}^{N} ← PoseAlignment(X_I^{N}, X_T^{N}, y_I^{N}, y_T^{N})  ▷ align and correspond
 4: w_T^{N} ← ObjectFeatures(X_T^{N})
 5: b_T^{N} ← PositionalFeatures(X_d^{N}, T_{T←d}^{N})
 6: n_T^{N} ← NodeFeatures(w_T^{N}, b_T^{N})  ▷ node features for initial graph
 7: G^Z = (z^{N}, E) ← g_Φ(G^T = (n_T^{N}, E))  ▷ graph with higher-level node features
 8: ρ^{N×N} ← ∅
 9: for (i, j) ∈ E do
10:     ρ_{i,j}^{N×N} = h_Ψ([z_i^{N} ∥ z_j^{N}])  ▷ edge decoding for each pairwise edge
11: G^D = (N, ρ^{N} > t*)  ▷ creation of dependency graph based on threshold
12: if IsNotDAG(G^D) then
13:     return Circular Dependency Failure  ▷ detecting unrecoverable prediction error
14: P^{N} ← ∅
15: for i, k_i ∈ TopologicalSorting(G^D) do
16:     S_i, δ_i ← RearrangementStep(i, T_{T←I}^{N})  ▷ i-th object selection operator and transformation
17:     P ← P ∪ {(S_i, δ_i, k_i)}  ▷ adding step i to plan
18: return P

Figure 3: Examples of generated structures in the dataset. For scale, the red cuboid is 3×3×6 cm³.

Data Generation and Training. To train our models, we generate synthetic data containing intuitive structures built from a set of 8 possible object primitives, which are captured from 3 fixed viewpoints.
We restrict the set of initial orientations for each object to allow only those which provide a non-negligible surface area on the bottom surface to allow for stable placement (e.g., a cylinder is not allowed to be placed in a way where it may roll). And, for all but the final layer, only object orientations that provide a stable surface of non-negligible surface area will be selected for placements (e.g., a pyramid will not be placed upright but may be placed sideways). For the 1st layer, objects are initially placed randomly within the bounds of the target scene, ensuring no collision. Following this, a 2-step process is repeated for each $i$th layer. First, we attempt to place objects supported by 2 objects in the previous layer. For this, we consider each pair of objects in the previous layer with available surface area and attempt to place an object in some valid orientation given the distance between the supporting objects. This is done by sampling some points on the top surface of the supporting objects and fitting a plane to them, onto which the initial valid orientation is projected to get the object pose, as sketched below. Note that this allows for random placements that are not constrained to be axis-aligned. Once all such supporting pairs in the previous layer have been exhausted, we move on to the second phase. In the second phase, we repeat the process but attempt to place objects on top of just one object from the previous layer. This is repeated until the step budget is exhausted. In simulation, we calculate the ground truth dependency graph using the y-components of the contact force vectors between each pair of objects. The graph encoder $g_\Phi$ and edge decoder $h_\Psi$ were trained together in a supervised manner to minimize the binary cross entropy loss between the adjacency matrices of the predicted and ground truth dependency graphs. Some examples of generated structures are given in Figure 3. We use PyBullet [35] for the simulation environment and Trimesh [36] for collision and geometric checking. More information, including the generation algorithm's pseudo-code, is provided in Section 7.1 of the Appendix.
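The plane-fitting step in the placement procedure can be sketched as follows. This is our own illustration (a least-squares plane fit via SVD), not the paper's implementation; it returns the support-plane normal used to project the candidate orientation, taking y as the height axis to match the contact-force convention above.

```python
import numpy as np

def fit_support_plane(points):
    """Least-squares plane through sampled top-surface points.

    points: (n, 3) array sampled from the supporting objects' top surfaces.
    Returns (centroid, unit normal with positive y-component).
    """
    centroid = points.mean(axis=0)
    # The plane normal is the right-singular vector of the centered points
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[1] < 0:   # keep the normal pointing "up" (y is the height axis)
        normal = -normal
    return centroid, normal / np.linalg.norm(normal)
```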
5 Results

We perform four sets of experiments. First, we tested our method on unseen structures in the simulated environment and compared it with some model-based and classical baselines. Second, we show how our method generalizes to multi-level object rearrangement tasks with structures containing a higher number of objects. Third, we show our method's generalization to multi-level object rearrangement tasks with unseen structures consisting of a higher number of levels than were in the training set. Finally, we demonstrate our method's sim-to-real generalization on multi-level object rearrangement tasks in the real world. The real-world experiments also show our method generalizing to and operating on structures with objects placed in locations that cross between different levels – something also not present in the training data. Also note that the errors and steps were only calculated for successful cases.

Evaluation Metrics. We use the following metrics for quantitative comparisons of different tasks. Success Rate: The percentage of successfully solved unseen scenes, where success is defined as no object's achieved pose differing from the target position by more than 1 cm or the target orientation by 0.03 (normalized quaternion distance). Completion Rate: The percentage of objects whose achieved pose was within the success threshold (defined above) of their target in the scene, averaged over all scenes. Planning Steps: The ratio of the planning steps required to rearrange the objects from the initial scene to the target scene and the number of objects in the scene, averaged over all scenes. Position Error: The mean Euclidean distance between an object's achieved and target position for all objects in the scene, averaged over all scenes. Orientation Error: The mean quaternion distance $\varphi(q_1, q_2) = \min\{\|q_1 + q_2\|_2, \|q_1 - q_2\|_2\}$ [37], normalized to be between 0 and 1, between the quaternions representing the achieved and the target orientations for all objects in the scene, averaged over all scenes.

Baselines. The following baseline planners take over once the object correspondences between the target and initial scenes have been calculated. Classical Iterative Baseline: This is a model-free planner which iteratively selects objects from the initial scene that are not yet in their target pose. Once it has a candidate selected, it checks whether the candidate can be stably placed in its target pose without falling by attempting a set of ray checks on the target location. If the check fails, it continues the iteration. Once it finds a valid object for placement, it places the object and restarts its iteration. Classical Random Baseline: This is a model-free planner which randomly selects objects from the initial scene that are not yet in their target pose. Once it selects a candidate, it does a stability check similar to the other classical planner. If the check fails, it removes said object from its selection pool and continues random selection. If the check succeeds, it moves the object to its target pose and resets its selection pool to the current set of objects not in their target pose.
Baselines. The following baseline planners take over once the object correspondences between the target and initial scenes have been calculated. Classical Iterative Baseline: a model-free planner which iteratively selects objects from the initial scene that are not yet in their target pose. Once it has a candidate selected, it checks whether the candidate can be stably placed in its target pose without falling by attempting a set of ray checks on the target location. If the check fails, it continues the iteration. Once it finds a valid object for placement, it places the object and restarts its iteration. Classical Random Baseline: a model-free planner which randomly selects objects from the initial scene that are not yet in their target pose. Once it selects a candidate, it performs a stability check similar to the other classical planner. If the check fails, it removes said object from its selection pool and continues the random selection. If the check succeeds, it moves the object to its target pose and resets its selection pool to the current set of objects not yet in their target pose. MLP Baseline: an MLP-based object selection network trained on node features [n_i^N || n_T^N] obtained from the scene after the execution of step p_i from a ground-truth plan, with the goal of having the selection network learn and output the correct object selection operation M_i ∈ p_i for planning. N was set to a maximum value of 10, and node features were padded with zeros if the scene had fewer objects.

Comparison Analysis. Table 1 shows our comparison results with the aforementioned baselines, evaluated on upwards of 400 scenes of unseen structures. Our approach outperforms all baselines in terms of success and completion rate. The MLP baseline had a tendency to get stuck in a local minimum when deciding which object to move next, sometimes leaving it unable to attempt rearrangement of the remaining objects and resulting in a lower completion rate. Classical baselines were given a budget of 2N steps (where N is the number of objects in a scene) and, on average, take around 50% more steps for success. Our approach generalizes to multiple objects, unlike the MLP, and provides better performance in fewer steps compared to the classical approaches.

Scalability Analysis. Table 2 shows our method's ability to generalize to scenes with a variable number of objects, evaluated over more than 400 unseen scenes of structures with at most 3 layers. Our approach performs very well in scenes with fewer than 15 objects despite having been trained on a set containing an average of 9 objects. While a higher number of objects in a scene causes occlusion, resulting in poorer prediction, our algorithm can still recover and deliver high completion rates even when success rates drop.

Table 2: Performance of our approach as the number of objects in the scene increases. A larger number of objects in the same space results in occlusion, causing lower success rates, but the completion rates remain high.

Number of Objects (N) | Success (%) ↑ | Completion (%) ↑ | Position Error (m) ↓ | Orientation Error (0-1) ↓
8 ⩽ N ⩽ 10            | 95.1          | 98.7             | 0.002 ± 0.003        | 0.0021 ± 0.0028
10 ⩽ N < 15           | 93.9          | 98.2             | 0.002 ± 0.003        | 0.0024 ± 0.0036
15 ⩽ N < 20           | 81.1          | 97.9             | 0.003 ± 0.004        | 0.0028 ± 0.0043
20 ⩽ N < 25           | 73.7          | 96.7             | 0.003 ± 0.004        | 0.0031 ± 0.0054

Table 3 shows the performance of our approach on over 1000 scenes containing meaningful structures with a variety of levels, despite having been trained only on scenes containing at most 3 levels, showing its ability to generalize to novel and unseen structure configurations. On structures with fewer than 3 levels, it has a 100% success rate. While occlusion due to denser structures causes a lower success rate as the number of levels increases, our algorithm still maintains a high completion rate.

Table 3: Generalization of our approach to structures with more levels. Meaningful structures with more levels again cause occlusion, and we see a similar trend of lower success rates but good completion rates. Note that our method was trained only on structures containing at most 3 levels.

Number of Levels | Success (%) ↑ | Completion (%) ↑ | Position Error (m) ↓ | Orientation Error (0-1) ↓
1                | 100.0         | 100.0            | 0.001 ± 0.000        | 0.0011 ± 0.0004
2                | 100.0         | 100.0            | 0.002 ± 0.003        | 0.0025 ± 0.0038
3                | 94.5          | 98.6             | 0.002 ± 0.003        | 0.0036 ± 0.0045
4                | 88.0          | 98.4             | 0.002 ± 0.003        | 0.0036 ± 0.0038
5                | 79.8          | 97.0             | 0.002 ± 0.003        | 0.0038 ± 0.0039
Sim2Real Generalization. We performed a set of real-world experiments using a UR5e robot with a suction gripper and three Intel RealSense cameras for scene observation. We set up target structures ranging from 8 to 16 blocks, with variable levels within the structure and block locations that cross between levels (an example is shown in Figure 1). Despite being trained in simulation with structures limited to 3 levels and discrete level layers, our method generalizes well to novel problem settings in the real world. The demonstration videos are provided on the project webpage.

6 Conclusions, Limitations, and Future Works
We presented Structured Concept Learning (SCL), a graph attention network-based approach for multi-level rearrangement planning. SCL is an end-to-end approach which infers the dependencies of objects and substructures to build multi-level structures from point-cloud sets of the initial and target scenes obtained from RGB-D cameras. Our approach was trained on a novel dataset gathered by our intuitive multi-level structure generation procedure. In addition to demonstrating its sim-to-real generalization, we evaluated our approach on challenging problems defined by target scenes containing different unseen structures with a variety of objects and level hierarchies. As for limitations, very dense structures can cause occlusion, which results in very incomplete point clouds, leading to faulty inferred dependencies or incorrect spatial transformations for movement. The current approach also requires multi-view perception and lacks feedback control to account for execution error caused by hardware limitations such as forward momentum from the suction gripper. For future work, we aim to augment SCL with robust segmentation and point-cloud shape completion to reduce incorrect predictions. Another future objective is to extend SCL with multi-robot task allocation, using its existing ability to find independent substructures to control multiple robots for faster execution. More avenues for exploration include the analysis of various graph networks and their application to structural planning, and augmenting the dataset with objects having more diverse dimensions, leading to even more interesting structures. Lastly, we would like to extend SCL to more general multi-robot tasks and incorporate multi-agent-specific considerations, allowing for improved complex execution to accomplish more demanding target states, including non-monotone cases and those requiring in-place manipulation.

References
[1] D. Batra, A. X. Chang, S. Chernova, A. J. Davison, J. Deng, V. Koltun, S. Levine, J. Malik, I. Mordatch, R. Mottaghi, et al. Rearrangement: A challenge for embodied AI. arXiv preprint arXiv:2011.01975, 2020.
[2] A. H. Qureshi, A. Mousavian, C. Paxton, M. C. Yip, and D. Fox. NeRP: Neural rearrangement planning for unknown objects. Robotics: Science and Systems (RSS), 2021.
[3] H. Tian, C. Song, C. Wang, X. Zhang, and J. Pan. Sampling-based planning for retrieving near-cylindrical objects in cluttered scenes using hierarchical graphs. IEEE Transactions on Robotics, 39(1):165–182, 2023. doi:10.1109/TRO.2022.3191596.
[4] B. Tang and G. S. Sukhatme. Selective object rearrangement in clutter. In Conference on Robot Learning, pages 1001–1010. PMLR, 2023.
[5] Y. Huang, A. Conkey, and T. Hermans. Planning for multi-object manipulation with graph neural network relational classifiers. IEEE Conference on Robotics and Automation (ICRA), 2023.
[6] A. Zeng, P. Florence, J. Tompson, S. Welker, J. Chien, M. Attarian, T. Armstrong, I. Krasin, D. Duong, V. Sindhwani, and J. Lee. Transporter networks: Rearranging the visual world for robotic manipulation. Conference on Robot Learning (CoRL), 2020.
[7] H. Wu, J. Ye, X. Meng, C. Paxton, and G. Chirikjian. Transporters with visual foresight for solving unseen rearrangement tasks. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022.
[8] Y. Zhu, J. Tremblay, S. Birchfield, and Y. Zhu. Hierarchical planning for long-horizon manipulation with geometric and symbolic scene graphs. International Conference on Robotics and Automation (ICRA), 2021.
[9] A. Murali, A. Mousavian, C. Eppner, A. Fishman, and D. Fox. CabiNet: Scaling neural collision detection for object rearrangement with procedural scene generation. arXiv preprint arXiv:2304.09302, 2023.
[10] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems, 4:265–293, 2021.
[11] C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling. PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In Proceedings of the International Conference on Automated Planning and Scheduling, volume 30, pages 440–448, 2020.
[12] S. H. Cheong, B. Y. Cho, J. Lee, C. Kim, and C. Nam. Where to relocate?: Object rearrangement inside cluttered and confined environments for robotic manipulation. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 7791–7797. IEEE, 2020.
[13] R. Wang, K. Gao, D. Nakhimovich, J. Yu, and K. E. Bekris. Uniform object rearrangement: From complete monotone primitives to efficient non-monotone informed search. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6621–6627. IEEE, 2021.
[14] K. Gao, D. Lau, B. Huang, K. E. Bekris, and J. Yu. Fast high-quality tabletop rearrangement in bounded workspace. International Conference on Robotics and Automation (ICRA), 2021.
[15] H. Song, J. A. Haustein, W. Yuan, K. Hang, M. Y. Wang, D. Kragic, and J. A. Stork. Multi-object rearrangement with monte carlo tree search: A case study on planar nonprehensile sorting. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9433–9440. IEEE, 2020.
[16] Y. Labbé, S. Zagoruyko, I. Kalevatykh, I. Laptev, J. Carpentier, M. Aubry, and J. Sivic. Monte-carlo tree search for efficient visually guided rearrangement planning. IEEE Robotics and Automation Letters, 5(2):3715–3722, 2020.
[17] J. Lee, C. Nam, J. Park, and C. Kim. Tree search-based task and motion planning with prehensile and non-prehensile manipulation for obstacle rearrangement in clutter. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 8516–8522, 2021. doi:10.1109/ICRA48506.2021.9561895.
[18] M. Danielczuk, A. Mousavian, C. Eppner, and D. Fox. Object rearrangement using learned implicit collision functions. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6010–6017. IEEE, 2021.
[19] W. Goodwin, S. Vaze, I. Havoutis, and I. Posner. Semantically grounded object matching for robust robotic scene rearrangement. In 2022 International Conference on Robotics and Automation (ICRA), pages 11138–11144. IEEE, 2022.
[20] L. Weihs, M. Deitke, A. Kembhavi, and R. Mottaghi. Visual room rearrangement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5922–5931, 2021.
[21] A. Goyal, A. Mousavian, C. Paxton, Y.-W. Chao, B. Okorn, J. Deng, and D. Fox. IFOR: Iterative flow minimization for robotic object rearrangement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14787–14797, 2022.
[22] M. Wu, F. Zhong, Y. Xia, and H. Dong. TarGF: Learning target gradient field to rearrange objects without explicit goal specification. Advances in Neural Information Processing Systems, 35:31986–31999, 2022.
[23] Q. A. Wei, S. Ding, J. J. Park, R. Sajnani, A. Poulenard, S. Sridhar, and L. Guibas. LEGO-Net: Learning regular rearrangements of objects in rooms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19037–19047, 2023.
[24] I. Kapelyukh, V. Vosylius, and E. Johns. DALL-E-Bot: Introducing web-scale diffusion models to robotics. IEEE Robotics and Automation Letters, 8(7):3956–3963, Jul 2023. doi:10.1109/lra.2023.3272516.
[25] W. Liu, C. Paxton, T. Hermans, and D. Fox. StructFormer: Learning spatial structure for language-guided semantic rearrangement of novel objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 6322–6329. IEEE, 2022.
[26] E. Stengel-Eskin, A. Hundt, Z. He, A. Murali, N. Gopalan, M. Gombolay, and G. Hager. Guiding multi-step rearrangement tasks with natural language instructions. In Proceedings of the 5th Conference on Robot Learning, volume 164 of Proceedings of Machine Learning Research, pages 1486–1501. PMLR, 2022. URL https://proceedings.mlr.press/v164/stengel-eskin22a.html.
[27] S. K. S. Ghasemipour, D. Freeman, B. David, S. S. Gu, S. Kataoka, and I. Mordatch. Blocks assemble! Learning to assemble with large-scale structured reinforcement learning. International Conference on Machine Learning (ICML), 2022.
[28] M. Yu, L. Shao, Z. Chen, T. Wu, Q. Fan, K. Mo, and H. Dong. RoboAssembly: Learning generalizable furniture assembly policy in a novel multi-robot contact-rich simulation environment. arXiv preprint arXiv:2112.10143, 2021.
[29] V. N. Hartmann, A. Orthey, D. Driess, O. S. Oguz, and M. Toussaint. Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics, 2023.
[30] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Conference on Neural Information Processing Systems (NIPS), 2017.
[31] H. Yang, J. Shi, and L. Carlone. TEASER: Fast and certifiable point cloud registration. IEEE Transactions on Robotics (T-RO), 2020.
[32] J. Ortiz, A. Clegg, J. Dong, E. Sucar, D. Novotny, M. Zollhoefer, and M. Mukadam. iSDF: Real-time neural signed distance fields for robot perception. Robotics: Science and Systems (RSS), 2022.
[33] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. International Conference on Learning Representations (ICLR), 2018.
[34] S. Brody, U. Alon, and E. Yahav. How attentive are graph attention networks? International Conference on Learning Representations (ICLR), 2022.
[35] E. Coumans, Y. Bai, and J. Hsu. PyBullet, 2015. URL https://pybullet.org/.
[36] Dawson-Haggerty et al. trimesh, 2019. URL https://trimsh.org/.
[37] D. Q. Huynh. Metrics for 3D rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 35:155–164, 2009.
[38] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch Geometric. International Conference on Learning Representations (ICLR), 2019.
[39] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. Conference on Neural Information Processing Systems (NIPS), 2017.

7 Appendix
7.1 Data Generation
The basis of data generation is a set of 8 possible object primitives used to construct structures. All objects may be present in the top layer in any valid orientation, but middle (or supporting) layers may only contain certain objects in certain orientations conducive to stable structures. For example, an upright pyramid would only provide a top surface of a single line, and a cylinder that is not upright would be likely to roll and collapse the whole structure, so these are not allowed. Overall, we initially generated 8000 scenes for training and 2000 scenes for evaluation. Our generator was also used to create upwards of 10000 on-the-fly target structures for the evaluation of our pipeline. A rough pseudo-code for the structure generation algorithm is specified in Algorithm 2, but the generation heuristic will be open-sourced with the final manuscript. Note that there are some implementation details regarding the algorithm not specified here, such as the placement criteria including a metric involving the area of the top surface of objects, the placement pose of objects being found by fitting a plane to sampled points from the top surface of the supporting objects in the lower level, and the validation of orientations for placement in layers being an exhaustive search.

Algorithm 2: Generation-Algorithm(B, L^K, O_all). B are the bounds for the scene, L^K specifies the maximum number of objects on each level, K specifies the number of levels, and O_all are all possible object instances that can be placed.
 1  O_S ← ValidSubLevelObjects(O_all)
 2  c_0 ← 0                                            ▷ number of objects placed on level 0
 3  O ← ∅                                              ▷ objects placed
 4  A^K ← ∅                                            ▷ A_i contains all objects available for placement in level i
 5  while c_0 < L_0 do
 6      o ← RandomlyPick(O_S)
 7      if o ∈ B and NotInCollision(o, O) then
 8          PlaceObject(o)
 9          O ← O ∪ {o}
10          A_0 ← A_0 ∪ {o}
11          c_0 ← c_0 + 1
12  c^N ← 0                                            ▷ c_i is the number of objects placed on level i
13  i ← 1
14  while i ⩽ K do
15      for (a, b) ∈ A_{i−1} do
16          v ← ValidPlacementObjects(a, b, O, O_all)  ▷ object instances that can be supported by a and b
17          v ← CollisionFree(v, O)                    ▷ filters to give only collision-free placements
18          for o ∈ v do
19              PlaceOnObjects(o, a, b)                ▷ places o to be supported by a and b
20              A_i ← A_i ∪ {o}
21              O ← O ∪ {o}
22              c_i ← c_i + 1
23              if c_i ≥ L_i then
24                  break
25          if c_i ≥ L_i then
26              break
27      for a ∈ A_{i−1} do
28          v ← ValidPlacementObjects(a, O, O_all)     ▷ object instances that can be supported by a
29          v ← CollisionFree(v, O)                    ▷ filters to give only collision-free placements
30          for o ∈ v do
31              PlaceObject(o, a)                      ▷ places o to be supported by a
32              A_i ← A_i ∪ {o}
33              O ← O ∪ {o}
34              c_i ← c_i + 1
35              if c_i ≥ L_i then
36                  break
37          if c_i ≥ L_i then
38              break
39      i ← i + 1
40  return O

7.2 Model Architecture Details and Training
All graph neural networks were implemented using PyTorch Geometric (PyG) [38], and all conventional neural networks were implemented using PyTorch [39].

7.2.1 Positional Encoding
We utilized the positional encoding implementation specified in [32]. Specifically, the positional encoding b_T^o for some object instance o in the target scene is given by

b_T^o = ⟨ sin(2^0 A x_o), cos(2^0 A x_o), ..., sin(2^L A x_o), cos(2^L A x_o) ⟩,   (2)

where x_o is the 3D position vector associated with o (which we get by calculating the centroid of the point cloud obtained from sampling the known default position, transformed to the target orientation), and the rows of A are the outward-facing unit-norm vertices of a twice-tessellated icosahedron. We use no offset, a scale of 1, a minimum degree of 0, and a maximum degree of 5 to calculate L, which results in an encoding of size 511. For more details, please refer to [32].
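A minimal NumPy sketch of the encoding in Eq. (2) follows; the direction matrix A is stubbed with random unit vectors here purely for illustration, whereas the paper derives it from a twice-tessellated icosahedron, so the output size differs from the reported 511.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(42, 3))                  # placeholder for icosahedron vertex directions
    A /= np.linalg.norm(A, axis=1, keepdims=True)

    def positional_encoding(x_o, A, L=5):
        """Concatenate sin/cos of 2^l * (A @ x_o) for l = 0..L, as in Eq. (2)."""
        proj = A @ x_o                            # project the 3D centroid onto each direction
        feats = []
        for l in range(L + 1):
            feats.append(np.sin(2.0 ** l * proj))
            feats.append(np.cos(2.0 ** l * proj))
        return np.concatenate(feats)

    b = positional_encoding(np.array([0.10, -0.20, 0.05]), A)
    print(b.shape)  # (504,) in this illustrative setup; the paper reports size 511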
7.2.2 PointNet++ based segmentation
We used PyTorch Geometric's [38] example model for segmentation, as is, without significant changes. The input point clouds from each scene were downsampled to 1024 points using random sampling, and training was done in a supervised manner using a negative log-likelihood loss calculated from the output and the ground-truth point identities extracted from the simulation. We trained on a set of 8000 scenes and validated performance on 2000 scenes before use.

7.2.3 PointNet++ based feature extraction
This utilized PyTorch Geometric's [38] implementation of PointNet++ with an MLP attached at the end to perform classification over our set of 8 object primitives. Our MLP had 3 layers: the first layer had an input size of 1024 and an output size of 512, the second had an input size of 512 and an output size of 256, and the final layer had an input size of 256 and an output size of 8. To obtain the object-level features w_T^N for each object in the target scene, we remove the final layer and take the 256-sized output as the object's latent features. The network was trained on a classification task for objects in over 800 scenes (each containing an average of 9 objects) and evaluated on objects in over 200 scenes before use.

7.2.4 GAT Graph Encoder
Our graph encoder g_Φ contains 2 graph attention convolution layers, each of which convolves around every node i in the graph using its neighbor set N(i). The exact node feature update done by the convolutional layer is given by

n'_i = α_{i,i} Φ n_i + Σ_{j∈N(i)} α_{i,j} Φ n_j,   (3)

where Φ is the learnable parameter for the update mechanism and the attention coefficients α, which quantify the importance of neighboring nodes, are given by

α_{i,j} = exp(aᵀ σ(Φ[n_i || n_j])) / Σ_{k∈N(i)∪{i}} exp(aᵀ σ(Φ[n_i || n_k])),   (4)

where σ is the LeakyReLU non-linear activation with a slope parameter of 0.2. Each of the graph convolution layers has 16 attention heads, with averaging used as the aggregation function. The first graph attention layer has an input size of 511 and an output size of 256, whereas the second has an input size of 256 and an output size of 128. Training was done in a supervised manner jointly with the edge decoder described next.

7.2.5 MLP Edge Decoder
The edge decoder h_Ψ can be queried with a pair of high-level features z_i, z_j resulting from the graph network encoder, and will decode them into the structural relationship between them. This relationship of dependence is asymmetric, and the decoder is used to query every ordered pair of nodes in the graph to obtain the respective dependence probabilities ρ^{N×N}, where ρ_{i,j} = h([z_i || z_j]; Ψ). h_Ψ has 2 fully connected layers and uses LeakyReLU as its non-linear activation function after the first layer only. The first layer has an input size of 2·128 = 256 and an output size of 128, whereas the second layer has an input size of 128 and an output size of 1. We apply SoftMax to get the associated probabilities for training with the binary cross-entropy loss. The input layer of h_Ψ is twice the output size of g_Φ because finding the existence probability for edge (i, j) involves concatenating the high-level node features z_i, z_j before inputting them into the edge decoder.
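A minimal PyTorch Geometric sketch of this encoder/decoder pair is given below. Layer sizes follow the text; everything else (no inter-layer activation in the encoder, raw logits from the decoder) is an illustrative assumption rather than the authors' released code.

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GATConv

    class GraphEncoder(nn.Module):
        """g_Phi: two GAT layers; 16 heads with averaging (concat=False)
        keeps the stated 511 -> 256 -> 128 feature sizes."""
        def __init__(self):
            super().__init__()
            self.conv1 = GATConv(511, 256, heads=16, concat=False, negative_slope=0.2)
            self.conv2 = GATConv(256, 128, heads=16, concat=False, negative_slope=0.2)

        def forward(self, n, edge_index):
            return self.conv2(self.conv1(n, edge_index), edge_index)

    class EdgeDecoder(nn.Module):
        """h_Psi: 256 -> 128 -> 1, LeakyReLU after the first layer only."""
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(2 * 128, 128)
            self.act = nn.LeakyReLU()
            self.fc2 = nn.Linear(128, 1)

        def forward(self, z_i, z_j):
            # ordered concatenation: the dependence relationship is asymmetric
            return self.fc2(self.act(self.fc1(torch.cat([z_i, z_j], dim=-1))))

Logits over all ordered node pairs can then be scored against the ground-truth adjacency matrix with a binary cross-entropy loss.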
The graph encoder g_Φ and edge decoder h_Ψ were trained together in a supervised manner to minimize the binary cross-entropy loss between the adjacency matrices of the predicted and ground-truth dependency graphs, which, in turn, were obtained from simulation information (specifically, the y-components of the contact force vectors on each pair of objects).

Figure 4: Our approach performing progressive pick-and-place (with all steps shown) actions based on its multi-level rearrangement plan to achieve (top left) a target arrangement (middle left).
PalhNjBJqv
A Data-efficient Neural ODE Framework for Optimal Control of Soft Manipulators
Mohammadreza Kasaei, School of Informatics, University of Edinburgh, UK, m.kasaei@ed.ac.uk
Keyhan Kouhkiloui Babarahmati, School of Informatics, University of Edinburgh, UK, kkouhkil@ed.ac.uk
Zhibin Li, Department of Computer Science, University College London, UK, alex.li@ucl.ac.uk
Mohsen Khadem, School of Informatics, University of Edinburgh, UK, mohsen.khadem@ed.ac.uk

Abstract: This paper introduces a novel approach for modeling continuous forward kinematic models of soft continuum robots by employing Augmented Neural ODE (ANODE), a cutting-edge family of deep neural network models. To the best of our knowledge, this is the first application of ANODE in modeling soft continuum robots. This formulation introduces auxiliary dimensions, allowing the system's states to evolve in the augmented space, which provides a richer set of dynamics that the model can learn, increasing the flexibility and accuracy of the model. Our methodology achieves exceptional sample efficiency, training the continuous forward kinematic model using only 25 scattered data points. Additionally, we design and implement a fully parallel Model Predictive Path Integral (MPPI)-based controller running on a GPU, which efficiently manages a non-convex objective function. Through a set of experiments, we showed that the proposed framework (ANODE+MPPI) significantly outperforms state-of-the-art learning-based methods such as FNN and RNN in unseen-before scenarios and marginally outperforms them in seen-before scenarios.

Keywords: Soft robots, Non-parametric modelling, Optimal control

1 Introduction
Soft robots are composed of compliant materials such as silicone, rubber, or elastomers, which allows them to conform to surfaces and objects while maintaining a level of physical robustness unavailable to their rigid counterparts. This level of compliance makes them suitable for a broad range of applications, including medical procedures, search and rescue operations, and exploration [1]. However, the design and control of soft continuum robots present significant challenges. This is due to the inherent non-linearities and high DOF required to accurately capture the structural deformations that realize these compliant behaviours, which in turn makes it challenging to control the robot's motion. Several methodologies have been proposed to model soft robot controllers, broadly classified into two categories: model-based and data-driven [2].

Model-based approaches rely on mathematical models to represent the dynamics of the robot and use this model to design controllers. Various methodologies have been proposed in this category, including polynomial curvature fitting [3], reduced-order finite element models [4], and lumped parameter models [5, 6]. A comprehensive review of physics-based models for soft robots can be found in [7]. These models have limitations, as they are based on assumptions that may not hold in all conditions and may only be able to accurately describe the behaviour of the robot under a subset of conditions. Furthermore, these methods can be computationally expensive and may not fully capture the nonlinear behaviour exhibited by the robot. Additionally, physics-based models' error tends to increase together with the model inaccuracies, e.g., fabrication inaccuracies.

7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.
Figure 1: 25 scattered data points are employed within the demonstration space for acquiring knowledge about a continuous forward kinematic model (left).
Subsequently, this trained model serves as the foundation for a fully parallel controller, capable of managing a non-convex objective function, executed on a GPU (right).

Learning-based approaches for soft robot modelling and control utilize collected data to develop models and controllers, either with or without relying on mathematical models. The learning-based models tend to be insensitive to physics-based modelling assumptions and fabrication inaccuracies, especially if training is accomplished on the physical robot arm. These approaches aim to learn kinematic or dynamics models directly [8, 9, 10, 11, 12] or employ model-based/model-free Reinforcement Learning (RL) techniques to learn control policies [13, 14]. While data-driven approaches have the potential to overcome the limitations of model-based methods, they also face inherent drawbacks. First, they require a substantial amount of training data, which can be challenging to acquire for real robots and may compromise their structural integrity, as demonstrated in [15], [16] and [17], where 12000, 7000, and 4096 sample points were utilized. Second, the ability to generalize knowledge is limited, leading to poor performance in new or unseen situations [18]. Lastly, data-driven approaches lack interpretability and physical insight, making it challenging to understand decision-making processes and the underlying physical principles of soft robot behaviour [13].

In this paper, we introduce a novel framework for modeling and controlling soft robots. We address the limitations of existing data-driven methods and propose a solution that is extensively tested on a real soft robot in different scenarios. Our key contributions are as follows:

1. We propose a novel approach to developing continuous forward kinematic models for soft continuum robots by employing Augmented Neural ODE (ANODE) [19], a state-of-the-art family of deep neural network models. To the best of our knowledge, this is the first time that ANODE has been applied to modelling soft continuum robots. Our methodology achieves exceptional sample efficiency, training the continuous forward kinematic model using 25 real robot demonstrations.

2. Utilizing the trained model as a basis, we developed a fully parallel Model Predictive Path Integral (MPPI) controller running on a GPU, capable of efficiently managing a non-convex objective function. This controller harnesses the power of parallel processing to optimize trajectory planning and control for the soft continuum robot. By leveraging the capabilities of the trained model, our controller enables the robot to navigate previously unseen trajectories within the feasible kinematic space. In comparison to state-of-the-art data-driven approaches such as feed-forward neural networks (FNN) and recurrent neural networks (RNN), our framework (FK model and controller combination) demonstrates superior performance in terms of trajectory tracking error while being trained on significantly less data.

This paper is organized as follows: the methodology of the proposed approach is outlined in Section 2. To evaluate the effectiveness of the proposed framework, a series of experiments are conducted in Section 3, and the results are discussed. An ablation and comparison study with existing approaches is presented in Section 4. Finally, the conclusion and future work are summarized in Section 5.
2 Methodology
This section provides a comprehensive overview of the key steps involved in our approach. We start by discussing the process of generating the training dataset, explaining how we collect the necessary data to train our model. Next, we focus on learning the differential kinematics of the robot, highlighting the techniques and methodologies employed in this process. Finally, we describe the development of a controller based on the learned model, explaining how it enables the robot to execute precise and controlled movements.

2.1 Training Dataset
The robot utilized in this study is a multi-backbone robot, as depicted in Figure 1 (left). It consists of a central flexible backbone made of a compression spring and four parallel flexible rods encased around it. By manipulating the rods (pulling/pushing), the shape of the robot can be altered. More details about the robot prototype are presented in the appendix. Modeling and controlling such robots present significant challenges, including the complexity of the coupling between actuation inputs, difficulties in modeling friction between the rods and spacers, particularly when the robot is bent, and the challenge of determining model parameters and the backbone's mechanical characteristics.

In this paper, we intend to utilize a limited dataset to model the behavior of the robot. The dataset was generated by an experienced operator who performed a series of demonstrations using the robot. The operator manipulated the lengths of the rods to move the tip in various directions. The demonstrations were conducted with the rods being pulled/pushed up to 3 mm. The feasible kinematic space and demonstration space are depicted in Figure 1. As shown, the demonstration space is significantly smaller compared to the feasible kinematic space. The robot inputs, u_t ∈ R³, and the corresponding Cartesian coordinates of the robot tip, x_t ∈ R³, were recorded at a rate of 15 Hz to generate the training dataset, i.e., D = {x_t^k, u_t^k}_{k=1}^N. The position of the robot was estimated using an RGB camera, as discussed in Section 3. It is noteworthy that generating this dataset was efficient and took less than 10 minutes, and our dataset contains N = 9100 samples. Only 25 points randomly selected from this dataset will be used to model the robot in the next section, while the remaining data was utilized to train other state-of-the-art machine learning algorithms for comparison.

2.2 Learning Differential Kinematics of the Robot
We assume that the robot's behavior can be modeled by a series of nonlinear differential equations:

ẋ(t) = f(x(t), u(t)),   f: R³ × R³ → R³,   (1)

with the initial conditions x(t₀) = x(0), u(t₀) = u(0). It is assumed that a closed-form expression for the function f does not exist. One can use two consecutive states and an action to train a multilayer perceptron (MLP) ({x_t, u_t} → x_{t+1}) or a recurrent neural network (RNN) such as the nonlinear auto-regressive network with exogenous inputs (NARX). MLPs and NARX models work well within the range of the training data, but they may struggle with extrapolation, especially if the data lies outside the range of the training data. Additionally, MLPs and NARX models, which typically operate in a discrete-time fashion, may produce predictions that exhibit discontinuities between time steps, leading to less smooth or less physically plausible extrapolations [20].
To overcome these limitations, and to develop a continuous and smooth data-efficient neural network to approximate the robot's model, we formulated the problem as an Augmented Neural ODE (ANODE) [19], which can naturally extend predictions beyond the observed data and time span [20]. Indeed, the robot is modeled using stiff differential equations, which are characterized by having solutions with rapidly changing components as well as slowly changing components. Explicit methods like Euler (ResNet) [21] will be unstable unless the step size is taken to be extremely small. There are some tricks to overcome these issues, but they do not fundamentally change the underlying stability properties. The key features of ANODE in comparison to other approaches are data efficiency, capturing complex dynamics, a continuous-time formulation, generalization, and handling of trajectory intersections [19]. In this work, we used the fixed-adams (Adams-Bashforth-Moulton) [22] method, an implicit method that is more stable and accurate for stiff differential equations, as it takes into account not only the current and previous steps but also the future step that we are trying to compute. This allows it to handle the rapidly changing components of the solution in a more robust way. Additionally, to prevent trajectories from intersecting, we expand the learning and solution space of the ODE from R³ × R³ → R³ to R^{3+p} × R³ → R^{3+p}. By concatenating a vector of zeros (0_{p×1}) to each data point, we solve the ODE in this augmented space. This augmentation leads to a smoother learned function f, resulting in simpler flows that can be computed by the ODE solver in fewer steps. Additionally, it allows the ODE flow to lift points into the extra dimensions, effectively preventing trajectory intersections. This continuous-time formulation allows the model to capture the underlying dynamics and make reasonable extrapolations based on the learned ODEs [19]. With this objective in mind, we proceed to discretize the robot model described in (1) and transform it into a boundary value problem:

x⁺ = f_θ(x(t), u(t)),   (2)

based on the following boundary conditions:

x₀ = x(t₀), u₀ = u(t₀), x_k = x(t_k), u_k = u(t_k).   (3)

Considering the neural network f_θ as an approximation of the function f, we can determine the solution of x(t_k) if we have prior knowledge of f_θ:

x(t_k) = x(t_{k−1}) + ∫_{t_{k−1}}^{t_k} f_θ(x(t), u(t)) dt,   (4)

thus, conventional numerical ODE solvers like the Euler, Runge-Kutta, or fixed-adams algorithms can be utilized to estimate the value of x(t_k):

x̂(t_k) = ODESolver(f_θ, x(t_{k−1}), (t_{k−1}, t_k)).   (5)

However, in cases where f_θ is imprecise or remains unknown, it becomes possible to assess the error in estimating the boundary values:

l = ∥x̂(t_k) − x(t_k)∥.   (6)

To update this model, f_θ, we adopt a random selection approach, where only 25 points are chosen from the generated dataset. At each training step, a single point from the dataset is selected. The loss function is estimated using equations (2-6). This error is utilized in a supervised learning manner, and the model is trained using the adjoint sensitivity method [23] to ensure memory efficiency. It is important to note that, based on equation (3), the control inputs remain constant, denoted as u₀ = u_k, during each training step. To enhance learning efficiency, we have disregarded the dynamics of the control inputs (i.e., u̇ = 0) and focused solely on learning the input-output dynamics. Details of training and the network architecture are presented in the appendix.
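A minimal sketch of one such training step, assuming the torchdiffeq package, is shown below. Network sizes follow Table 4 in the appendix (hidden layers 64/32/16, ELU, augmentation size p = 64), but the training-loop details and the step-size choice are illustrative, not the authors' released code.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint_adjoint as odeint

    P = 64  # augmented (auxiliary) dimensions, per Table 4

    class ODEFunc(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + P + 3, 64), nn.ELU(),
                nn.Linear(64, 32), nn.ELU(),
                nn.Linear(32, 16), nn.ELU(),
                nn.Linear(16, 3 + P),
            )
            self.u = torch.zeros(3)  # control input, held constant within a step

        def forward(self, t, x_aug):
            # f_theta([x; a], u); time enters only through the solver
            return self.net(torch.cat([x_aug, self.u], dim=-1))

    func = ODEFunc()
    opt = torch.optim.Adam(func.parameters(), lr=1e-3)

    def train_step(x0, xk, u, t0, tk):
        """One supervised step: integrate the augmented state from t0 to tk
        and penalize the boundary-value error of Eq. (6)."""
        func.u = u
        x0_aug = torch.cat([x0, torch.zeros(P)])  # append zeros: the augmentation
        t = torch.tensor([t0, tk])
        x_hat = odeint(func, x0_aug, t, method='fixed_adams',
                       options={'step_size': (tk - t0) / 10})[-1]
        loss = torch.norm(x_hat[:3] - xk)         # compare only the 3D tip position
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

Using odeint_adjoint rather than plain odeint gives the memory-efficient adjoint backward pass mentioned above.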
2.3 Controller Architecture
In this section, we present a robot control methodology that utilizes the trained model, f_θ, to steer the robot towards arbitrary trajectories in accordance with non-convex cost objectives. The proposed approach is a derivative-free, sampling-based Model Predictive Control (MPC) technique, known as the Model Predictive Path Integral (MPPI) controller, which is capable of handling nonlinear dynamics and non-convex cost objectives [24]. First, a Jacobian matrix, denoted as J, is defined through the utilization of the trained model, f_θ, which maps velocities in the input space to the corresponding velocity of the robot's end-effector:

ẋ = J u̇,   (7)

where J is a 3×3 matrix, and x ∈ R³ and u ∈ R³ represent the robot's end-effector Cartesian coordinates and the input space, respectively. In this work, we employ a numerical estimate of the Jacobian using the finite difference method and batch sampling from the trained model. Now, using (7), a discrete-time stochastic dynamical system can be obtained:

x_{t+1} = x_t + J(u̇_t + δu̇_t),   (8)

where δu̇_t is the control input update, represented by a zero-mean Gaussian noise vector with a variance of Σ_u (δu̇_t ∼ N(0, Σ_u)). Using (8), the control problem can be formulated as a stochastic optimal control problem. Given a finite time horizon, t ∈ {0, 1, 2, ..., T−1}, the objective of the controller is to determine an optimal control sequence, u = (u₀, u₁, ..., u_{T−1}) ∈ R^{m×T}, that minimizes the expected cost-to-go, S(τ), across all possible system trajectories, τ = {x₀, u₀, x₁, ..., u_{T−1}, x_T}, with respect to (8), by taking into account the state-dependent cost function, S(τ) ∈ R:

J = min_u E[ φ(x_T) + Σ_{t=0}^{T−1} ( q(x_t) + (1/2) u_tᵀ R u_t ) ],   (9)

where φ(x_T) is the terminal cost, q(x_t) represents the running cost, and R ∈ R^{m×m} denotes a positive-definite control weight matrix. In order to solve this optimization problem, we adopt an iterative update law, as derived in [24], for the implementation of the MPPI algorithm. This algorithm iteratively updates the control sequence over a predefined time horizon, utilizing a successive approximation approach:

u_t ← u_t + [ Σ_{k=1}^{K} exp(−(1/λ) S̃(τ_{t,k})) δu_{t,k} ] / [ Σ_{k=1}^{K} exp(−(1/λ) S̃(τ_{t,k})) ],   (10)

where K represents the number of samples, S̃(τ_{t,k}) = φ(x_T) + Σ_{t=0}^{T−1} q̃(x_t, u_t, δu_t) is the cost-to-go of the k-th sample, and λ ∈ R⁺ is a hyperparameter called the inverse temperature. The cost-to-go function serves as a critical component in guiding the robot's decision-making process, incorporating multiple objectives such as tracking a reference trajectory, obstacle avoidance, and considering affordance. By evaluating the future costs associated with different control inputs, the cost-to-go function allows the robot to anticipate the consequences of its actions. It considers the desired trajectory as a reference, aiming to minimize the deviation from this path while avoiding obstacles. Furthermore, the cost-to-go function can take into account the concept of affordance, which encodes the relationships between actions, objects, and their resulting effects [25]. The architecture of the proposed controller is depicted in Figure 1 (right).
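A minimal NumPy sketch of the update rule in Eq. (10) follows: roll out K noisy control sequences through the learned model, score them, and update the nominal sequence with exponentially weighted noise. The dynamics and cost callables are placeholders for the f_θ-based model and the cost terms described above; the defaults (K = 500, λ = 1, σ = 0.001) follow the values reported in the appendix.

    import numpy as np

    def mppi_update(u, x0, dynamics, running_cost, terminal_cost,
                    K=500, lam=1.0, sigma=0.001):
        T, m = u.shape
        noise = np.random.normal(0.0, sigma, size=(K, T, m))  # delta-u samples
        S = np.zeros(K)                                       # cost-to-go per sample
        for k in range(K):
            x = x0.copy()
            for t in range(T):
                x = dynamics(x, u[t] + noise[k, t])           # Eq. (8) rollout
                S[k] += running_cost(x, u[t], noise[k, t])
            S[k] += terminal_cost(x)
        w = np.exp(-(S - S.min()) / lam)  # subtracting the min is a standard numerical trick
        w /= w.sum()
        return u + np.einsum('k,ktm->tm', w, noise)           # Eq. (10)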
3 Experiments
Here, a series of experiments were conducted to assess the effectiveness of the proposed approach across various scenarios. Figure 2 shows the experimental setup employed in our study, comprising a flexible and expandable soft manipulator, a Logitech RGB camera positioned within the robot's workspace, and a user interface for recording, initiating, and terminating the experiments. To enable the camera to detect the position of the robot's tip, an ArUco marker [26, 27] is affixed to the tip of the robot, providing feedback to the control loop. Details of the robot, the hyperparameters, the controller configuration, and the network structure can be found in the appendix.

Figure 2: Experiment setup (motors and driving interface, racks, extensible soft manipulator, marker, solid gripper, box, and tubes).

3.1 Experiment Design
A series of experiments have been conducted to assess the effectiveness of the proposed framework:

• Static Target Tracking: ten different targets are pre-defined within the robot's workspace. The objective is to reach these targets with minimal positional error.

• Trajectory Tracking: the robot is set to track various trajectories in both 2D and 3D space, which include: i) a square on the X-Y plane with sides measuring 0.06 m; ii) a circular shape in the X-Y plane with a radius measuring 0.03 m; iii) a triangle with all sides measuring 0.06 m; iv) an eight-shaped curve defined by the equations x = a cos(2tπ/T) and y = (b/2) sin(4tπ/T), where a = 0.03, b = 0.05, T = 100 seconds, and t ranges from 0 to 120 seconds (see the code sketch below); v) a helical trajectory executed along the Z-axis, characterized by a radius of 0.03 m and a pitch of 0.02 m. To assess the tracking's repeatability and accuracy, three trials are conducted for each trajectory.

• Obstacle Avoidance: in this task, the robot is set to track a helix trajectory characterized by a radius of 0.03 m and a pitch of 0.02 m, which lies near the border of its feasible kinematic space. The objective of this task is to show how safely the robot tracks the desired trajectory while avoiding obstacles without becoming unstable.

• Box Opening and Test Tube Manipulation: to showcase a potential application of the robot and the controller's adeptness at incorporating affordance in object manipulation during environmental interactions, we have designed a demanding teleoperation task consisting of two sub-tasks: opening the box, and picking and placing the test tubes into the rack. In this task, the operator's objective is to maneuver the robot to open the lid of a wooden box, taking into consideration that the robot lacks the strength to counteract the force of gravity and fully extend the lid. This challenging scenario serves as a compelling demonstration of the controller's ability to effectively utilize affordance while navigating the complexities of object manipulation in real-world environments.

Figure 3: Trajectory tracking results: the robot is set to track diverse trajectories in both two-dimensional (2D) and three-dimensional (3D) space. The solid-red and dashed-black lines represent the actual and desired trajectories, respectively.
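For reference, a small helper generating the eight-shaped curve in item iv), sampled at the paper's 15 Hz feedback rate (the sampling rate choice here is illustrative):

    import numpy as np

    a, b, T = 0.03, 0.05, 100.0
    t = np.arange(0.0, 120.0, 1.0 / 15.0)       # 0 to 120 s at 15 Hz
    x = a * np.cos(2.0 * t * np.pi / T)
    y = (b / 2.0) * np.sin(4.0 * t * np.pi / T)
    reference = np.stack([x, y], axis=1)        # (num_steps, 2) waypoints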
3.2 Results and Discussions
In the context of static target tracking, the robot's performance is assessed based on three metrics: steady-state error (SSE), standard deviation (σ), and settling time (ST). The obtained results are showcased in Table 1. It is noteworthy that the position error remains below the threshold of 1 mm. Additionally, the consistently low standard deviation reveals a stable error pattern observed throughout all experiments.

Table 1: Static Target Tracking Results
Axis | SSE (mm) | σ (mm) | ST (sec)
x̃   | 0.82     | 0.68   | 5.72
ỹ   | 0.71     | 0.58   | 4.21
z̃   | 0.37     | 0.32   | 3.82

In the trajectory tracking experiments, the evaluation of the robot's performance involves the calculation and presentation of the root mean squared error (RMSE) and standard deviation (σ) of the error across five distinct trials. These metrics, along with a visual depiction in Figure 3, provide a comparative analysis between the robot's actual tip trajectory and the desired trajectory. Notably, the robot successfully tracks various shapes, including square, circle, triangle, eight, and helix, with a maximum RMSE of 3.2×10⁻³ m. Results are summarised in Table 2.

Table 2: Trajectory Tracking Results
Trajectory | RMSE x̃ (mm) | RMSE ỹ (mm) | RMSE z̃ (mm) | σ x̃ (mm) | σ ỹ (mm) | σ z̃ (mm)
Square     | 2.40         | 2.60         | 2.50         | 1.40      | 1.50      | 1.30
Circle     | 2.80         | 3.20         | 3.10         | 0.51      | 1.50      | 1.50
Triangle   | 1.10         | 1.10         | 1.60         | 0.90      | 0.90      | 1.50
Eight      | 2.00         | 2.10         | 2.50         | 0.50      | 0.50      | 1.50
Helix      | 1.10         | 1.10         | 1.80         | 0.52      | 1.50      | 0.61

Representative results depicted in Figure 4 showcase the effectiveness of the MPPI algorithm in achieving obstacle avoidance while simultaneously maintaining stable tracking of the desired trajectories. In our implementation, the running cost is the sum of individual cost terms, including a term for tracking a reference trajectory, a term for avoiding obstacles, and a term for penalizing jerky motions (details are presented in the appendix). The running cost plays a vital role in guiding the robot's behavior by influencing the control actions to minimize deviations and ensure a safe and efficient path. By incorporating the position of the obstacles into the running cost, the robot can effectively evaluate the consequences of its actions and make decisions that prioritize obstacle avoidance and trajectory tracking. This approach enables the robot to dynamically adjust its control inputs based on real-time feedback, resulting in reliable and robust performance even in complex scenarios.

Figure 4: Representative results of obstacle avoidance experiments. Blue dots represent the obstacles; solid-red and dashed-black lines are the actual and reference trajectories, respectively.

Figure 5: This set of snapshots illustrates the box opening and test tube manipulation task. In this particular task, the operator's objective is to guide the robot in opening the lid of the box and subsequently picking and placing the tubes into the designated racks.

In the Box Opening and Test Tube Manipulation task, the robot operates in a unidirectional teleoperation mode, where the designed controller enables it to track targets specified by an operator through keyboard input. The complexity of this task arises primarily from the absence of direct force feedback, the disparity in kinematics between the control interface and the robot, the delayed response time, and the lag between the operator's commands and the robot's actions. To address these challenges, we incorporate a set of affordance terms into the running cost of the controller (details are provided in the appendix). These affordance terms can be selectively activated or deactivated by the operator to limit the motion of the robot along one direction deemed suitable for accomplishing the task (i.e., x, y, or z), depending on the task phase. The versatility of MPPI, which can handle non-convex running costs, allows us to effectively utilize these affordance terms for a more intuitive and context-aware interaction between the operator, the robot, and the environment, enabling more effective and efficient teleoperation. A video showing the results is available online at https://youtu.be/6tYS-5tkoQg, and Figure 5 shows a set of snapshots of this task.

4 Ablation and Comparison Study
In this section, we conduct an ablation study to thoroughly compare the performance of the proposed ANODE-based forward kinematics model with feedforward neural network (FNN) and recurrent neural network (RNN) based models in four distinct scenarios. We assess the performance of the models in open-loop and closed-loop trajectory tracking for both unseen-before and seen-before scenarios, providing a comprehensive analysis of their strengths and limitations. In the open-loop scenarios, we opted to exclude the MPPI controller and instead utilized a simplified approach. Specifically, we employed the equation u̇ = J⁺ẋ, where J⁺ represents the pseudo-inverse of the Jacobian and ẋ is the reference trajectory. By implementing this modified strategy, we aimed to observe the system's response without the influence of the MPPI controller, focusing solely on the FK models.
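A minimal sketch of this open-loop baseline law is given below; J is assumed to come from the finite-difference Jacobian estimate described in Section 2.3, and the function name is illustrative.

    import numpy as np

    def open_loop_step(x_dot_ref: np.ndarray, J: np.ndarray, dt: float) -> np.ndarray:
        u_dot = np.linalg.pinv(J) @ x_dot_ref  # u_dot = J^+ x_dot
        return u_dot * dt                      # input increment for one control tick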
In the seen-before scenario, the robot was tasked with tracking an eight-shaped trajectory in the X-Y plane within the demonstration space. Conversely, in the unseen-before scenario, the robot was challenged to track a 3D eight-shaped trajectory spanning the entire feasible kinematic space. These distinct scenarios were designed to assess the robot's ability to adapt and generalize its knowledge across different dimensions and spatial configurations. By evaluating its performance on both seen and unseen trajectories, we gain valuable insights into the framework's capacity to handle novel situations and extrapolate its learned behaviors to unfamiliar scenarios.

Furthermore, by comparing ANODE's performance with the traditional MLP and RNN models, we aim to highlight the unique advantages of the ANODE model in capturing the underlying dynamics and temporal dependencies inherent in soft robot forward kinematics prediction. The FNN is constructed with multilayer perceptrons (MLP) to map the input u_t to the output x_t. On the other hand, the RNN adopts a NARX architecture, which takes the current state (x_t, u_t) as input and predicts the subsequent state x_{t+1}. Both the FNN and RNN models are trained using the dataset D generated in accordance with the methodology described in Section 2, employing mean square error as their respective loss functions.

To ensure a fair and equitable comparison, we maintained consistent network sizes across all methods. We trained three distinct FK models and integrated them within the identical control loop framework depicted in Figure 1 (right). To evaluate and contrast their respective performances, we computed the RMSE and standard deviation (σ) of errors across five trials, summarizing the outcomes in Table 3. Representative outcomes are visually depicted in Figure 6, providing further insights into the trajectory-tracking capabilities of each model. As depicted in this figure, all the models demonstrated proficiency in successfully accomplishing the seen-before scenario. Notably, among the tested methods, ANODE (both open- and closed-loop versions) showcased superior performance, outperforming the other models in accurately accomplishing the task. In the unseen-before scenario, ANODE stood out as the only model capable of effectively generalizing its knowledge and successfully accomplishing the task. On the other hand, the MLP and RNN models encountered difficulties, leading to the robot becoming uncontrollable. The results showed that the proposed method outperformed alternative approaches significantly in unseen-before scenarios and performed slightly better in seen-before scenarios.

Figure 6: Results of the ablation study: the dashed-black lines, the solid-red and solid-blue lines correspond to the reference, closed-loop, and open-loop actual trajectories, respectively.

Table 3: Ablation and comparison results of the trajectory tracking scenarios; the -O/-C tokens refer to open-loop and closed-loop results, respectively.
Scenario | Model          | RMSE x̃ (mm) | RMSE ỹ (mm) | RMSE z̃ (mm) | σ x̃ (mm) | σ ỹ (mm) | σ z̃ (mm)
seen     | MLP-C          | 0.531        | 0.548        | 0.570        | 0.531     | 0.548     | 0.869
seen     | MLP-O          | 0.572        | 0.563        | 0.702        | 0.557     | 0.543     | 0.494
seen     | RNN-C          | 0.542        | 0.537        | 5.100        | 0.541     | 0.537     | 0.498
seen     | RNN-O          | 0.576        | 0.546        | 5.000        | 0.544     | 0.540     | 0.419
seen     | ANODE-C        | 0.105        | 0.127        | 0.116        | 0.105     | 0.127     | 0.116
seen     | ANODE-O        | 0.198        | 0.177        | 0.122        | 0.198     | 0.177     | 0.121
unseen   | MLP/RNN-(C/O)  | –            | –            | –            | –         | –         | –
unseen   | ANODE-C        | 0.256        | 0.164        | 0.157        | 0.256     | 0.164     | 0.157
unseen   | ANODE-O        | 5.600        | 3.100        | 4.900        | 4.700     | 1.800     | 2.900
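For concreteness, a minimal sketch of the NARX-style baseline predictor described above: an MLP mapping the current state-input pair (x_t, u_t) to the next tip position x_{t+1}, trained with a mean-squared-error loss. The hidden size is illustrative; the paper only states that network sizes were kept consistent across methods.

    import torch
    import torch.nn as nn

    class NarxPredictor(nn.Module):
        def __init__(self, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(6, hidden), nn.ELU(),
                                     nn.Linear(hidden, 3))

        def forward(self, x_t, u_t):
            return self.net(torch.cat([x_t, u_t], dim=-1))  # predicts x_{t+1}

    model, loss_fn = NarxPredictor(), nn.MSELoss()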
5 Conclusion and Limitations
This paper introduced a new method for modelling the continuous forward kinematics of soft continuum robots using ANODE. The proposed method required only 25 scattered data points. Additionally, we developed a parallel MPPI-based controller running on a GPU, which effectively handles a non-convex objective function. This design enhances the adaptability and robustness of the learned model, enabling accurate prediction and control of soft continuum robot motion in various new scenarios. Through extensive experimentation, ablation, and comparison studies, our proposed framework (ANODE+MPPI) exhibits superior performance over learning-based approaches like FNN and RNN in unseen-before scenarios. It also slightly outperforms them in seen-before settings. While these results are promising, there are some challenges and limitations. The soft robot manipulator being studied has a limited input space comprising three variables. When applied to more complex robotic systems characterized by higher dimensions, Neural ODEs encounter certain limitations. Specifically, the computational costs associated with training Neural ODEs become more challenging. This is attributed to the time-consuming nature of the forward pass, which necessitates the numerical integration of an ODE. Additionally, the proposed methodology assumes the presence of continuous dynamics within the robot, rendering it less suitable for effectively modelling soft robots that exhibit sudden changes, instabilities, or discontinuities in their behaviour.

6 Acknowledgement
This work is supported by the EU H2020 project Enhancing Healthcare with Assistive Robotic Mobile Manipulation (HARMONY, 101017008) and the Medical Research Council [MR/T023252/1].

References
[1] C. Lee, M. Kim, Y. J. Kim, N. Hong, S. Ryu, H. J. Kim, and S. Kim. Soft robot review. International Journal of Control, Automation and Systems, 15(1):3–15, Feb 2017.
[2] G. Mengaldo, F. Renda, S. L. Brunton, M. Bächer, M. Calisti, C. Duriez, G. S. Chirikjian, and C. Laschi. A concise guide to modelling the physics of embodied intelligence in soft robotics. Nature Reviews Physics, 4(9):595–610, Sept. 2022.
[3] C. Della Santina and D. Rus. Control oriented modeling of soft robots: the polynomial curvature case. IEEE Robotics and Automation Letters, 5(2):290–298, 2019.
[4] R. K. Katzschmann, M. Thieffry, O. Goury, A. Kruszewski, T.-M. Guerra, C. Duriez, and D. Rus. Dynamically closed-loop controlled soft robotic arm using a reduced order finite element model with state observer. In 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), pages 717–724. IEEE, 2019.
[5] C. Della Santina, R. K. Katzschmann, A. Bicchi, and D. Rus. Dynamic control of soft robots interacting with the environment. In 2018 IEEE International Conference on Soft Robotics (RoboSoft), pages 46–53. IEEE, 2018.
[6] C. Della Santina, R. K. Katzschmann, A. Bicchi, and D. Rus. Model-based dynamic feedback control of a planar soft robot: trajectory tracking and interaction with the environment. The International Journal of Robotics Research, 39(4):490–513, 2020.
[7] C. Armanini, C. Messer, A. T. Mathew, F. Boyer, C. Duriez, and F. Renda. Soft robots modeling: a literature unwinding, 2021.
[8] M. Kasaei, K. K. Babarahmati, Z. Li, and M. Khadem. Data-efficient non-parametric modelling and control of an extensible soft manipulator. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 2641–2647. IEEE, 2023.
[9] B. Thamo, F. Alambeigi, K. Dhaliwal, and M. Khadem. A hybrid dual jacobian approach for autonomous control of concentric tube robots in unknown constrained environments. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2809–2815, 2021. doi:10.1109/IROS51168.2021.9636085.
[10] J. M. Bern, Y. Schnider, P. Banzet, N. Kumar, and S. Coros. Soft robot control with a learned differentiable model. In 2020 3rd IEEE International Conference on Soft Robotics (RoboSoft), pages 417–423. IEEE, 2020.
[11] T. G. Thuruthel, E. Falotico, F. Renda, and C. Laschi. Learning dynamic models for open loop predictive control of soft robotic manipulators. Bioinspiration & Biomimetics, 12(6):066003, 2017.
[12] D. Bruder, C. D. Remy, and R. Vasudevan. Nonlinear system identification of soft robot dynamics using koopman operator theory. In 2019 International Conference on Robotics and Automation (ICRA), pages 6244–6250. IEEE, 2019.
[13] R. Morimoto, S. Nishikawa, R. Niiyama, and Y. Kuniyoshi. Model-free reinforcement learning with ensemble for a soft continuum robot arm. In 2021 IEEE 4th International Conference on Soft Robotics (RoboSoft), pages 141–148. IEEE, 2021.
[14] D. Büchler, S. Guist, R. Calandra, V. Berenz, B. Schölkopf, and J. Peters. Learning to play table tennis from scratch using muscular robots. IEEE Transactions on Robotics, 2022.
[15] T. G. Thuruthel, E. Falotico, F. Renda, and C. Laschi. Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators. IEEE Transactions on Robotics, 35(1):124–134, 2018.
[16] T. George Thuruthel, F. Renda, and F. Iida. First-order dynamic modeling and control of soft robots. Frontiers in Robotics and AI, 7:95, 2020.
[17] G. Fang, Y. Tian, Z.-X. Yang, J. M. Geraedts, and C. C. Wang. Efficient jacobian-based inverse kinematics with sim-to-real transfer of soft robots by learning. IEEE/ASME Transactions on Mechatronics, 2022.
[18] W. Xu, J. Chen, H. Y. Lau, and H. Ren. Data-driven methods towards learning the highly nonlinear inverse kinematics of tendon-driven surgical manipulators. The International Journal of Medical Robotics and Computer Assisted Surgery, 13(3):e1774, 2017.
[19] E. Dupont, A. Doucet, and Y. W. Teh. Augmented neural odes. Advances in Neural Information Processing Systems, 32, 2019.
[20] R. T. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31, 2018.
[21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[22] F. Bashforth and J. C. Adams. An attempt to test the theories of capillary action by comparing the theoretical and measured forms of drops of fluid. University Press, 1883.
[23] L. S. Pontryagin. Mathematical Theory of Optimal Processes. CRC Press, 1987.
[24] G. Williams, A. Aldrich, and E. A. Theodorou. Model predictive path integral control: From theory to parallel computation. Journal of Guidance, Control, and Dynamics, 40(2):344–357, 2017.
[25] L. Montesano, M. Lopes, A. Bernardino, and J. Santos-Victor. Learning object affordances: from sensory–motor coordination to imitation. IEEE Transactions on Robotics, 24(1):15–26, 2008.
[26] S. Garrido-Jurado, R. Muñoz-Salinas, F. Madrid-Cuevas, and M. Marín-Jiménez. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition, 47(6):2280–2292, 2014. doi:10.1016/j.patcog.2014.01.005.
[27] S. Garrido-Jurado, R. Muñoz-Salinas, F. Madrid-Cuevas, and R. Medina-Carnicer. Generation of fiducial marker dictionaries using mixed integer linear programming. Pattern Recognition, 51:481–491, 2016. doi:10.1016/j.patcog.2015.09.023.
[28] S. Antman. Nonlinear Problems of Elasticity; 2nd ed. Springer, Dordrecht, 2005. doi:10.1007/0-387-27649-1. URL https://cds.cern.ch/record/1250280.
Appendix
1 Robot Prototype
Figure 7: Prototype of the flexible robotic arm, composed of a reinforced multi-backbone robot. The robot is connected to four brushless DC motors using lead screws. An ArUco marker [26, 27] is placed on the robot tip, and a camera is used to track the marker's position.

The robot, as depicted in Figure 7, consists of a flexible backbone rigidly affixed to spacers, accompanied by four rods fixed at the end spacer and passing through the remaining spacers with sufficient clearance, forming the primary body of the robot.

To drive the robot, four brushless DC motors from Maxon Motors, equipped with quadrature encoders and 150:1 reduction gearheads, are utilized. Precise motor position control is achieved through four PID position controller modules (EPOS4 Compact 50/5 CAN), which receive encoder feedback and communicate with a PC using the CAN protocol to establish and retrieve controller set-points and configurations. Lead screws, connected to braided tubes via 3D-printed connectors, are attached to the motors to convert motor power into tube-pulling and pushing actions. A schematic of the robot is shown in Figure 8.

Figure 8: Schematic of the robot (cross section, main backbone, and braided tubes).

2 Network Architecture and Training
Table 4 presents a summary of the hyperparameters and network structure. It should be noted that we employed an early-stopping technique to prevent overfitting when training the model. With early stopping, the model's training is halted before it starts to overfit the training data, even if all iterations or epochs have not been completed. This allows the model to avoid memorizing the training data excessively and improves its ability to generalize to new, unseen data.

Table 4: Hyperparameters and network structure.
Hyperparameter                    | Value
No. of hidden neurons (θ)         | 112 (64, 32, 16)
Augmented vector size (p)         | 64
No. of hidden layers              | 3
Activation functions              | ELU
Learning rate                     | 0.001
Type of ODE solver                | fixed-adams
Absolute tolerance for ODE solver | 1e-9
Relative tolerance for ODE solver | 1e-7
Number of iterations              | 9000

3 Controller Configuration
This section provides the details of the controller configuration, including its hyperparameters, running cost, and terminal cost functions. The dynamics of the controlled system are captured by the trained FK model (ANODE), while the running cost and terminal state cost are defined as follows:

• Running cost: our running cost function is composed of four costs and defined as follows:
cost_tracking = w_tracking · ∥x − x_reference∥²
cost_obstacles = w_obstacle · ((d1 < 0.01) + (d2 < 0.01))
cost_jerk = w_jerk · ∥u − u_previous∥²
cost_affordance = w_affordance · affordance_measure
running_cost = cost_tracking + cost_obstacles + cost_jerk + cost_affordance
where x represents the current state of the system, x_reference is the corresponding state in the reference trajectory, u denotes the current control input, and u_previous represents the previous control input. The weights w_tracking, w_obstacle, and w_jerk control the importance of each term in the overall cost function, and w_affordance weights a suitable metric or measure that quantifies the affordance for the given task or goal. The first term penalizes deviation from the reference trajectory. These deviations are weighted by a factor of 200, encouraging the system to closely follow the desired trajectory. The second term is a penalty term that considers the distance between the current state and two obstacle locations, denoted as d1 and d2. If the distance to either obstacle is less than 0.01, a high penalty of 100,000 is added. This incentivizes the system to avoid approaching the obstacles too closely.
3 Controller Configuration

This section provides the details of the controller configuration, including its hyperparameters and its running and terminal cost functions. The dynamics of the controlled system are captured by the trained FK model (ANODE), while the running and terminal costs are defined as follows:

• Running cost: our running cost function is composed of four terms:

cost_tracking = w_tracking · ‖x − x_reference‖²
cost_obstacles = w_obstacle · ((d1 < 0.01) + (d2 < 0.01))
cost_jerk = w_jerk · ‖u − u_previous‖²
cost_affordance = w_affordance · affordance_measure
running_cost = cost_tracking + cost_obstacles + cost_jerk + cost_affordance

where x represents the current state of the system, x_reference is the corresponding state in the reference trajectory, u denotes the current control input, and u_previous represents the previous control input. The weights w_tracking, w_obstacle, w_jerk, and w_affordance control the importance of each term; w_affordance scales a metric that quantifies the affordance for the given task or goal. The first term penalizes deviation from the reference trajectory, weighted by a factor of 200 to encourage the system to closely follow the desired trajectory. The second term penalizes proximity to two obstacle locations, whose distances from the current state are denoted d1 and d2: if the distance to either obstacle falls below 0.01, a high penalty of 100,000 is added, incentivizing the system not to approach the obstacles too closely. The third term discourages jerky and abrupt movements by penalizing high rates of change in the control inputs; in our implementation, w_jerk is set to 0.1.

• Terminal cost: our terminal cost is defined as terminal_cost = w_terminal · ‖x − x_goal‖², where w_terminal is the weighting factor that controls the importance of the terminal cost.

The λ parameter was set to 1 to balance the importance between the running cost and the terminal state cost. The control inputs were constrained within the range defined by u_min = [−0.01, −0.01, −0.01] and u_max = [0.01, 0.01, 0.01]. Gaussian noise with standard deviation Σu = 0.001 · torch.eye(3) was added to the control samples for exploration. The MPPI optimization process generated 500 control samples per iteration, with a prediction horizon of 10 time steps. These parameter values were chosen to achieve effective control performance and can be fine-tuned for specific application requirements.
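For concreteness, the cost terms and sampling settings above can be written as batched PyTorch functions usable inside an MPPI rollout loop. This is a sketch: the stated weights (tracking 200, obstacle 100,000, jerk 0.1) and MPPI settings (500 samples, horizon 10, λ = 1, noise std 0.001, limits ±0.01) come from the text, while w_term, the function names, and the nominal control are our assumptions.

```python
# Sketch of the MPPI running/terminal costs and sampling settings described above.
import torch

w_track, w_obst, w_jerk, w_term = 200.0, 1e5, 0.1, 10.0  # w_term is our assumption

def running_cost(x, x_ref, u, u_prev, obst1, obst2, affordance_measure=0.0, w_aff=1.0):
    c_track = w_track * ((x - x_ref) ** 2).sum(-1)          # reference tracking
    d1 = torch.norm(x - obst1, dim=-1)
    d2 = torch.norm(x - obst2, dim=-1)
    c_obst = w_obst * ((d1 < 0.01).float() + (d2 < 0.01).float())  # obstacle penalty
    c_jerk = w_jerk * ((u - u_prev) ** 2).sum(-1)           # discourage abrupt controls
    return c_track + c_obst + c_jerk + w_aff * affordance_measure

def terminal_cost(x, x_goal):
    return w_term * ((x - x_goal) ** 2).sum(-1)

# MPPI sampling settings from the text: K = 500 samples, horizon T = 10, λ = 1,
# exploration noise with std 0.001 per dimension, and control limits ±0.01.
K, T, lam = 500, 10, 1.0
u_nominal = torch.zeros(T, 3)
noise = 0.001 * torch.randn(K, T, 3)
u_samples = (u_nominal + noise).clamp(-0.01, 0.01)
```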
4 Affordance

In the context of robotics, an affordance is a relationship between an actor (i.e., a robot), an action performed by the actor, an object on which this action is performed, and the observed effect [25]. Since there is no unified formalization of affordances in robotics, the general idea of affordance theory can be used to encode a mapping between objects, agents, and the actions they can take on each other. In our implementation, we incorporate a set of affordance terms (penalties for violating motion restrictions) into the running cost of the controller; these terms can be selectively activated or deactivated by the operator, depending on the task phase. The versatility of MPPI, which can handle non-convex running costs, allows us to effectively utilize these affordance terms for a more intuitive and context-aware interaction between the operator, the robot, and the environment, enabling more effective and efficient teleoperation. By adding the affordance measure to the running cost, we give more weight to actions that align with the desired affordance.

5 Modeling the Entire Body of the Robot

One can model the entire body of a soft robot as a continuous 3D curve. To this end, the configuration of the main backbone can be defined using a unique set of 3D centroids, r(s, t) : [0, l] × [0, ∞) → R³, and a family of orthogonal transformations, R(s, t) : [0, l] × [0, ∞) → SO(3), where l denotes the length of the robot. The shape of the main backbone is given by

r′(s, t) = R(s, t) e₃,   R′(s, t) = R(s, t) [u(t)]×,   (11)

where u(t) = [ux(t), uy(t), 0]ᵀ is the curvature vector of the deformed backbone, [·]× is the isomorphism between a vector in R³ and its skew-symmetric cross-product matrix, and e₃ = [0, 0, 1]ᵀ is the unit vector aligned with the z-axis of the global coordinate frame. This can be formulated as an ANODE problem, and the robot end-effector's position is obtained as the Cartesian coordinates of the robot's tip:

x(t) = r(l(t), t) = ∫₀^{uz(t)} r′(s, t) ds.   (12)

Figure 9: Simulation setup: the simulated robot is tracking a helical trajectory.

The resulting system has 14 states, including a 3D position, a rotation matrix, and two inputs. To validate this formulation, we need a dataset on which to train a new ANODE and test its performance. To this end, we developed a simulated version of our robot in the PyBullet simulator, assuming the robot can be modeled using Cosserat-rod theory [28]. The code is available online at https://github.com/MohammadKasaei/SoftRobotSimulator. Figure 9 shows a set of snapshots of the simulated robot while performing a trajectory-tracking task. This simulation enabled the dynamic generation of shape-configuration data batches: we produced 100 random shape configurations by sampling ux, uy from uniform distributions U(−0.01, 0.01) and setting uz using torch.linspace(0., 0.05, 100). For testing, we expanded the input boundaries to incorporate unseen configurations, sampling ux, uy from U(−0.015, 0.015) and letting uz span torch.linspace(0., 0.07, 480). For performance assessment, we executed 50 tests, predicting a span of 7 cm divided into 480 steps. The resulting average mean squared error (MSE) was 0.00001494, which demonstrates ANODE's ability to approximate the shape of a soft robot. Two representative results are shown in Figure 10.

Figure 10: Two representative results of shape reconstruction (predicted vs. true x, y, and z positions along the 7 cm backbone).

6 Neural ODE vs. Augmented Neural ODE

Neural ODEs and ANODEs belong to the same family of models: both use differential equations to model the change of a system over time. The difference lies in how they handle the evolution of the system's state. A Neural ODE (NODE) lets the state evolve in the original state space, while an ANODE introduces auxiliary dimensions, allowing the state to evolve in this augmented space. To evaluate their performance in our case, we began by evaluating the NODE's capability, and observed a slight decrease in performance, particularly in open-loop scenarios with previously unseen conditions. Subsequently, to underscore ANODE's potential in comparison with its counterparts (i.e., MLP, RNN, ResNet), we ventured into a more intricate task: modeling the robot's entire body, as discussed in the previous section. After generating the dataset and training the models, we executed 50 tests for performance assessment, predicting a span of 7 cm divided into 480 steps. These predictions were based on control inputs drawn randomly from a uniform distribution U(−0.015, 0.015), which also covered unseen configurations. The resulting average MSE for ANODE stood at 0.00001494, distinctly lower than NODE's 0.00172892. A visual comparison of two exemplary outcomes is depicted in Figure 11. Evidently, ANODE outperformed NODE in this setup, reaffirming its superior stability and generalization capabilities.

Figure 11: Two representative results of shape reconstruction using ANODE and NODE.
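The augmentation that separates the two model families is small enough to show in code. A minimal sketch, assuming a torchdiffeq-style dynamics function whose output dimension matches its state input; the helper name and dimensions are illustrative.

```python
# NODE vs. ANODE in one function: the only difference is the augmentation.
import torch
from torchdiffeq import odeint

def rollout(func, x0, t, aug_dim=0):
    # Zero-pad the initial state with aug_dim auxiliary coordinates (ANODE);
    # with aug_dim == 0 this reduces exactly to a plain Neural ODE.
    z0 = torch.cat([x0, torch.zeros(*x0.shape[:-1], aug_dim)], dim=-1)
    zt = odeint(func, z0, t)
    # The readout simply discards the auxiliary dimensions again.
    return zt[..., : x0.shape[-1]]
```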
j2AQ-WJ_ze | Language-guided Robot Grasping: CLIP-based Referring Grasp Synthesis in Clutter
Georgios Tziafas1∗ Yucheng Xu2∗ Arushi Goel2 Mohammadreza Kasaei2 Zhibin Li3 Hamidreza Kasaei1
1University of Groningen 2University of Edinburgh 3University College London
1{g.t.tziafas, h.kasaei}@rug.nl 2{Yucheng.Xu, A.Goel-1, m.kasaei}@ed.ac.uk 3alex.li@ucl.ac.uk
∗Equal contribution.
Abstract: Robots operating in human-centric environments require the integration of visual grounding and grasping capabilities to effectively manipulate objects based on user instructions. This work focuses on the task of referring grasp synthesis, which predicts a grasp pose for an object referred to through natural language in cluttered scenes. Existing approaches often employ multi-stage pipelines that first segment the referred object and then propose a suitable grasp, and are evaluated on simple datasets or simulators that do not capture the complexity of natural indoor scenes. To address these limitations, we develop a challenging benchmark based on cluttered indoor scenes from the OCID dataset, for which we generate referring expressions and connect them with 4-DoF grasp poses. Further, we propose a novel end-to-end model (CROG) that leverages the visual grounding capabilities of CLIP to learn grasp synthesis directly from image-text pairs. Our results show that vanilla integration of CLIP with pretrained models transfers poorly to our challenging benchmark, while CROG achieves significant improvements both in terms of grounding and grasping. Extensive robot experiments in both simulation and hardware demonstrate the effectiveness of our approach in challenging interactive object grasping scenarios that include clutter.
Keywords: Language-Guided Robot Grasping, Referring Grasp Synthesis, Visual Grounding
7th Conference on Robot Learning (CoRL 2023), Atlanta, USA.

1 Introduction

Figure 1: 4-DoF referring grasp synthesis in clutter ("Pass the white box in front of the Coke").

Recent advancements in deep learning have paved the way for substantial breakthroughs in data-driven robotic grasping. Several works have proposed to synthesize grasps from purely visual inputs [1, 2, 3, 4]. In parallel, there is emerging work attempting to ground robotic perception [5, 6, 7] and action [8, 9] in natural language, aiming to enhance the ability of robots to interact with non-expert human users. In this work, we propose to bridge these two avenues via the task of referring grasp synthesis, where the robot grasps a target object of interest that is indicated verbally by a human user (see Fig. 1). We focus on investigating this task in natural indoor scenes, which include ambiguity and clutter and are more realistic.

Most existing approaches study interactive grasping scenarios via multi-stage pipelines [10, 11, 12, 13, 14], where the target object is first localized from a linguistic referring expression (i.e., visual grounding) and another module predicts a suitable grasp pose in a second step. The visual grounding models are trained either on benchmarks such as RefCOCO [15, 16, 17], which contain mostly outdoor scenes with few graspable objects, or on custom datasets that limit the applicability of the learned model to fixed lab setups. Other robotics-related datasets collect language-annotated indoor scenes [5, 6, 7] but are not directed towards grasping, or contain grasp annotations but lack language [18, 19].
Additionally, related datasets that study clutter do not explicitly consider ambiguous objects in the scene [7, 20], i.e., objects of the same category appearing multiple times, and hence annotate only the category and a few attributes. They also mostly consider only pair-wise spatial relations between objects, which is not the case in free-form language (e.g., "leftmost bowl" is more natural than "bowl left from other bowls").

Alternatively, a recent trend in language-based robot systems [21, 22] is to combine language models [23] with pretrained vision-language foundation models such as CLIP [24] for zero-shot grounding, or to train CLIP-based end-to-end grasping policies via imitation learning [25, 26]. Such approaches achieve impressive results but evaluate mostly in simulators that are fairly distant from natural, realistic scenes, making sim-to-real transfer more challenging.

To tackle the above limitations, we establish a new challenging dataset, OCID-VLG, that studies end-to-end vision-language-grasping in natural cluttered scenes. The dataset connects grasp annotations from the OCID-Grasp dataset [19] with referring expressions that include a rich attribute vocabulary, model a broad range of relations, and explicitly consider ambiguity. Further, we propose an end-to-end model (CLIP-based Referring Object Grasping, CROG) that extends CLIP's visual grounding with both pixel-level segmentation, as in [27], and grasp synthesis, via a novel multi-task objective. Experimental evaluations show that the proposed model is robust to referring expression complexity and outperforms previous baselines that rely on vanilla integration of CLIP with the multi-stage approach. Extensive robot experiments demonstrate the effectiveness of the proposed model in challenging interactive grasping scenarios, in both simulation and real-world settings.

In summary, the main contributions of this work are: a) a new challenging dataset for visual grounding and referring grasp synthesis in cluttered scenes, comprising approximately 90,000 language-mask-grasp annotations; b) an end-to-end vision-language-grasping model, CROG, which efficiently learns a grasp policy by leveraging the powerful image-language alignment of CLIP, and whose performance merits we demonstrate against multi-stage baselines that utilize pretrained models; and c) application of the proposed model in challenging interactive table-cleaning scenarios through both simulation and real-robot experiments.

2 Related Works

Referring Expression Datasets. Referring expressions are natural language descriptions that uniquely identify a target region in a paired image, often by referring to object attributes and spatial relations. Several datasets [28] have been proposed in the past, with expressions and target bounding boxes / masks annotated manually [29, 30] or via a two-player game [31]. The most popular benchmarks include Flickr30k-Entities [17] and the RefCOCO(/+,/g) suite [15, 32], containing annotations for MSCOCO [33] scenes collected via the refer-it game strategy [31]. Alternatively, automatic referring expression generation is pursued via the usage of symbolic scene graph annotations and synthetic language templates [34, 35, 36, 16]. The above benchmarks concern referring expressions for RGB images with generic content and are mostly for outdoor scenes. Recent works propose datasets with referring expressions for objects in indoor environments and RGB-D / 3D visual data [7, 5, 6], but do not consider clutter and are not connected with robot grasping.
In our work, we adopt the automatic generation method of CLEVR-Ref [36] to generate expressions for scene graphs extracted from the OCID-Grasp dataset [19]. To the best of our knowledge, OCID-VLG is the first dataset that brings together referring expressions and grasp synthesis for cluttered indoor scenes.

Visual Grounding. Visual grounding is formulated in the literature through the tasks of referring expression comprehension and referring image segmentation, depending on the type of localization required (box or mask, respectively). Methods usually employ a two-stage detect-then-rank approach [37, 38, 39, 40], first generating object proposals and then ranking them according to their correspondence with the expression. Single-stage methods [41, 42] alleviate the object proposal step by directly fusing vision-text features in a joint space. Recently, the Transformer architecture has been employed for both task variants separately [3, 43, 44] or jointly [45, 46], showcasing strong cross-modal alignment capabilities compared to previous CNN-LSTM fusion techniques. The grounding task has also been adapted for 3D data [5], with similar two-stage methods fusing text features with point clouds [47, 48] or RGB-D views [49, 6]. Finally, transferring from large-scale vision-language pretraining [50, 51, 52, 24] is common practice, either zero-shot [21, 22] or as an initialization for finetuning [25]. Similarly, in our work, we finetune the CLIP vision-language model [24] to further learn 4-DoF grasp synthesis in RGB.

Grasp Synthesis. Grasp synthesis enables robots to determine the optimal way to grasp objects by considering visual information. Current grasp synthesis methods can be roughly categorized into 4-DoF and 6-DoF [53], according to the degrees of freedom (DoF) of the grasp configurations. 4-DoF grasp synthesis [54, 19, 55] defines grasps by a 3D position and a top-down hand orientation (yaw), commonly referred to as a "top-down grasp". 6-DoF grasp synthesis [18, 56, 57] defines grasp poses by 6D positions and orientations. Early works [1, 2] formulate 4-DoF grasp detection via decoding a set of grasp masks from the input RGB-D images and use camera calibration to transform the planar grasp into a gripper pose. Det-Seg [19] proposed a two-branch framework that generates semantic segmentation masks and uses them to refine the predicted grasps. SSG [58] introduced an instance-wise 4-DoF grasp synthesis framework and showed its effectiveness and robustness in cluttered scenarios. In this work, we build on the idea of using the segmentation mask as an extra signal for learning grasp synthesis, by making the masks object-specific via grounding them from referring expressions.

3 OCID-VLG Dataset

Visual grounding and grasp synthesis are mostly studied separately; associated grounding datasets rarely involve cluttered indoor scenes [15, 17, 16] and lack grasp annotations [5, 6, 7], while grasp synthesis datasets lack language grounding [55, 54, 18, 19]. Our proposed dataset, OCID-VLG (Vision-Language-Grasping), aims to cover this gap by providing a joint dataset for both grounding and grasp synthesis in scenes from the OCID dataset [59]. The dataset consists of 1,763 indoor tabletop RGB-D scenes with high clutter, including 31 object categories from a total of 58 unique instances. The OCID object catalog includes several object instances of the same category that vary in fine-grained details, making it a desirable domain for integration with language.
We manually annotate the catalog with a rich variety of object-related concepts, as well as pair-wise and absolute spatial relations. For each scene, we provide 2D segmentation masks and bounding boxes (at both category and instance level), as well as a complete parse of the scene, providing all 2D/3D locations and the category, attribute, and relation information for each object in a symbolic scene graph. We leverage the 75k previously hand-annotated 4-DoF grasp rectangles of OCID-Grasp [19] and connect each object in our scene graph with a set of grasp annotations. The grasp-annotated scene graphs are used to generate referring expressions with a custom variant of the CLEVR data engine [35]. We end up with 89,639 unique scene-text-mask/box-grasp tuples, aimed at supervising both grounding and grasp synthesis tasks in an end-to-end fashion.

3.1 Referring Expression Generation

We first annotate a catalog of attribute and relation concepts that are used to refer to ambiguous objects in OCID-VLG scenes. For attributes, each object is annotated with its color, as well as with an instance-level description that refers to some property of the object (e.g., brand, flavor, variety, maturity, function, texture, or shape). We note that not all objects are annotated for all mentioned concepts, but only for those that discriminate them from other objects of the same category. For spatial relations, we include both relative (e.g., "bowl left from mug") and absolute location (e.g., "leftmost bowl") concepts. We adapt the relation resolution heuristics of [44] and use the relation set {"right", "rear right", "behind", "rear left", "left", "front left", "front", "front right", "on"}, augmented with the absolute location set {"leftmost", "rightmost", "furthest", "closest"}.

Table 1: Comparison of main features and statistics between existing 2D/3D visual grounding and grasp synthesis datasets and our OCID-VLG.

Dataset | Clutter | Vision Data | Ref.Expr. Annot. | Grasp Annot. | Num. Obj. Categories | Num. Scenes | Num. Expr. | Parses
RefCOCO [15] | no | RGB | box, mask | no | 80 | 19.9k | 142.2k | no
Flickr30k-Entities [17] | no | RGB | box | no | 44.5k | 31.7k | 158.9k | no
CLEVR-Ref+ [36] | no | RGB | box, mask | no | - | 60k | 600k | yes
Cops-ref [16] | no | RGB | box, mask | no | 508 | 703 | 148.7k | yes
ScanRefer [5] | no | 3D | box | no | 250 | 800 | 51.5k | no
ReferIt-RGBD [6] | no | RGB-D | box | no | - | 7.6k | 38.4k | no
Sun-Spot [7] | yes | RGB-D | box | no | 38 | 1.9k | 7.9k | no
OCID-Ref [20] | yes | RGB, 3D | box | no | 58 | 2.3k | 305.6k | no
Cornell [55] | no | RGB-D | no | 4-DoF | 240 | 1k | - | no
Jacquard [54] | no | RGB-D | no | 4-DoF | - | 54k | - | no
GraspNet [18] | yes | 3D | no | 6-DoF | 88 | 190 | - | no
OCID-Grasp [19] | yes | RGB-D | no | 4-DoF | 31 | 1.7k | - | no
OCID-VLG (ours) | yes | RGB-D, 3D | box, mask | 4-DoF | 31 | 1.7k | 89.6k | yes

The complete list of the concept vocabulary and related statistics is provided in Appendix A. After parsing scenes into scene graphs, we sample object and relation concepts to generate referring expressions using the CLEVR data engine [35] and custom templates that follow the structure:

[prefix] ([LOC1] [ATT1]) [OBJ1] ((that is) [REL] the ([LOC2] [ATT2]) [OBJ2])

where [OBJ], [ATT], [REL], and [LOC] denote an object concept (category or instance-level), an attribute (color), a pair-wise relation, and an absolute location, respectively. The [prefix] is sampled from a set of general robot directives, e.g., "Pick the". We construct template variations for 5 families (a minimal instantiation sketch is given below), namely: a) name (e.g., "chocolate corn flakes"), b) attribute (e.g., "brown cereal box package"), c) relation (e.g., "corn flakes behind the bowl"), d) location (e.g., "closest cereal box"), and e) mixed (e.g., "cereal box to the rear left of the right apple"), for a total of 56 distinct sub-templates.
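To make the template structure concrete, the following sketch instantiates a relation-family expression from a scene graph. The tiny grammar, the prefix set, and all names here are our illustrative assumptions, not the dataset's actual CLEVR-Ref-based generation engine.

```python
# Illustrative instantiation of the [prefix] [OBJ1] that is [REL] the [ATT2] [OBJ2] template.
import random

PREFIXES = ["Pick the", "Grab the", "Pass the"]
REL_PHRASES = {"left": "left of", "front": "in front of", "on": "on top of"}

def relation_expression(target, landmark, rel):
    return (f"{random.choice(PREFIXES)} {target['category']} "
            f"that is {REL_PHRASES[rel]} the {landmark['color']} {landmark['category']}")

scene = {"target": {"category": "cereal box"},
         "landmark": {"category": "bowl", "color": "blue"}}
print(relation_expression(scene["target"], scene["landmark"], "left"))
# e.g. "Pick the cereal box that is left of the blue bowl"
```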
Note that templates (b)-(e) are constrained to only sample target objects that are ambiguous in the scene, hence attribute or relation information is needed to uniquely ground them.

3.2 Comparisons with Existing Datasets

OCID-VLG differs from existing datasets in several aspects and statistics, summarized in Table 1. Popular visual grounding datasets [15, 32, 17, 16] usually include few indoor scenes with cluttered content and provide only RGB data, limiting their applicability in the robotics domain. Robotics-related grounding datasets usually contain referring expressions for objects in room layouts [5, 6] (e.g., furniture), which are not directed towards grasping and do not consider clutter. Sun-Spot [7] contains tabletop cluttered scenes but does not annotate segmentations and grasps and lacks absolute location annotations. Similarly, OCID-Ref [20] only provides boxes, without segmentation and grasp annotations. We highlight that even though OCID-Ref could be used as a source of referring expressions for OCID-VLG, we chose to develop our own, as OCID-Ref expressions lack a rich object and relation vocabulary, lack absolute relations, and inherit corrupted labels from the OCID dataset. Grasp synthesis datasets are either object-level and do not consider clutter [55, 54], or include cluttered scenes but do not annotate referring expressions [18, 19]. Additionally, most of the mentioned datasets do not explicitly consider ambiguities in the scene. In OCID-VLG, we use attributes and relations to refer to objects only in cases of ambiguity, aiming to prevent overfitting to superficial object-relation correspondences in the training data. Finally, as we use the CLEVR [35] engine to generate expressions, our dataset is further equipped with symbolic parses of both the visual and language modalities, which could potentially be utilized for training models with additional supervision.

4 Method

This section discusses the proposed task, the implemented baselines, and the details of our end-to-end model, CROG.

Problem Formulation. Referring grasp synthesis considers the problems of referring image segmentation and grasp synthesis in tandem. Given an RGB image I ∈ R^{H×W×3}, a depth image D ∈ R^{H×W}, and a natural language expression T that refers to a unique object in the scene, the goal is to predict both a pixel-wise segmentation mask of the referred object, M ∈ {0, 1}^{H×W}, and a grasp configuration G = (x*, y*, θ*, l*), where (W, H) is the image resolution, (x*, y*) is the center of the optimal grasp in pixel coordinates, θ* is the gripper's rotation in the camera reference frame, and l* is the gripper width in pixel coordinates. As in [1, 2, 19, 58], G is recovered from three masks: a scalar grasp quality mask Q ∈ {0, 1}^{H×W}, such that (x*, y*) = argmax_{(x,y)} Q(x, y); an angle mask Θ ∈ [−π/2, π/2]^{H×W}; and a width mask L ∈ {0, 1}^{H×W}, such that θ* = Θ(x*, y*) and l* = L(x*, y*).
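The decoding step in this formulation is simple enough to show directly. A minimal sketch, framework-agnostic apart from PyTorch tensors; the function name is ours.

```python
# Recover a 4-DoF grasp G = (x*, y*, θ*, l*) from the three predicted masks.
import torch

def decode_grasp(Q, Theta, L):
    # Q, Theta, L: (H, W) tensors of grasp quality, angle, and gripper width.
    idx = torch.argmax(Q)                    # pixel with the highest grasp quality
    y, x = divmod(idx.item(), Q.shape[1])    # unravel flat index to (row, col)
    theta = Theta[y, x].item()               # gripper rotation at that pixel
    width = L[y, x].item()                   # gripper width in pixel coordinates
    return x, y, theta, width
```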
4.1 Multi-Stage Baselines

We design multi-stage baselines that integrate existing large-scale vision-language models with pretrained grasp synthesis models. To that end, we decompose the overall task into two stages, namely: a) a grounding function f(I, T) = M that segments the referred object from the RGB image, and b) a grasp synthesis function g(I, D, M) = G that isolates the object in the RGB-D inputs I, D using the segmentation mask M to produce a grasp G.

Two-stage grounding with CLIP. The grounding function can be further decomposed into two steps, first using an off-the-shelf detector [60] for object proposal generation, f_segm(I) = {M_n}, n = 1, ..., N, and then ranking the N segmented object proposals according to their similarity with the language description, f_rank(M_n, T) = argmax_n S(M_n, T), where S(·) denotes a similarity metric between a segmented RGB object image M_n and the language input T. In practice, we implement S via CLIP's [24] pretraining objective, i.e., computing the cosine similarity between visual features obtained by passing M_n to CLIP's visual encoder and a sentence-wide embedding of T from CLIP's text encoder (a sketch of this ranking step follows below).

Mask-conditioned grasp synthesis. The grasp synthesis function is implemented via a pretrained network [1, 2] that receives the input pair I, D and generates a grasp G by decoding the masks Q, Θ, L. To isolate the desired object, given in mask M, we experimentally find that the best practice is to element-wise multiply the mask with the RGB-D inputs before passing them to the network.

4.1.1 Zero-shot baselines

First, we test existing powerful pretrained models to assess their zero-shot performance in our challenging setup, including two multi-stage variants that use GR-ConvNet [1] pretrained on the Jacquard [54] dataset as the grasp synthesis network but differ in grounding as follows:

SAM+CLIP. In this setup, we use the Segment Anything Model (SAM) [60] for instance segmentation and CLIP [24] for ranking, as explained above. Similar to [61], we find that passing both a cropped box and the mask of the object to CLIP's feature extractor and ensembling the final similarities boosts performance.

GLIP+SAM. For this variant, we use a large pretrained visual grounding model, GLIP [62], to predict a bounding box around the relevant object of interest given the natural language command, and prompt the SAM [60] model to get a tight segmentation mask for the object of interest.

4.1.2 Supervised baselines with vanilla CLIP integration

Second, we implement a vanilla integration of CLIP into grasp synthesis models pretrained on OCID-Grasp [19], which aims to explore whether a model using segmentation and grasping trained on OCID scenes can be extended to our setup without any language conditioning. As in the SAM+CLIP setup, CLIP is used as a ranker and the supervised model as both a segmenter and grasper. We experiment with the two state-of-the-art models on OCID-Grasp, Det-Seg [19] and SSG [58], both of which provide segmentation and grasp predictions for an input RGB-D scene.
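As a concrete illustration of the segment-then-rank step, the following sketch uses the official CLIP package; the cropping/masking details are simplified and the function name is ours.

```python
# Rank masked object proposals against a referring expression with CLIP.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

def rank_proposals(crops, text):
    # crops: list of PIL images of masked object proposals; text: referring expression.
    images = torch.stack([preprocess(c) for c in crops]).to(device)
    tokens = clip.tokenize([text]).to(device)
    with torch.no_grad():
        img_feat = model.encode_image(images)
        txt_feat = model.encode_text(tokens)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = (img_feat @ txt_feat.T).squeeze(-1)   # cosine similarities S(M_n, T)
    return int(sims.argmax())                    # index of the best-matching proposal
```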
4.2 CROG

We propose a model for estimating both the segmentation mask M and the grasp masks Q, Θ, L in an end-to-end fashion (see the overview in Fig. 2). Our architecture is an extension of CRIS [27], a model originally proposed for adapting CLIP to pixel-level segmentation. CRIS achieves this with four main components: a) unimodal encoders for image and text, b) a multi-modal FPN neck, c) a cross-attention vision-language decoder, and d) projectors for a text-to-pixel contrastive loss.

Figure 2: An overview of CROG. Image and text encoders feed a multi-modal FPN and a cross-attention decoder; image-text projectors are trained with a text-to-pixel contrastive loss on the target mask, while grasp projectors decode quality, angle, and width masks under a smooth L1 loss.

The unimodal encoders are initialized from CLIP's visual and text encoders [24], but utilize visual feature maps from the 2nd-4th stages of CLIP's ResNet-50 visual encoder, F_v2 ∈ R^{H/8 × W/8 × C2}, F_v3 ∈ R^{H/16 × W/16 × C3}, F_v4 ∈ R^{H/32 × W/32 × C4}, and consider both the sentence-level embedding F_s ∈ R^{C'} and the token embeddings F_t ∈ R^{K×C}, where C and C' are the feature dimensions and K is the language sentence length. The visual feature maps F_v2, F_v3, F_v4 and the sentence embedding F_s are fused in feature-pyramid style [63] via the multi-modal neck, in order to generate pixel-wise multi-modal representations F_m ∈ R^{N×C} of the image-text pair, where N = H/16 · W/16.

The vision-language decoder uses a standard Transformer decoder [64] to let the multi-modal features F_m cross-attend to all token embeddings F_t and produce embeddings F_c ∈ R^{N×C}. This process adaptively propagates semantic information from text to visual features. Finally, F_c and F_s are projected into the same space, where contrastive alignment is performed by computing a binary cross-entropy loss over the dot product of the projected embeddings, pushing them together in the regions of the ground-truth segmentation mask.

To obtain a mask prediction, the projected features F̂_c, F̂_s are dot-producted with a sigmoid activation, reshaped to N = H/4 · W/4, and upsampled to the original image size. To adapt the model for the grasp synthesis task, we propose to further add three projectors for generating the grasp masks Q, Θ, L and supervise them with a smooth L1 loss from the ground-truth grasps, in parallel to the contrastive alignment loss of CRIS.
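The multi-task objective described above can be sketched as follows. This is an illustration of the loss structure only: the equal weighting of the two terms and all tensor names are our assumptions, not the paper's exact code.

```python
# Sketch of CROG's objective: text-to-pixel contrastive loss + smooth L1 grasp losses.
import torch
import torch.nn.functional as F

def crog_loss(Fc, Fs, gt_mask, pred_Q, pred_Theta, pred_L, gt_Q, gt_Theta, gt_L):
    # Fc: (N, C) projected pixel features; Fs: (C,) projected sentence feature.
    # gt_mask: binary ground-truth segmentation, flattened to N pixels.
    logits = Fc @ Fs                                  # (N,) text-to-pixel scores
    contrastive = F.binary_cross_entropy_with_logits(logits, gt_mask.flatten())
    grasp = (F.smooth_l1_loss(pred_Q, gt_Q)
             + F.smooth_l1_loss(pred_Theta, gt_Theta)
             + F.smooth_l1_loss(pred_L, gt_L))
    return contrastive + grasp                        # equal weighting assumed
```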
5 Experimental Results

This section evaluates our dataset using the multi-stage baselines and compares them to our CROG model. We also conduct ablation studies to analyze the performance improvements and present the results of our robot experiments.

Implementation. We initialize the vision and text encoders with the ResNet-50 and BERT weights from CLIP [24]. Input images are resized to 416×416, and texts are BPE-tokenized [64, 65]. The maximum length of input texts is set to 20. We train on multiple GPUs using the Adam optimizer with an initial learning rate of 1e−4, decayed to 0.1 of its value over 35 epochs.

Evaluation metrics. For grounding, we report the referring image segmentation (RIS) [27, 66] metrics IoU and Precision@X. IoU calculates the averaged intersection over union between the predicted and ground-truth segmentation masks, while Precision@X measures the percentage of predictions with IoU higher than a threshold X ∈ {0.5, 0.6, 0.7, 0.8, 0.9}. For referring grasp synthesis (RGS), the Jacquard index J@N [54, 19, 58] is presented, measuring the percentage of top-N grasp predictions whose angle difference is within 30° of, and whose IoU is higher than 0.25 with, the ground-truth grasp rectangle.

5.1 OCID-VLG Results

The grounding and grasp synthesis results are reported on the test split of OCID-VLG, containing 17.7k samples from held-out scenes of OCID. The test set contains seen objects but in novel scene configurations, resulting in unseen referring expressions. Results for the zero-shot and supervised baselines and our CROG are given in Table 2. Baselines based on GR-ConvNet [1] pretrained on Jacquard [54] transfer poorly to OCID-VLG, even with ground-truth grounding (28.7% J@1). We find that the GR-ConvNet-based grasper tends to prefer edges, due to the top-down perspective of Jacquard images, which is not the case in OCID-VLG.

Table 2: Comparison results on the OCID-VLG test split. Baselines with † use GR-ConvNet [1] pretrained on Jacquard [54]. GT denotes the use of ground-truth data to provide an upper bound of performance given perfect segmentation masks or grounding.

Method | RIS: IoU | Pr@50 | Pr@60 | Pr@70 | Pr@80 | Pr@90 | RGS: J@1 | J@Any
GT-Grounding† | - | - | - | - | - | - | 28.7 | 70.2
GT-Masks + CLIP† | 35.0 | 35.0 | 35.0 | 35.0 | 35.0 | 35.0 | 11.9 | 26.8
SAM + CLIP† | 25.7 | 29.3 | 28.5 | 27.4 | 22.7 | 9.1 | 7.2 | 12.7
GLIP + SAM† | 30.3 | 34.7 | 34.1 | 33.5 | 28.6 | 11.7 | 10.7 | 21.8
Det-Seg + CLIP | 29.0 | 27.2 | 20.9 | 17.5 | 17.2 | 16.0 | 28.1 | 39.2
SSG + CLIP | 33.6 | 35.6 | 35.6 | 35.5 | 35.5 | 32.8 | 33.5 | 34.7
CROG (ours) | 81.1 | 96.9 | 94.8 | 87.2 | 64.1 | 16.4 | 77.2 | 87.7

Zero-shot baselines score below 30% in both tasks: with the CLIP-based ranking methodology, false positives in grounding lead to incorrect grasping, regardless of whether the predicted grasp is correct for the mis-segmented object. Replacing large zero-shot models with supervised methods trained on OCID-Grasp (Det-Seg, SSG) offers a marginal improvement in grounding (+3.6% in IoU) but a significant one in grasping (+23.2% in J@1), while remaining low overall (39.2% J@Any). This indicates that even in the presence of an OCID-specific grasper, the ranking methodology of vanilla CLIP integration significantly limits grounding performance.

The proposed CROG overcomes such limitations by fine-tuning grounding and grasp synthesis together on top of CLIP, and surpasses previous methods by a large margin (+47.5% in IoU and +43.7% in J@1), offering a much more competitive baseline for the proposed OCID-VLG dataset.

5.2 Ablation Studies

Ablation studies are conducted to explore: a) the distribution of error according to referring expression type, and b) the performance improvements contributed by each main CROG component.

Referring Expression Type. We first decompose the performance according to the type of the input expression and compare the analytical results of our model with the best-performing baseline, SSG+CLIP (see Fig. 3). We observe that the CLIP baseline struggles with grounding spatial concepts such as relations and locations (less than 30% J@1), due to the loss of spatial information introduced by the segment-then-rank pipeline. On the contrary, CROG is trained with dense pixel-text token alignment via the cross-attention decoder, and is capable of spatial grounding and robust across all expression types.

Figure 3: Grasp synthesis ablations according to the type of input referring expression.

Effect of CROG components. We ablate the three main characteristics of CROG: a) initializing from CLIP, b) combining contrastive and grasp-mask decoding tasks in a single objective, and c) dense text-pixel alignment with a decoder. Results are summarized in Table 3. All components contribute to CROG's performance, with CLIP initialization being the most vital. Crucially, removing the contrastive or the grasp loss results in a decrease in CROG's grasping or grounding capability, respectively. This highlights the knowledge transfer between the two tasks and justifies our selection of a multi-task training objective.

Table 3: CROG ablation study.
Method | RIS: IoU | RGS: J@1
CROG | 81.1 | 77.2
- w/o CLIP init | 73.9 | 71.0
- w/o contrastive | - | 73.4
- w/o grasp loss | 79.3 | -
- w/o decoder | 78.2 | 72.3
5.3 Robot Experiments

We conducted experiments with both a simulated and a real robot to evaluate the performance of our model in the context of an interactive table-cleaning task. Our setup consists of a dual-arm robot with two UR5e manipulators with parallel-jaw grippers and a Kinect sensor. During each experiment, we randomly place 5-12 objects on a tabletop and provide a language instruction to the robot to pick a target object and place it in a predefined container position. We place objects in two scenarios, namely: a) isolated, where objects are scattered across the workspace, and b) cluttered, where objects are closely packed together. We note that each scene includes distractor objects of the same category as the queried object. Our setup and example trials are shown in Fig. 4, and more qualitative results are provided in Appendix B.

Table 4: Results of robot experiments in Gazebo, where Ground.Acc. denotes the number of trials where the target object is segmented correctly and Succ.Rate the number of successfully completed trials.

Setup | Fruit | Food Box | Food Can | Mug | Marker | Cereal | Flashlight | Overall
Isolated (#Trials) | 10 | 10 | 8 | 4 | 6 | 10 | 2 | 50
Isolated - Ground.Acc. | 10 (100%) | 8 (80%) | 5 (63%) | 2 (50%) | 5 (83%) | 6 (60%) | 2 (100%) | 38 (76%)
Isolated - Succ.Rate | 10 (100%) | 5 (50%) | 4 (50%) | 1 (25%) | 5 (83%) | 4 (40%) | 2 (100%) | 31 (62%)
Cluttered (#Trials) | 10 | 10 | 8 | 4 | 6 | 10 | 2 | 50
Cluttered - Ground.Acc. | 8 (80%) | 5 (50%) | 3 (38%) | 1 (25%) | 5 (83%) | 6 (60%) | 2 (100%) | 30 (60%)
Cluttered - Succ.Rate | 5 (50%) | 4 (40%) | 3 (38%) | 1 (25%) | 3 (50%) | 5 (50%) | 0 (0%) | 21 (42%)

Figure 4: Interactive table-cleaning trials in Gazebo (top) and on the real robot (bottom), in isolated (left column) and cluttered (right column) scenes, for commands such as "Pick the closest sponge", "Get the green marker", "Grasp the food can in front of the black flashlight", and "Pass the leftmost marker".

We conducted 50 trials per scenario in the Gazebo simulator [67], using object models from 7 categories of OCID-VLG, some as exact instances and others with different attributes. The object list, metrics, and recorded results are shown in Table 4. The robot achieves a grounding accuracy of 76% (38/50) and a success rate of 62% (31/50) in isolated scenes, and 60% (30/50) and 42% (21/50) respectively in cluttered scenes, with grounding failures mostly for objects that are not similar in appearance to OCID-VLG categories. For real-robot experiments, we initialize six unique scenes, three for the isolated and three for the cluttered scenario, and provide grasping instructions for a total of 34 trials. We highlight that this test is more challenging, as the object set used for the experiments has no overlap with OCID-VLG instances. In isolated scenes, the grounding accuracy is 65% and the success rate 23.9%, while in cluttered scenes they are 60% and 20.0%, respectively. In both experiments, the model is able to ground attribute concepts for unseen instances (e.g., "white and blue box") and disambiguate objects based on spatial relations. Grounding failures are usually due to highly occluded objects in the scene, especially when multiple distractor objects are present. Several failure cases in cluttered scenes are due to collisions during motion execution; nevertheless, the 4-DoF grasp is correctly predicted.
Detailed experiments are shown in the supplementary video.

6 Conclusion, Limitations and Future Work

This paper presents OCID-VLG, a new dataset for language-guided 4-DoF grasp synthesis in clutter, offering the first benchmark that connects language instructions with grasping in an end-to-end fashion. Further, we propose CROG, a CLIP-based end-to-end model, as a solution. Extensive experimental comparisons and ablation studies validate the effectiveness of CROG over previous methods and set a competitive baseline for our dataset. Overall, this research offers valuable insights into language-guided grasp synthesis and lays the foundation for future advancements in this field. However, we found that CROG is limited when grounding concepts that lie outside the training distribution. We attribute this to the pretraining-finetuning strategy, which trades off the zero-shot capacity of CLIP pretraining in favor of the dense finetuning tasks. Future work will explore methods to efficiently learn the dense decoding tasks while maintaining better zero-shot grounding capability of CLIP. Finally, since CROG only considers RGB information, we would like to investigate whether further fusing depth data alongside RGB aids grasp synthesis.

Acknowledgements: This work is supported by the EU H2020 project "Enhancing Healthcare with Assistive Robotic Mobile Manipulation (HARMONY, 101017008)", as well as by Google DeepMind through the Research Scholar Program for the "Continual Robot Learning in Human-Centered Environments" project.

References
[1] S. Kumra, S. Joshi, and F. Sahin. Antipodal robotic grasping using generative residual convolutional neural network. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9626–9633, 2019.
[2] D. Morrison, P. Corke, and J. Leitner. Closing the loop for robotic grasping: A real-time, generative grasp synthesis approach. ArXiv, abs/1804.05172, 2018.
[3] R. Chen, N. Gao, N. A. Vien, H. Ziesche, and G. Neumann. Meta-learning regrasping strategies for physical-agnostic objects. ArXiv, abs/2205.11110, 2022.
[4] H. Kasaei and M. Kasaei. MVGrasp: Real-time multi-view 3D object grasping in highly cluttered environments. Robotics and Autonomous Systems, 160:104313, 2023.
[5] D. Z. Chen, A. X. Chang, and M. Nießner. ScanRefer: 3D object localization in RGB-D scans using natural language. ArXiv, abs/1912.08830, 2019.
[6] H. Liu, A. Lin, X. Han, L. Yang, Y. Yu, and S. Cui. Refer-it-in-RGBD: A bottom-up approach for 3D visual grounding in RGBD images. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6028–6037, 2021.
[7] C. Mauceri, M. Palmer, and C. Heckman. Sun-Spot: An RGB-D dataset with spatial referring expressions. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 1883–1886, 2019.
[8] J. Luketina, N. Nardelli, G. Farquhar, J. N. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, and T. Rocktäschel. A survey of reinforcement learning informed by natural language. In International Joint Conference on Artificial Intelligence, 2019.
[9] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. B. Amor. Language-conditioned imitation learning for robot manipulation tasks. ArXiv, abs/2010.12083, 2020.
[10] M. Shridhar and D. Hsu. Interactive visual grounding of referring expressions for human-robot interaction. ArXiv, abs/1806.03831, 2018.
[11] D. Misra, J. Sung, K. Lee, and A. Saxena. Tell me Dave: Context-sensitive grounding of natural language to manipulation instructions.
The International Journal of Robotics Research, 35:281–300, 2014.
[12] J. Hatori, Y. Kikuchi, S. Kobayashi, K. Takahashi, Y. Tsuboi, Y. Unno, W. K. H. Ko, and J. Tan. Interactively picking real-world objects with unconstrained spoken language instructions. 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3774–3781, 2017.
[13] Y. Chen, R. Xu, Y. Lin, and P. A. Vela. A joint network for grasp detection conditioned on natural language commands. 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 4576–4582, 2021.
[14] V. Blukis, R. A. Knepper, and Y. Artzi. Few-shot object grounding and mapping for natural language robot instruction following. In Conference on Robot Learning, 2020.
[15] L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg. Modeling context in referring expressions. ArXiv, abs/1608.00272, 2016.
[16] Z. Chen, P. Wang, L. Ma, K.-Y. K. Wong, and Q. Wu. Cops-Ref: A new dataset and task on compositional referring expression comprehension. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10083–10092, 2020.
[17] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k Entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. International Journal of Computer Vision, 123:74–93, 2015.
[18] H. Fang, C. Wang, M. Gou, and C. Lu. GraspNet-1Billion: A large-scale benchmark for general object grasping. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11441–11450, 2020.
[19] S. Ainetter and F. Fraundorfer. End-to-end trainable deep neural network for robotic grasp detection and semantic segmentation from RGB. 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13452–13458, 2021.
[20] K.-J. Wang, Y.-H. Liu, H.-T. Su, J.-W. Wang, Y.-S. Wang, W. H. Hsu, and W.-C. Chen. OCID-Ref: A 3D robotic dataset with embodied language for clutter scene grounding. In North American Chapter of the Association for Computational Linguistics, 2021.
[21] A. Zeng, A. S. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. S. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, and P. R. Florence. Socratic Models: Composing zero-shot multimodal reasoning with language. ArXiv, abs/2204.00598, 2022.
[22] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. R. Florence, and A. Zeng. Code as Policies: Language model programs for embodied control. ArXiv, abs/2209.07753, 2022.
[23] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. J. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020.
[24] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021.
[25] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. ArXiv, abs/2109.12098, 2021.
[26] C. Tang, D. Huang, L. Meng, W. Liu, and H. Zhang. Task-oriented grasp prediction with visual-language inputs. ArXiv, abs/2302.14355, 2023.
[27] Z. Wang, Y. Lu, Q. Li, X. Tao, Y. Guo, M. Gong, and T.
Liu. CRIS: CLIP-driven referring image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11686–11695, 2022.
[28] Y. Qiao, C. Deng, and Q. Wu. Referring expression comprehension: A survey of methods and datasets. IEEE Transactions on Multimedia, 23:4426–4440, 2020.
[29] H. de Vries, F. Strub, A. P. S. Chandar, O. Pietquin, H. Larochelle, and A. C. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4466–4475, 2016.
[30] J. Mao, J. Huang, A. Toshev, O.-M. Camburu, A. L. Yuille, and K. P. Murphy. Generation and comprehension of unambiguous object descriptions. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11–20, 2015.
[31] S. Kazemzadeh, V. Ordonez, M. Matten, and T. L. Berg. ReferItGame: Referring to objects in photographs of natural scenes. In Conference on Empirical Methods in Natural Language Processing, 2014.
[32] J. Mao, J. Huang, A. Toshev, O.-M. Camburu, A. L. Yuille, and K. P. Murphy. Generation and comprehension of unambiguous object descriptions. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11–20, 2015.
[33] T.-Y. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, 2014.
[34] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:664–676, 2014.
[35] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988–1997, 2016.
[36] R. Liu, C. Liu, Y. Bai, and A. L. Yuille. CLEVR-Ref+: Diagnosing visual reasoning with referring expressions. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4180–4189, 2019.
[37] J. Mao, J. Huang, A. Toshev, O. Camburu, A. L. Yuille, and K. Murphy. Generation and comprehension of unambiguous object descriptions. CoRR, abs/1511.02283, 2015. URL http://arxiv.org/abs/1511.02283.
[38] L. Yu, P. Poirson, S. Yang, A. C. Berg, and T. L. Berg. Modeling context in referring expressions. CoRR, abs/1608.00272, 2016. URL http://arxiv.org/abs/1608.00272.
[39] A. Rohrbach, M. Rohrbach, R. Hu, T. Darrell, and B. Schiele. Grounding of textual phrases in images by reconstruction. CoRR, abs/1511.03745, 2015. URL http://arxiv.org/abs/1511.03745.
[40] R. Luo and G. Shakhnarovich. Comprehension-guided referring expressions. CoRR, abs/1701.03439, 2017. URL http://arxiv.org/abs/1701.03439.
[41] Z. Yang, B. Gong, L. Wang, W. Huang, D. Yu, and J. Luo. A fast and accurate one-stage approach to visual grounding. CoRR, abs/1908.06354, 2019. URL http://arxiv.org/abs/1908.06354.
[42] A. Sadhu, K. Chen, and R. Nevatia. Zero-shot grounding of objects from natural language queries. In Proceedings of the IEEE International Conference on Computer Vision, pages 4694–4703, 2019.
[43] G. Feng, Z. Hu, L. Zhang, and H. Lu. Encoder fusion network with co-attention embedding for referring image segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15501–15510, 2021.
[44] S. Yang, G. Li, and Y. Yu. Cross-modal relationship inference for grounding referring expressions.
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4140–4149, 2019.
[45] M. Li and L. Sigal. Referring Transformer: A one-step approach to multi-task visual grounding. ArXiv, abs/2106.03089, 2021.
[46] G. Luo, Y. Zhou, X. Sun, L. Cao, C. Wu, C. Deng, and R. Ji. Multi-task collaborative network for joint referring expression comprehension and segmentation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10031–10040, 2020.
[47] P. Achlioptas, A. Abdelreheem, F. Xia, M. Elhoseiny, and L. J. Guibas. ReferIt3D: Neural listeners for fine-grained 3D object identification in real-world scenes. In European Conference on Computer Vision, 2020.
[48] L. Zhao, D. Cai, L. Sheng, and D. Xu. 3DVG-Transformer: Relation modeling for visual grounding on point clouds. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 2908–2917, 2021.
[49] S. Huang, Y. Chen, J. Jia, and L. Wang. Multi-view transformer for 3D visual grounding. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15503–15512, 2022.
[50] A. Kamath, M. Singh, Y. LeCun, I. Misra, G. Synnaeve, and N. Carion. MDETR: Modulated detection for end-to-end multi-modal understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1760–1770, 2021.
[51] L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y. Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang, K.-W. Chang, and J. Gao. Grounded language-image pre-training. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10955–10965, 2021.
[52] J. Li, R. R. Selvaraju, A. D. Gotmare, S. R. Joty, C. Xiong, and S. C. H. Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In Neural Information Processing Systems, 2021.
[53] R. Newbury, M. Gu, L. Chumbley, A. Mousavian, C. Eppner, J. Leitner, J. Bohg, A. Morales, T. Asfour, D. Kragic, et al. Deep learning approaches to grasp synthesis: A review. arXiv preprint arXiv:2207.02556, 2022.
[54] A. Depierre, E. Dellandréa, and L. Chen. Jacquard: A large scale dataset for robotic grasp detection. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3511–3516, 2018.
[55] Y. Jiang, S. Moseson, and A. Saxena. Efficient grasping from RGBD images: Learning using a new rectangle representation. 2011 IEEE International Conference on Robotics and Automation, pages 3304–3311, 2011.
[56] C. Eppner, A. Mousavian, and D. Fox. ACRONYM: A large-scale grasp dataset based on simulation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 6222–6227. IEEE, 2021.
[57] J. Varley, C. DeChant, A. Richardson, J. Ruales, and P. Allen. Shape completion enabled robotic grasping. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2442–2447. IEEE, 2017.
[58] Y. Xu, M. M. Kasaei, S. H. M. Kasaei, and Z. Li. Instance-wise grasp synthesis for robotic grasping. ArXiv, abs/2302.07824, 2023.
[59] M. Suchi, T. Patten, and M. Vincze. EasyLabel: A semi-automatic pixel-wise object annotation tool for creating robotic RGB-D datasets. 2019 International Conference on Robotics and Automation (ICRA), pages 6678–6684, 2019.
[60] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, P. Dollár, and R. Girshick. Segment Anything. arXiv:2304.02643, 2023.
[61] S. Subramanian, W. Merrill, T. Darrell, M. Gardner, S. Singh, and A. Rohrbach. ReCLIP: A
Reclip: Astrong zero-shot baseline for referring expression comprehension. In Annual Meeting of theAssociation for Computational Linguistics , 2022.12[62] L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y . Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVFConference on Computer Vision and Pattern Recognition , pages 10965–10975, 2022.[63] T.-Y . Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramidnetworks for object detection. In Proceedings of the IEEE Conference on Computer Visionand Pattern Recognition (CVPR) , July 2017.[64] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polo-sukhin. Attention is all you need. Advances in neural information processing systems , 30,2017.[65] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, et al. Language models areunsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[66] H. Ding, C. Liu, S. Wang, and X. Jiang. Vision-language transformer and query generationfor referring segmentation. 2021 IEEE/CVF International Conference on Computer Vision(ICCV) , pages 16301–16310, 2021.[67] N. P. Koenig and A. Howard. Design and use paradigms for gazebo, an open-source multi-robot simulator. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems(IROS) (IEEE Cat. No.04CH37566) , 3:2149–2154 vol.3, 2004.13A OCID-VLG VocabularyWe visualize a word cloud of the concept vocabulary of OCID-VLG in Fig. 5, while the full attributeconcept catalog is given if Fig. 6. Besides common sub-phrases such ”box”, ”food”, ”product” ,the wordcloud demonstrates that the most frequent concepts used to disambiguate objects are spa-tial predicates, both as pair-wise relations ( ”front”, ”right”, etc. ) and as absolute location (e.g.”leftmost”, ”closest” ). Certain object names (e.g. ”kleenex”, ”tissues”, ”cereal” ) appear morefrequently, as those are the objects that are most commonly ambiguous in OCID scenes, hence theyspawn a lot of expressions referring to them. Finally, colors and brand names appear also frequently,as they are the most common discriminating attribute between objects of the same category.Figure 5: Wordcloud of OCID-VLG V ocabularyThe number of unique concepts per concept type, as well as the total number including paraphrasesare presented in Table 5. Paraphrases include synonyms (e.g. ”Coca-Cola”, ”Coke” ) as well asdifferent phrasings of relations (e.g. ”left of”, ”to the left side of” ).Concept Num.Unique Num.TotalCategory 30 55Color 27 27Instance 31 93Relation 9 24Location 4 8Table 5: Number of concepts in OCID-VLGReferring expressions might use instance-level names, attributes, relations, locations or combina-tions of all the above to disambiguate objects. We study the frequency of referring expressions onthe OCID-VLG data splits in Table 6. Most frequent type is name (which includes a lot of varietyType Train Validation TestName 20678 3014 5809Attribute 2739 348 781Relation 20501 2792 5769Location 9306 1285 2672Mixed 9997 1230 2718Table 6: Number of referring expressions in OCID-VLG organized by typein concepts such as brand, flavor etc.) with pair-wise relations following closely. Spatial relationscan always refer to the target uniquely by querying for a relation to a neighbouring object. 
Location and mixed follow at about half the frequency, while color is last, as several objects in OCID share a color between different instances of the same category.

Figure 6: Full attribute catalog of OCID-VLG.
ID | Class | Label | Color | Material | Special
1 | apple | apple_1 | red | organic | -
2 | apple | apple_2 | green | organic | -
3 | ball | ball_1 | blue | plastic | -
4 | ball | ball_2 | yellow | plastic | rugby ball
5 | ball | ball_3 | red and white | plastic | polka ball, ball with spots, ball with dots
6 | banana | banana_1 | yellow | organic | -
7 | bell_pepper | bell_pepper_1 | red | organic | -
8 | binder | binder_1 | green | plastic | -
9 | bowl | bowl_1 | blue | ceramic | -
10 | cereal_box | cereal_box_1 | red | paper | Topas box, Topas cereal box, Topas corn flakes, Topas cereal
11 | cereal_box | cereal_box_3 | white and blue | paper | Mega Pack box, Mega Pack cereal box, Mega Pack cereal, Mega Pack corn flakes
12 | cereal_box | cereal_box_4 | brown | paper | Choco Krispies box, Choco Krispies cereal box, Choco Krispies corn flakes
13 | cereal_box | cereal_box_5 | green and red | paper | Chocos box, Chocos cereal box, Chocos corn flakes
14 | coffee_mug | coffee_mug_1 | black | ceramic | mug with evolution logo
15 | coffee_mug | coffee_mug_2 | white | ceramic | plain mug
16 | flashlight | flashlight_1 | black | metal | -
17 | food_bag | food_bag_2 | transparent | plastic | lentil bag, bag with lentils
18 | food_bag | food_bag_3 | red and white | plastic | pasta bag, penne bag, spaghetti bag, spaghetti penne bag, bag with pasta
19 | food_bag | food_bag_4 | white | plastic | rice bag, Langkorn rice bag, clever rice bag, bag with rice
20 | food_box | food_box_1 | dark blue | paper | Barilla box, tagliatelle, spaghetti box
21 | food_box | food_box_2 | yellow and green | paper | chocolate banana box, choco-bananas, box with choco-banana

B Qualitative Results

We visualize predicted masks and grasp poses from the implemented baselines and the proposed CROG model in Fig. 7. We include two examples per referring expression type for test scenes of the OCID-VLG dataset. Zero-shot baselines based on pretrained GR-ConvNet provide poor grasp proposals, while supervised baselines with CLIP (Det-Seg, SSG) are constrained by CLIP's ranking errors. Due to the segment-then-rank pipeline, spatial information about other objects is lost when considering only the mask of a single object; as a result, CLIP-based baselines struggle with grounding spatial relations. CROG is robust across referring expression types.

In Fig. 8, we visualize outputs of the CROG model during real-robot experiments. The plots include the predicted mask and grasp proposal, as well as the three decoded masks from CROG's grasp projectors (quality, angle, and width masks).
It should be noted that the corresponding input command is shown atop each image.

Figure 7: Qualitative results on OCID-VLG test scenes, comparing SAM+GLIP, SSG+CLIP, Det-Seg+CLIP, and CROG (ours) against the ground truth for each input image.
(a) Results for referring expressions by name (e.g., "Grab the food can product", "Pass the ball with spots").
(b) Results for referring expressions by attribute (e.g., "Grasp the blue ball", "Grab the red soda drink").
(c) Results for referring expressions by relation (e.g., "Pick the cereal box package that is to the rear right of the food box", "Pass the cereal box package that is right of the plain mug").
(d) Results for referring expressions by location (e.g., "Grasp the nearest shampoo product", "Grasp the left marker").
(e) Results for referring expressions by a mix of concepts (e.g., "Grasp the keyboard that is to the right of the brown kleenex box", "Pick the noodles on the rear left of the white and green marker").

Figure 8: Qualitative results in real robot experiments.
fNLBmtyBiC | A Bayesian approach to breaking things: efficiently predicting and repairing failure modes via sampling
Charles Dawson, Department of Aeronautics and Astronautics, MIT, United States, cbd@mit.edu
Chuchu Fan, Department of Aeronautics and Astronautics, MIT, United States, chuchu@mit.edu
Abstract: Before autonomous systems can be deployed in safety-critical applications, we must be able to understand and verify the safety of these systems. For cases where the risk or cost of real-world testing is prohibitive, we propose a simulation-based framework for a) predicting ways in which an autonomous system is likely to fail and b) automatically adjusting the system's design to preemptively mitigate those failures. We frame this problem through the lens of approximate Bayesian inference and use differentiable simulation for efficient failure case prediction and repair. We apply our approach on a range of robotics and control problems, including optimizing search patterns for robot swarms and reducing the severity of outages in power transmission networks. Compared to optimization-based falsification techniques, our method predicts a more diverse, representative set of failure modes, and we also find that our use of differentiable simulation yields solutions that have up to 10x lower cost and require up to 2x fewer iterations to converge relative to gradient-free techniques. Accompanying code and video can be found at https://mit-realm.github.io/breaking-things/.
Keywords: Automatic design tools, root-cause failure analysis, optimization-as-inference
1 Introduction
From power grids to transportation and logistics systems, autonomous systems play a central, and often safety-critical, role in modern life. Even as these systems grow more complex and ubiquitous, we have already observed failures in autonomous systems like autonomous vehicles and power networks resulting in the loss of human life [1]. Given this context, it is important that we be able to verify the safety of autonomous systems prior to deployment; for instance, by understanding the different ways in which a system might fail and proposing repair strategies.
Human designers often use their knowledge of likely failure modes to guide the design process; indeed, systematically assessing the risks of different failures and developing repair strategies is an important part of the systems engineering process [2]. However, as autonomous systems grow more complex, it becomes increasingly difficult for human engineers to manually predict likely failures.
In this paper, we propose an automated framework for predicting, and then repairing, failure modes in complex autonomous systems. Our effort builds on a large body of work on testing and verification of autonomous systems, much of which focuses on identifying failure modes or adversarial examples [3, 4, 5, 6, 7, 8], but we identify two major gaps in the state of the art. First, many existing methods [4, 5, 9, 7] use techniques like gradient descent to search locally for failure modes; however, in practice we are more interested in characterizing the distribution of potential failures, which requires a global perspective. Some methods exist that address this issue by taking a probabilistic approach to sample from an (unknown) distribution of failure modes [6, 10].
However, these methods suffer from a second major drawback: although they can help the designer predict a range of failure modes, they do not provide guidance on how those failure modes may be mitigated; they are also inefficient due to their use of gradient-free inference methods.

Figure 1: An overview of our method for predicting and repairing failure modes in autonomous systems, shown here handling connectivity failures in a drone swarm.

We address all of these drawbacks to develop a framework, shown in Fig. 1, for predicting and repairing failure modes in autonomous systems. Taking inspiration from inference-based methods [10, 6], we make three novel contributions:
1. We reframe the failure prediction problem as a probabilistic sampling problem, allowing us to avoid local minima and quickly find high likelihood, high severity failure modes.
2. We exploit the duality between failure prediction and repair to not only predict likely failure modes but also suggest low-cost repair strategies.
3. We employ automatic differentiation to take advantage of fast gradient-based sampling algorithms, substantially improving performance relative to the state of the art.
We demonstrate our approach on several large-scale robotics and control problems: swarm formation control with up to 10 agents, multi-robot search with up to 32 agents, and an electric power transmission network with up to 57 nodes and 80 transmission lines. We compare our approach with baselines for both failure mode prediction and repair, showing that our framework outperforms the state of the art and scales well beyond the capabilities of existing tools, converging to solutions that have up to 10x lower cost while requiring less than half as many iterations. We also demonstrate that the repair strategies developed using our approach can be deployed on hardware for the multi-robot search example, and a software implementation can be found at https://mit-realm.github.io/breaking-things/.
2 Related Work
Model-based verification Early approaches to model-based verification and fault identification used symbolic logical models of the system under test to formally reason about failures using (computationally expensive) satisfiability (SAT) solvers or search [11, 12]. More recent approaches to model-based failure mode identification have used mathematical models of the system dynamics to frame the problem as a reachability [13] or optimal control [14] problem. The challenge in applying these methods is that it may be difficult or impossible to construct a symbolic model for the system under test. In this work, we seek to retain the interpretability of model-based techniques while eliminating the requirement for a fully symbolic model, using automatically differentiable computer programs instead. Such models are comparatively easy to construct [8] and can even include implicitly differentiable components such as the solutions to optimization problems [15].
Adversarial testing Verification using adversarial optimization has been applied in both model-based [7, 5, 9] and model-free [3] contexts. Generally speaking, model-based adversarial techniques use gradient-based optimization to locally search for adversarial examples that cause a system failure, then use gradient-based optimization to locally repair those failures [7, 5]. The drawback of these methods is that they are inherently local and typically yield only a single adversarial counterexample.
Model-free approaches [3] can avoid the issue of local minima by using zero-order black-box optimization techniques, but incur additional computational cost as a result. In contrast, we take a probabilistic approach where sample-efficient gradient-based sampling algorithms can be used to escape local minima and efficiently generate multiple potential failure cases [16].
Inference Ours is not the first work to take a probabilistic approach to failure mode prediction. O'Kelly et al. develop an end-to-end verification pipeline for autonomous vehicles based on adaptive importance sampling [10], and Zhou et al. develop a failure mode prediction system based on gradient-free Markov chain Monte Carlo (MCMC) [6]. We take inspiration from these works and make two key improvements. First, these existing works focus exclusively on predicting likely failure modes and do not include a method for mitigating these failure modes once discovered, while we combine failure mode prediction with repair by recognizing the duality between these problems. Second, we use differentiable simulation to replace inefficient zero-order MCMC algorithms with fast gradient-based samplers, resulting in a substantial performance improvement.
There is also a complementary body of work on algorithms for rare-event simulation [17, 18] that provide extensions to MCMC-based sampling algorithms that perform well even when we seek to simulate extremely rare failure cases. Our framework is completely compatible with rare-event simulation strategies commonly used in sequential Monte Carlo (SMC), and we view the incorporation of these methods into our framework as a promising direction for future work.
3 Assumptions and Problem Statement
At the heart of our approach is a simulation model of the system under test, parameterized by two distinct sets of parameters. The design parameters x ∈ X ⊆ R^n are those parameters that the system designer may vary, while the exogenous parameters y ∈ Y ⊆ R^m are those that may vary uncontrollably (due to environmental variation, adversarial disturbance, the actions of other agents, etc.). Together, x and y define the behavior of the system ξ ∈ Ξ (e.g. a trace of all relevant states and observables) through the simulator function, denoted ξ = S(x, y). In addition, we assume that a cost function J(ξ) is known; i.e. J reflects the property that the system designer wishes to verify. A summary of our notation is provided in Table 1 in the appendix; we will use "designs" and "failure modes" interchangeably with "design parameters" and "exogenous parameters", respectively.
Assumption 1: S and J are programs that we can automatically differentiate almost everywhere. This setting is more general than the case when an explicit mathematical model is known, but less general than a black-box setting (although many black-box systems in robotics can be automatically differentiated [19]). Assumption 2: x and y are continuous random variables with known, automatically differentiable, and potentially unnormalized prior probability densities p_{x,0}(x) and p_{y,0}(y). It may be counter-intuitive to model the design parameters as random variables, but this choice allows us to capture constraints on the design space by assigning low probability to infeasible designs. The prior distribution for y can be either estimated from historical data or constructed to reflect constraints on the operational domain.
We restrict our focus to the continuous-parameter case, but our approach can be extended to handle mixed discrete parameters using block-resampling [20].
In this context, failure prediction entails finding exogenous parameters y* that, for some given x, lead to a high cost. To ensure that predicted failures are plausible, we must also find values for y* with high prior likelihood. To achieve this balance, we define the metric of risk-adjusted cost

J_r(x, y) = J ∘ S(x, y) + log p_{y,0}(y)   (1)

where ∘ denotes function composition. Failure prediction is thus the problem of finding parameters y* that lead to a high risk-adjusted cost; moreover, since it is likely that J_r will have multiple local maxima with respect to y (i.e. multiple likely failure modes), we wish to sample a set {y*_1, ..., y*_{n_y}} of such failures. To generate this set, we replace deterministic optimization y* = arg max_y J_r(x, y) with sampling from the unnormalized pseudo-posterior (in the sense defined in [21])

y* ∼ p(y* | x) ∝ p_{y,0}(y*) e^{J∘S(x, y*)}   (2)

Likewise, the failure repair problem seeks to find design parameters x* that both have high prior likelihood (thus respecting the designer's prior beliefs about the design space) and result in a low cost across a range of anticipated failure modes; i.e. sampling from the unnormalized pseudo-posterior

x* ∼ p(x* | y*_1, ..., y*_{n_y}) ∝ p_{x,0}(x*) e^{−Σ_i J∘S(x*, y*_i)/n_y}   (3)

4 Approach: Adversarial Inference
The primary challenge in sampling from these failure and repair distributions is that they will naturally shift as the design changes. Once the design has been updated to account for the current set of predicted failures, those failures will likely be out of date. To address this issue, we define a novel adversarial sampling method to alternate between sampling improved failure modes {y*_1, ..., y*_n} and then sampling improved design parameters x* to repair those failure modes, thus improving the robustness of the design while maintaining an up-to-date set of failure modes.
Our algorithm (detailed in Algorithm 1) proceeds in the style of a sequential Monte Carlo algorithm [18]. We begin by initializing n_y potential failure modes and n_x candidate designs sampled from their respective prior distributions. In each of K rounds, we first sample n_x new candidate designs from distribution (3) to repair the current set of predicted failure modes. We then select the design that performs best against all currently-predicted failures and sample n_y new sets of exogenous parameters (each representing a potential failure mode) from distribution (2). To sample from distributions (2) and (3), we use n_x and n_y parallel executions of a Markov chain Monte Carlo (MCMC) sampler. In order to handle potential multimodality in the design and failure space, we include optional tempering to interpolate between the prior and target distributions [18].
Our proposed adversarial inference algorithm can accept any MCMC sampling algorithm as a subroutine, either gradient-free or gradient-based. In our experiments, we compare the results of using both gradient-free (random-walk Metropolis-Hastings, or RMH) and gradient-based (Metropolis-adjusted Langevin algorithm, or MALA [22]) samplers; both are included in the appendix. Empirically, gradient-based samplers typically converge faster, particularly on high-dimensional problems, but in cases where a differentiable simulator is not available, a gradient-free sampler will suffice. We use MCMC for the sampling subroutine in all of our experiments, but our framework is also compatible with other approximate inference methods (e.g. variational inference).
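Before stating the procedure formally in Algorithm 1 below, the following minimal, runnable sketch illustrates the alternation on a toy one-dimensional problem. The simulator, cost, priors, population sizes, and step counts are illustrative stand-ins of our choosing; tempering and quenching are omitted, and the inner sampler is the gradient-free RMH routine of Algorithm 3 in the appendix (swapping in a gradient-based sampler such as MALA, Algorithm 2, is the only change needed to exploit a differentiable simulator).

```python
import numpy as np

rng = np.random.default_rng(0)

def S(x, y):
    return x + y  # toy simulator: behavior = design plus disturbance

def J(xi):
    # toy cost: failures are behaviors near xi = 1 (kept bounded so that the
    # pseudo-posteriors of Eqs. (2)-(3) remain proper under a Gaussian prior)
    return np.exp(-(xi - 1.0) ** 2)

def log_prior(z):
    return -0.5 * z ** 2  # standard Gaussian prior for both x and y

def rmh(z, log_p, steps=30, tau=0.05):
    # random-walk Metropolis-Hastings (the gradient-free subroutine, Alg. 3)
    for _ in range(steps):
        prop = z + np.sqrt(2 * tau) * rng.normal()
        if np.log(rng.uniform()) < log_p(prop) - log_p(z):
            z = prop
    return z

n_x, n_y, K = 4, 4, 20
xs = rng.normal(size=n_x)  # candidate designs  sampled from p_{x,0}
ys = rng.normal(size=n_y)  # candidate failures sampled from p_{y,0}

for _ in range(K):
    # repair step: sample designs from Eq. (3) against the current failures
    log_px = lambda x: log_prior(x) - np.mean([J(S(x, y)) for y in ys])
    xs = np.array([rmh(x, log_px) for x in xs])
    x_best = max(xs, key=log_px)
    # prediction step: sample failures from Eq. (2) against the best design
    log_py = lambda y: log_prior(y) + J(S(x_best, y))
    ys = np.array([rmh(y, log_py) for y in ys])

print("repaired design:", x_best, "\npredicted failures:", ys)
```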
Algorithm 1: Failure prediction and repair using gradient-based sampling
Input: Population sizes n_x, n_y; rounds K; substeps M; stepsize τ; tempering λ_1, ..., λ_K.
Output: Robust design x* and a set of failures {y*_1, ..., y*_{n_y}} with high risk-adjusted cost.
1 Initialize candidate designs [x]_0 = {x_1, ..., x_{n_x}}_0 sampled from p_{x,0}(x)
2 Initialize candidate failures [y]_0 = {y_1, ..., y_{n_y}}_0 sampled from p_{y,0}(y)
3 for i = 1, ..., K do
4   p_{x,i}(x) := p_{x,0}(x) e^{−(λ_i/n_y) Σ_{y∈[y]_{i−1}} J∘S(x,y)}
5   [x]_i ← Sample([x]_{i−1}, M, τ, p_{x,i})   ▷ Update candidate designs using predicted failures
6   p_{y,i}(y) := p_{y,0}(y) e^{λ_i min_{x∈[x]_i} J∘S(x,y)}   ▷ Update failure predictions for the new best design
7   [y]_i ← Sample([y]_{i−1}, M, τ, p_{y,i})
8 return [y]_K, x* = arg max_{x∈[x]_K} p_{x,K}(x)   ▷ Choose best design

On a theoretical level, any MCMC sampler will be sound so long as the resulting Markov chain is ergodic and satisfies detailed balance [23]. Unfortunately, there can be a large gap between asymptotic theoretical guarantees and practical performance. First, if the target distribution is multimodal and the modes are well-separated, then MCMC algorithms may be slow to move between modes, yielding a biased sampling distribution. To mitigate this effect, we include a tempering schedule 0 ≤ λ_1 ≤ ... ≤ λ_K ≤ 1 to interpolate between the prior and target distributions and run multiple MCMC instances in parallel from different initial conditions. Empirically, we find that tempering is not always needed, but we include it for completeness.
The second potential practical challenge arises from the continuity and differentiability (or lack thereof) of the simulator and cost function J ∘ S. Although gradient-based MCMC samplers like MALA remain sound so long as the target distribution is continuously differentiable almost everywhere (i.e. discontinuous or non-differentiable on a set of measure zero), in practice performance may suffer when the target distribution has large discontinuities. Because of these issues, we design our method to be compatible with either gradient-based or gradient-free sampling algorithms, and we compare the results of using both methods in Section 6.
The final practical consideration is that although the stochasticity in our sampling-based approach can help us explore the design and failure spaces, we incur a degree of sub-optimality as a result. When using gradient-based sampling, we have the option to reduce this sub-optimality by "quenching" the solution: switching to simple gradient descent (e.g. using MALA for the first 90 rounds and then gradient descent for the last 10 rounds). In practice, we find that quenching can noticeably improve the final cost without compromising the diversity of predicted failure modes.
5 Theoretical Analysis
Our prediction-and-repair framework can work with both gradient-free and gradient-based sampling subroutines, but it is important to note that gradients, when available, often accelerate convergence. To support this observation, we provide non-asymptotic convergence guarantees for the gradient-based version of our algorithm, drawing on recent results in Ma et al. [16].
To make these guarantees, first assume that J is L-Lipschitz smooth. Second, assume that the log prior distributions log p_{y,0} and log p_{x,0} are m-strongly convex outside a ball of finite radius R. The first assumption is hard to verify in general and does not hold in certain domains (e.g. rigid contact), but it is true for most of our experiments in Section 6. The second is easy to verify for common priors (e.g. Gaussian and smoothed uniform).
Let d = max(dim x, dim y) be the dimension of the search space and ε ∈ (0, 1) be a convergence tolerance in total variation (TV) distance.
Theorem 5.1. Consider Algorithm 1 with the stated assumptions on smoothness and log-concavity. If m > L and τ = Õ((d ln(L/(m−L)) + ln(1/ε))^{−1/2} d^{−1/2}), then sampling each round with TV error ≤ ε requires at most M ≤ Õ(d² ln(1/ε)) steps.
Since convergence time for each round of prediction and mitigation scales only polynomially with the dimension of the search space, our method is able to find more accurate failure predictions (and thus better design updates) than gradient-free methods with the same sample budget.
Proof sketch Ma et al. [16] show that gradient-based MCMC enjoys fast convergence on non-convex likelihoods so long as the target likelihood is strongly log-concave in its tails (i.e. outside of a bounded region). It would be unrealistic to assume that the cost J(x, y) is convex, but we can instead rely on the strong log-concavity of the prior to dominate sufficiently in the tails and regularize the cost landscape. A formal proof is included in the appendix.
6 Experimental Results
There are two questions that we must answer in this section: first, does reframing this problem from optimization to inference lead to better solutions (i.e. lower cost designs and predicted failures that accurately cover the range of possible failures)? Second, does gradient-based MCMC with differentiable simulation offer any benefits over gradient-free MCMC when used in our approach?
We benchmark our algorithm on a range of robotics and industrial control problems. We compare against previously-published adversarial optimization methods [7, 5] and compare the results of using gradient-based and gradient-free MCMC subroutines in our approach. We then provide a demonstration using our method to solve a multi-robot planning problem in hardware. The code used for our experiments can be found at https://mit-realm.github.io/breaking-things/.

Figure 2: Environments used in our experiments. (Left to right) Multi-agent search-evasion, formation control, power dispatch, aircraft ground collision avoidance, and manipulation by pushing.

Baselines We compare with the following baselines. DR: solving the design optimization problem with domain randomization, min_x E_y[J_r(x, y)]. GD: solving the adversarial optimization problem min_x max_y J_r(x, y) by alternating between optimizing a population of n_x designs and n_y failure modes using local gradient descent, as in [7, 5, 24]. We also include two versions of our method, using both gradient-free (RMH) and gradient-based (MALA) MCMC subroutines. All methods are given the same information about the value and gradient (when needed) of the cost and prior likelihoods. The gradient-free version of our approach implements quenching for the last few rounds.
Environments We use five environments for our simulation studies, which are shown in Fig. 2 and described more fully in the appendix. Multi-robot search: a set of seeker robots must cover a search region to detect a set of hiders. x and y define trajectories for the seekers and hiders, respectively; failure occurs if any of the hiders escape detection. This environment has small (6 seekers vs. 10 hiders, dim x = 60, dim y = 100) and large (12 seekers vs. 20 hiders, dim x = 120, dim y = 200) versions. Formation control: a swarm of drones fly to a goal while maintaining full connectivity with a limited communication radius. x defines trajectories for each robot in the swarm, while y parameterizes an uncertain wind velocity field.
Failure occurs when the second eigenvalue of the graph Laplacian is close to zero. This environment has small (5 agents, dim x = 30, dim y = 1280) and large (10 agents, dim x = 100, dim y = 1280) versions. Power grid dispatch: electric generators must be scheduled to ensure that the network satisfies voltage and maximum power constraints in the event of transmission line outages. x specifies generator setpoints and y specifies line admittances; failures occur when any of the voltage or power constraints are violated. This environment has small (14-bus, dim x = 32, dim y = 20) and large (57-bus, dim x = 98, dim y = 80) versions. F16 GCAS: a ground collision avoidance system (GCAS) must be designed to prevent a jet aircraft, modeled with aerodynamic effects and engine dynamics, from crashing into the ground. x defines a control policy neural network (dim x ≈ 1.8×10³) and y defines the initial conditions (dim y = 5). Pushing: a robot manipulator must push an object out of the way to reach another object. Failure occurs if the object is knocked over while pushing. x defines a planning policy network (dim x ≈ 1.2×10³) and y defines the unknown inertial and frictional properties of the object being pushed, as well as measurement noises (dim y = 7). We implement our method and all baselines in Python using JAX. All methods were run with the same population sizes and total sample budget, using the hyperparameters given in the appendix.
Solution quality For each environment, we first solve for an optimized design and a set of predicted failure modes using each method. We then compare the performance of the optimal design on the predicted failure modes with the performance observed on a large test set of 10⁵ randomly sampled exogenous parameters. The results of this experiment are shown in Fig. 3.
We find that both DR and GD often fail to predict failure modes that accurately cover the tail of worst-case behaviors: in the formation and power grid examples, both DR and GD falsely indicate that all predicted failures have been successfully repaired, despite a long tail of possible failures in both cases. In the search example, adversarial GD is able to predict a set of useful failure modes, but DR fails to do so. Only our method (with both gradient-free and gradient-based MCMC) accurately predicts the worst-case performance of the optimized design.

Figure 3: A comparison of the cost of the optimal design on the predicted failure modes (red) and 10⁵ randomly sampled test cases (blue). Panels: (a) Formation, 5 agents; (b) Search, 6 vs. 10; (c) Power grid, 14-bus; (d) F16 GCAS; (e) Formation, 10 agents; (f) Search, 12 vs. 20; (g) Power grid, 57-bus; (h) Pushing.

Figure 4: Convergence rates of gradient-based (orange) and gradient-free (blue) MCMC samplers when used as subroutines for Algorithm 1. Shaded areas show the min/max range over 4 random seeds. Panels as in Figure 3.

In addition to comparing the quality of the predicted failure modes, we can also compare the performance and robustness of the optimized design itself. On the search problem, our method finds designs with slightly improved performance relative to GD (but not relative to DR, since DR is not optimizing against a challenging set of predicted failure modes). On the formation problem, our method is able to find substantially higher-performance designs than either baseline method.
On the power grid problem, our method finds designs that incur a higher best-case cost, since this problem includes a tradeoff between economic cost and robustness, but our method's designs are substantially more robust than the baselines, with much lighter tails in the cost distribution.
We observe that DR sometimes finds solutions that achieve lower average cost than those found by our method. We believe that this is due to DR optimizing against a less challenging failure population. This suggests the possibility of combining a failure dataset (predicted using our method) with an average-case dataset (sampled randomly from the prior) during repair; we hope to explore this and other adaptive strategies in future work.
Benefits of differentiable simulation Although we have designed our method to be compatible with either gradient-based or gradient-free MCMC subroutines, we observe that gradient-based samplers tend to converge faster than their gradient-free counterparts. Fig. 4 plots the performance of the best-performing design at each round against a static test set of 100 randomly sampled exogenous parameters for both gradient-based and gradient-free methods across all environments. Although these methods perform similarly on the formation problem, we see a clear pattern in the formation control, search-evasion, and power grid examples where gradient-based MCMC converges much faster, and this advantage is greater on higher-dimensional problems (second row of Fig. 4), compensating for the additional time needed to compute the gradients (typically a 2-3x increase in runtime).
Hardware experiments We deploy the optimized hider and seeker trajectories in hardware using the Robotarium multi-robot platform [25] (we use 3 seekers and 5 hiders, since we had difficulty testing with more agents in the limited space). We first hold the search pattern (design parameters) constant and optimize evasion patterns against this fixed search pattern, yielding the results shown on the left in Fig. 5, where the hiders easily evade the seekers. We then optimize the search patterns using our approach, yielding the results in the center, where the hiders are not able to evade the seekers.
We also deploy an optimized policy for the pushing problem to a Franka Research 3 7-DoF robot arm. Fig. 5 shows a failure when the unoptimized policy fails to account for the uncertain center of mass of the bottle, as well as a successful execution with the repaired policy. Videos of all experiments are provided in the supplementary materials.

Figure 5: (Left) HW results for search-evasion with 5 hiders and 3 seekers, showing an initial search pattern (blue) and predicted failure modes (red). (Center) HW results for an optimized search pattern, which leaves fewer hiding places. (Right, top) An initial manipulation policy knocks over the object. (Right, bottom) The repaired manipulation policy pushes without knocking the bottle over.

7 Discussion and Conclusion
Before sending any autonomous system out into the real world, it is important to understand how it will behave in a range of operational conditions, including during potential failures. In this paper, we have presented a tool to allow the designers of autonomous systems to not only predict the ways in which a system is likely to fail but also automatically adjust their designs to mitigate those failures. We apply our framework in simulation studies to a range of robotics and industrial control problems, including multi-robot trajectory planning and power grid control.
Our results show that, relative to existing adversarial optimization methods, our novel sampling-based approach yields better predictions of possible failure modes, which in turn lead to more robust optimized designs. We also show empirically that, when it is possible to define a differentiable simulator, gradient-based MCMC methods allow our method to converge more than twice as fast as gradient-free methods.
7.1 Limitations
Since it would be prohibitively costly to search for failure cases in hardware experiments (especially if failures resulted in damage to the robot), our method is restricted to searching for failures in simulation. As such, it is limited to predicting only failures that are modeled by the simulator, excluding failures that could arise due to unmodeled effects. Practically, our method could be used in conjunction with hardware testing by catching some failures earlier in the development process and reducing the cost of eventual hardware testing.
A notable limitation of our approach is that it requires knowledge of the prior distribution of the exogenous disturbances y. Although this can be estimated in some cases (as in our experiments), in practice there may be uncertainty about the nature of this distribution. To address this, future works might investigate distributionally robust extensions of Algorithm 1 (akin to distributionally robust optimization methods [26]). Additional limitations are discussed in the appendix.
Acknowledgments
C. Dawson is supported by the NSF GRFP under Grant No. 1745302. This work was partly supported by the National Aeronautics and Space Administration (NASA) ULI grant 80NSSC22M0070, Air Force Office of Scientific Research (AFOSR) grant FA9550-23-1-0099, and the Defense Science and Technology Agency in Singapore. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors.
References
[1] University of Texas at Austin. The Timeline and Events of the February 2021 Texas Electric Grid Blackouts. Technical report, University of Texas at Austin, July 2021.
[2] R. Shishko. NASA Systems Engineering Handbook. Number 6105 in NASA SP. National Aeronautics and Space Administration, Washington, D.C., 1995.
[3] A. Corso, R. Moss, M. Koren, R. Lee, and M. Kochenderfer. A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems. Journal of Artificial Intelligence Research, 72:377–428, Oct. 2021. ISSN 1076-9757. doi:10.1613/jair.1.12716.
[4] A. Corso and M. J. Kochenderfer. Interpretable Safety Validation for Autonomous Vehicles. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pages 1–6, Sept. 2020. doi:10.1109/ITSC45102.2020.9294490.
[5] P. Donti, A. Agarwal, N. V. Bedmutha, L. Pileggi, and J. Z. Kolter. Adversarially robust learning for security-constrained optimal power flow. In Advances in Neural Information Processing Systems, volume 34, pages 28677–28689. Curran Associates, Inc., 2021.
[6] Y. Zhou, S. Booth, N. Figueroa, and J. Shah. RoCUS: Robot Controller Understanding via Sampling. In 5th Annual Conference on Robot Learning, Nov. 2021.
[7] C. Dawson and C. Fan. Robust Counterexample-guided Optimization for Planning from Differentiable Temporal Logic. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 7205–7212, Oct. 2022. doi:10.1109/IROS47612.2022.9981382.
[8] C. Dawson and C. Fan. Certifiable Robot Design Optimization using Differentiable Programming.
In Robotics: Science and Systems XVIII, volume 18, June 2022. ISBN 978-0-9923747-8-5.
[9] S. Yaghoubi and G. Fainekos. Gray-box Adversarial Testing for Control Systems with Machine Learning Components. HSCC 2019 - Proceedings of the 2019 22nd ACM International Conference on Hybrid Systems: Computation and Control, pages 179–184, Dec. 2018. doi:10.48550/arxiv.1812.11958.
[10] M. O'Kelly, A. Sinha, H. Namkoong, R. Tedrake, and J. C. Duchi. Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[11] J. de Kleer and B. C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32(1):97–130, Apr. 1987. ISSN 0004-3702. doi:10.1016/0004-3702(87)90063-4.
[12] D. Benard, G. A. Dorais, E. Gamble, B. Kanefsky, J. Kurien, W. Millar, N. Muscettola, P. Nayak, N. Rouquette, K. Rajan, and P. Norvig. Remote Agent Experiment. Jan. 2000.
[13] Y. Annpureddy, C. Liu, G. Fainekos, and S. Sankaranarayanan. S-TaLiRo: A tool for temporal logic falsification for hybrid systems. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 6605 LNCS:254–257, 2011. ISSN 0302-9743. doi:10.1007/978-3-642-19835-9_21.
[14] G. Chou, Y. E. Sahin, L. Yang, K. J. Rutledge, P. Nilsson, and N. Ozay. Using control synthesis to generate corner cases: A case study on autonomous driving. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 37(11):2906–2917, Nov. 2018. ISSN 0278-0070. doi:10.1109/TCAD.2018.2858464.
[15] B. Amos and J. Z. Kolter. OptNet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 136–145, Sydney, NSW, Australia, Aug. 2017. JMLR.org.
[16] Y. A. Ma, Y. Chen, C. Jin, N. Flammarion, and M. I. Jordan. Sampling can be faster than optimization. Proceedings of the National Academy of Sciences of the United States of America, 116(42):20881–20885, Oct. 2019. ISSN 1091-6490. doi:10.1073/PNAS.1820003116.
[17] G. Rubino and B. Tuffin. Introduction to Rare Event Simulation. Rare Event Simulation using Monte Carlo Methods, pages 1–13, Jan. 2009. doi:10.1002/9780470745403.CH1.
[18] N. Chopin and O. Papaspiliopoulos. An Introduction to Sequential Monte Carlo. Springer International Publishing, Cham, 2020. ISBN 978-3-030-47844-5.
[19] T. Howell, S. Le Cleac'h, Z. Kolter, M. Schwager, and Z. Manchester. Dojo: A differentiable simulator for robotics. arXiv preprint arXiv:2203.00806, 2022.
[20] D. van Ravenzwaaij, P. Cassey, and S. D. Brown. A simple introduction to Markov Chain Monte-Carlo sampling. Psychonomic Bulletin & Review, 25(1):143–154, Feb. 2018. ISSN 1531-5320. doi:10.3758/s13423-016-1015-8.
[21] P. Alquier, J. Ridgway, and N. Chopin. On the properties of variational approximations of Gibbs posteriors. Journal of Machine Learning Research, 17(236):1–41, 2016. ISSN 1533-7928.
[22] G. O. Roberts and O. Stramer. Langevin Diffusions and Metropolis-Hastings Algorithms. Methodology And Computing In Applied Probability, 4(4):337–357, Dec. 2002. ISSN 1573-7713. doi:10.1023/A:1023562417138.
[23] C. Geyer. Introduction to Markov Chain Monte Carlo. Chapman and Hall/CRC, May 2011. ISBN 978-0-429-13850-8. doi:10.1201/b10905-6.
[24] H. Xu, Y. Ma, H.-C. Liu, D. Deb, H. Liu, J.-L. Tang, and A. K. Jain. Adversarial Attacks and Defenses in Images, Graphs and Text: A Review.
International Journal of Automation and Computing, 17(2):151–178, Apr. 2020. ISSN 1751-8520. doi:10.1007/s11633-019-1211-x.
[25] S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt. The Robotarium: Globally Impactful Opportunities, Challenges, and Lessons Learned in Remote-Access, Distributed Control of Multirobot Systems. IEEE Control Systems Magazine, 40(1):26–44, Feb. 2020. ISSN 1941-000X. doi:10.1109/MCS.2019.2949973.
[26] H. Rahimian and S. Mehrotra. Distributionally Robust Optimization: A Review. Open Journal of Mathematical Optimization, 3:1–85, July 2022. ISSN 2777-5860. doi:10.5802/ojmo.15.
[27] M. B. Cain, R. P. O'Neill, and A. Castillo. History of Optimal Power Flow and Formulations. Technical report, Federal Energy Regulatory Commission, 2012.
[28] F. Capitanescu, J. L. Martinez Ramos, P. Panciatici, D. Kirschen, A. Marano Marcolini, L. Platbrood, and L. Wehenkel. State-of-the-art, challenges, and future trends in security constrained optimal power flow. Electric Power Systems Research, 81(8):1731–1741, Aug. 2011. ISSN 0378-7796. doi:10.1016/j.epsr.2011.04.003.
[29] U.S. Department of Energy. Grid Optimization Competition. https://gocompetition.energy.gov/.
[30] P. L. Donti, D. Rolnick, and J. Z. Kolter. DC3: A learning method for optimization with hard constraints, Apr. 2021.
[31] R. D. Zimmerman, C. E. Murillo-Sánchez, and R. J. Thomas. MATPOWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education. IEEE Transactions on Power Systems, 26(1):12–19, Feb. 2011. ISSN 1558-0679. doi:10.1109/TPWRS.2010.2051168.
[32] P. Heidlauf, A. Collins, M. Bolender, and S. Bak. Verification Challenges in F-16 Ground Collision Avoidance and Other Automated Maneuvers. In EPiC Series in Computing, volume 54, pages 208–217. EasyChair, Sept. 2018. doi:10.29007/91x9.
[33] O. So and C. Fan. Solving Stabilize-Avoid via Epigraph Form Optimal Control using Deep Reinforcement Learning. In Robotics: Science and Systems XIX, volume 19, July 2023. ISBN 978-0-9923747-9-2.

Summary of notation
Table 1 provides a summary of the notation used in this paper.

Table 1: Summary of notation
x ∈ X ⊆ R^{d_x}: Design parameters (controlled by the system designer)
y ∈ Y ⊆ R^{d_y}: Exogenous parameters (not controlled by the designer)
ξ ∈ Ξ ⊆ R^{d_ξ}: Behavior of a system (e.g. the simulation trace)
S: X × Y → Ξ: Simulator model of the system's behavior given design and exogenous parameters
J: Ξ → R: Cost function
J_r: X × Y → R: Risk-adjusted cost function
p_{x,0}(x), p_{y,0}(y): Prior probability distributions for design and exogenous parameters

Sampling algorithms
Algorithm 1 relies on an MCMC subroutine for sampling from probability distributions given a non-normalized likelihood. Algorithms 2 and 3 provide examples of gradient-based (Metropolis-adjusted Langevin, or MALA) and gradient-free (random-walk Metropolis-Hastings, or RMH) samplers, respectively.

Algorithm 2: Metropolis-adjusted Langevin algorithm (MALA, [16, 22])
Input: Initial x_0, steps K, stepsize τ, density p(x).
Output: A sample drawn from p(x).
1 for i = 1, ..., K do
2   Sample η ∼ N(0, 2τI)   ▷ Gaussian noise
3   x_{i+1} ← x_i + τ∇log p(x_i) + η   ▷ Propose next state
4   P_accept ← [p(x_{i+1}) e^{−‖x_i − x_{i+1} − τ∇log p(x_{i+1})‖²/(4τ)}] / [p(x_i) e^{−‖x_{i+1} − x_i − τ∇log p(x_i)‖²/(4τ)}]
5   With probability 1 − min(1, P_accept):
6     x_{i+1} ← x_i   ▷ Accept/reject proposal
7 return x_K
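A direct NumPy rendering of Algorithm 2 might look as follows; the function name, default arguments, and the toy Gaussian target in the usage line are our own illustrative choices, and grad_log_p is assumed to be supplied by the user (for instance via automatic differentiation, as in the paper's JAX-based experiments).

```python
import numpy as np

def mala(x0, log_p, grad_log_p, steps=1000, tau=1e-2, seed=0):
    """Metropolis-adjusted Langevin algorithm, following Algorithm 2."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Langevin proposal: drift up the log-density plus Gaussian noise
        prop = x + tau * grad_log_p(x) + np.sqrt(2 * tau) * rng.normal(size=x.shape)
        # log of the acceptance ratio from line 4 of Algorithm 2
        log_num = log_p(prop) - np.sum((x - prop - tau * grad_log_p(prop)) ** 2) / (4 * tau)
        log_den = log_p(x) - np.sum((prop - x - tau * grad_log_p(x)) ** 2) / (4 * tau)
        if np.log(rng.uniform()) < log_num - log_den:
            x = prop  # accept; otherwise keep the current state
    return x

# usage: draw an (approximate) sample from a 2-D standard Gaussian
sample = mala(np.zeros(2), lambda x: -0.5 * x @ x, lambda x: -x)
```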
Algorithm 3: Random-walk Metropolis-Hastings (RMH, [23])
Input: Initial x_0, steps K, stepsize τ, density p(x).
Output: A sample drawn from p(x).
1 for i = 1, ..., K do
2   Sample η ∼ N(0, 2τI)   ▷ Gaussian noise
3   x_{i+1} ← x_i + η   ▷ Propose next state
4   P_accept ← [p(x_{i+1}) e^{−‖x_i − x_{i+1}‖²/(4τ)}] / [p(x_i) e^{−‖x_{i+1} − x_i‖²/(4τ)}]
5   With probability 1 − min(1, P_accept):
6     x_{i+1} ← x_i   ▷ Accept/reject proposal
7 return x_K

Proof of Theorem 5.1
We will show the proof for sampling from the failure distribution with likelihood given by Eq. (2); the proof for repair follows similarly. The log-likelihood for the failure distribution is

log p_{y,0}(y) + J(x, y)   (4)

The authors of [16] show that MALA enjoys the convergence guarantees in Theorem 5.1 so long as the target log-likelihood is strongly convex outside of a ball of finite radius R (see Theorem 1 in [16]). More precisely, Ma et al. [16] give the bound

M ≤ O( e^{40LR²} · L^{3/2}/(m−L)^{5/2} · d^{1/2} · (d ln(L/(m−L)) + ln(1/ε))^{3/2} )

when the step size is τ = O( e^{−8LR²} (m−L)^{1/2} L^{−3/2} (d ln(L/(m−L)) + ln(1/ε))^{−1/2} d^{−1/2} ).
Since log p_{y,0}(y) is assumed to be m-strongly convex, it is sufficient to show that as ‖y‖ → ∞, the strong convexity of the log-prior dominates the non-convexity in J(x, y).
For convenience, denote f(y) = J(x, y) and g(y) = log p_{y,0}(y). We must first show that f(y) + g(y) is (m−L)-strongly convex, for which it suffices to show that f(y) + g(y) − (m−L)/2 ‖y‖² is convex. Note that

f(y) + g(y) − (m−L)/2 ‖y‖² = (f(y) + L/2 ‖y‖²) + (g(y) − m/2 ‖y‖²)   (5)

The term g(y) − m/2 ‖y‖² is convex by the m-strong convexity of g, so we must show that the remaining term, f(y) + L/2 ‖y‖², is convex. Note that the Hessian of this term is ∇²f(y) + LI. Since we have assumed that J is L-Lipschitz smooth (i.e. its gradients are L-Lipschitz continuous), it follows that the magnitudes of the eigenvalues of ∇²f are bounded by L, which is sufficient for ∇²f(y) + LI to be positive semi-definite, completing the proof.

AC Power Flow Problem Definition
The design parameters x = (P_g, |V|_g, P_l, Q_l) include the real power injection P_g and AC voltage amplitude |V|_g at each generator in the network and the real and reactive power draws P_l, Q_l at each load; all of these parameters are subject to minimum and maximum bounds that we model using a uniform prior distribution p_{x,0}. The exogenous parameters are the states y_i ∈ R of the transmission lines in the network; the admittance of each line is given by σ(y_i) Y_{i,nom}, where σ is the sigmoid function and Y_{i,nom} is the nominal admittance of the line. The prior distribution p_{y,0} is an independent Gaussian for each line with a mean chosen so that ∫_{−∞}^{0} p_{y_i,0}(y_i) dy_i is equal to the likelihood of any individual line failing (e.g. as specified by the manufacturer; we use 0.05 in our experiments). The simulator S solves the nonlinear AC power flow equations [27] to determine the state of the network, and the cost function combines the economic cost of generation c_g (a quadratic function of P_g, P_l, Q_l) with the total violation of constraints on generator capacities, load requirements, and voltage amplitudes:

J = c_g + v(P_g, P_{g,min}, P_{g,max}) + v(Q_g, Q_{g,min}, Q_{g,max}) + v(P_l, P_{l,min}, P_{l,max}) + v(Q_l, Q_{l,min}, Q_{l,max}) + v(|V|, |V|_min, |V|_max)   (6)-(8)

where v(x, x_min, x_max) = L([x − x_max]_+ + [x_min − x]_+), L is a penalty coefficient (L = 100 in our experiments), and [·]_+ = max(·, 0) is a hinge loss.
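A minimal sketch of this penalty (the function name and the elementwise-sum convention for vector-valued arguments are our assumptions):

```python
import numpy as np

def hinge_violation(x, x_min, x_max, L=100.0):
    """Constraint-violation penalty v(x, x_min, x_max) from the SCOPF cost:
    L * ([x - x_max]_+ + [x_min - x]_+), applied elementwise and summed."""
    hinge = lambda z: np.maximum(z, 0.0)
    return L * np.sum(hinge(x - x_max) + hinge(x_min - x))
```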
Efficient solutions to SCOPF are the subject of active research [28] and an ongoing competition run by the U.S. Department of Energy [29]. In addition to its potential economic and environmental impact [27], SCOPF is also a useful benchmark problem for 3 reasons: 1) it is highly non-convex, 2) it has a large space of possible failures, and 3) it can be applied to networks of different sizes to test an algorithm's scalability. We conduct our studies on one network with 14 nodes and 20 transmission lines (32 design parameters and 20 exogenous parameters) and one with 57 nodes and 80 lines (98 design parameters, 80 exogenous parameters).
The simulator S solves the nonlinear AC power flow equations [5, 30] for the AC voltage amplitudes and phase angles (|V|, θ) and the net real and reactive power injections (P, Q) at each bus (the behavior ξ is the concatenation of these values). We follow the 2-step method described in [30], where we first solve for the voltages and voltage angles at all buses by solving a system of nonlinear equations and then compute the reactive power injection from each generator and the power injection from the slack bus (representing the connection to the rest of the grid). The cost function J is a combination of the generation cost implied by P_g and a hinge loss penalty for violating constraints on acceptable voltages at each bus or exceeding the power generation limits of any generator, as specified in Eq. 8. The data for each test case (minimum and maximum voltage and power limits, demand characteristics, generator costs, etc.) are loaded from the data files included in the MATPOWER software [31]. This experiment can be run with the solve_scacopf.py script in the experiments/power_systems directory.

Search-Evasion Problem Definition
This problem includes n_seek seeker robots and n_hide hider robots. Each robot is modeled using single-integrator dynamics and tracks a pre-planned trajectory using a proportional controller with saturation at a maximum speed chosen to match that of the Robotarium platform [25]. The trajectory x_i(t) for each robot is represented as a Bezier curve with 5 control points x_{i,j},

x_i(t) = Σ_{j=0}^{4} C(4, j) (1−t)^{4−j} t^j x_{i,j}

where C(4, j) denotes the binomial coefficient. The design parameters are the 2D positions of the control points for the trajectories of the seeker robots, while the exogenous parameters are the control points for the hider robots. The prior distribution for each set of parameters is uniform over the width and height of the Robotarium arena (3.2 m × 2 m).
We simulate the behavior of the robots tracking these trajectories for 100 s with a discrete time step of 0.1 s (including the effects of velocity saturation that are observed on the physical platform), and the cost function is

J = Σ_{i=1}^{n_hide} softmin_{t=t_0,...,t_n} ( softmin_{j=1,...,n_seek} ‖p_{hide,i}(t) − p_{seek,j}(t)‖ − r )

where r is the sensing range of the seekers (0.5 m for the n_seek = 2 case and 0.25 m for the n_seek = 3 case); softmin(·) = −(1/b) logsumexp(−b ·) is a smooth relaxation of the element-wise minimum function, where b controls the degree of smoothing (b = 100 in our experiments); t_0, ..., t_n are the discrete time steps of the simulation; and p_{hide,i}(t) and p_{seek,j}(t) are the (x, y) positions of the i-th hider and j-th seeker robot at time t, respectively. In plain language, this cost is equal to the sum of the minimum distance observed between each hider and the closest seeker over the course of the simulation, adjusted for each seeker's search radius. This experiment can be run with the solve_hide_and_seek.py script in the experiments/hide_and_seek directory.
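A minimal routine for evaluating such a degree-4 Bezier trajectory (the function and argument names are our own illustrative choices):

```python
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate x_i(t) = sum_j C(4, j) (1 - t)^(4 - j) t^j x_{i,j}.
    control_points: (5, 2) array of 2-D control points; t: (T,) times in [0, 1]."""
    n = len(control_points) - 1  # degree 4 for 5 control points
    basis = np.stack([comb(n, j) * (1 - t) ** (n - j) * t ** j for j in range(n + 1)])
    return basis.T @ control_points  # (T, 2) positions along the trajectory
```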
Formation Control Problem Definition
This problem includes n drones modeled using double-integrator dynamics, each tracking a pre-planned path using a proportional-derivative controller. The path for each drone is represented as a Bezier curve, as in the search-evasion problem.
The design parameters are the 2D positions of the control points for the trajectories, while the exogenous parameters include the parameters of a wind field and the connection strengths between each pair of drones. The wind field is modeled using a 3-layer fully-connected neural network with tanh saturation at a maximum speed that induces 0.5 N of drag force on each drone.
We simulate the behavior of the robots tracking these trajectories for 30 s with a discrete time step of 0.05 s, and the cost function is

J = 10 ‖COM_T − COM_goal‖ + max_t 1/(λ_2(q_t) + 10^{−2})

where COM indicates the center of mass of the formation and λ_2(q_t) is the second eigenvalue of the Laplacian of the drone network in configuration q_t. The Laplacian L = D − A is defined in terms of the adjacency matrix A = {a_ij}, where a_ij = s_ij σ_20(R² − d²_ij), d_ij is the distance between drones i and j, R is the communication radius, and s_ij is the connection strength (an exogenous parameter) between the two drones. The degree matrix D is a diagonal matrix where each entry is the sum of the corresponding row of A. This experiment can be run with the solve.py script in the experiments/formation2d directory.
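The connectivity term can be computed as in the following sketch; we read σ_20 as a sigmoid with slope 20, which is an assumption on our part, as are the function and argument names.

```python
import numpy as np

def lambda2(positions, strengths, R, slope=20.0):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A.
    positions: (n, 2) drone positions in configuration q_t;
    strengths: (n, n) symmetric connection strengths s_ij."""
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    A = strengths / (1.0 + np.exp(-slope * (R ** 2 - d2)))  # a_ij = s_ij * sigmoid(...)
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]
```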
F16 GCAS Problem Definition
This problem is based on the ground collision avoidance system (GCAS) verification problem originally posed in [32], where the challenge is to design a controller for an F16 jet aircraft that avoids collision with the ground when starting from a range of initial conditions. We use the JAX implementation [33] of the original F16 dynamics model published in [32]. This model has 15 states and 4 control inputs, and it includes a nonlinear engine model and an approximate aerodynamics model. The original model was published with a reference GCAS controller that is not able to maintain safety over the desired range of initial conditions (given in Table 4 in [32]). We supplement this reference controller with a neural network controller that only activates below a specified altitude threshold; the parameters of this network and the value of the altitude threshold represent our design parameters (approximately 1,800 total parameters, with a uniform prior over these parameters). The exogenous parameters include the initial altitude, roll, pitch, roll rate, and pitch rate (h, φ, θ, p, q), drawn from Gaussian distributions:

h ∼ N(1500, 200) ft
φ ∼ N(0, π/8)
θ ∼ N(−π/5, π/8)
p ∼ N(0, π/8)
q ∼ N(0, π/8)

The cost function is

J = [200 − min h]_+ / 10 + (1/T) Σ_{t=1,...,T} [ (φ/π)² + (θ/π)² + (α/π)² + (β/π)² ]

where h is altitude in feet, φ is roll, θ is pitch, α is angle of attack, β is sideslip angle, [·]_+ is the exponential linear unit, and min is a soft log-sum-exp minimum. Empirically, J ≥ 15 indicates that the aircraft has crashed (h = 0). The simulation is run for 15 s with timestep 0.01 s but stopped early if the aircraft crashes or leaves the flight envelope where the aerodynamic model is accurate (−10° ≤ α ≤ 45° and |β| ≤ 30°).

Pushing Problem Definition
In this problem, we model the task of pushing an obstructing object out of the way so that the robot can grasp another object. The robot receives noisy observations of the height, radius, and position of the obstructing object and uses this information to plan a pushing action (push height and force) that can move the object to the side. The robot must ensure, without knowing the frictional or inertial properties of the object, that its push is just forceful enough to move the object without knocking it over.
The design parameters include the 1,200 parameters of a neural network used to predict push height and force given noisy observations of the object. The exogenous parameters include the true height h, true radius r, mass m, center-of-mass height h_com, and friction coefficient μ between the object and the ground, plus the noisy observations ĥ and r̂ of the height and radius. We use a uniform prior over the design parameters and the following priors for the exogenous parameters:

h ∼ U(0.1, 0.2) m
r ∼ U(0.1, 0.25) m
m ∼ U(0.1, 1.0) kg
μ ∼ U(0.1, 1.0)
h_com ∼ U(0.1h, 0.9h)
ĥ ∼ N(h, 0.1)
r̂ ∼ N(r, 0.1)

The cost function is

J = [Στ]_+ + [1 − ΣF]_+

where [·]_+ is the ReLU function, Στ is the net moment applied to the object about its tipping point (defined as positive in the direction of tipping, so that negative moments are stable), and ΣF is the net force in the pushing direction.
The simulator that we use in this example is fairly simple. It models the object as a cylinder resting on a flat plane, and we make the assumption that a successful push (i.e. no tipping) is quasi-static. With this assumption, we can detect a successful push by computing the net force and moment applied to the object (including static friction). The object will tip (failure) if the net moment about the edge of the cylinder is positive, and the object will not move (also failure) if the net force is zero. The point of this experiment is to show how even a simplified model can predict and repair certain failures, and that these repairs can transfer to hardware. There are certainly failures that this simulator will not catch (e.g. surface irregularities in the table might catch the edge of the object and cause it to flip), and there are false positives that our simulator will detect (e.g. a near-failure where the transition from static to dynamic friction reduces the net moment and prevents tipping), but we hope that this shows that our method can still be useful with a low-fidelity model.
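A minimal sketch of this cost (names are ours; we read the constant 1 in the second term as a fixed force threshold in the simulator's units, which is our assumption):

```python
import numpy as np

def pushing_cost(net_moment, net_force):
    """J = [net_moment]_+ + [1 - net_force]_+ : the first term penalizes a
    positive net moment about the tipping edge (the object tips over), the
    second penalizes a net pushing force below the unit threshold (the
    object does not move)."""
    relu = lambda z: np.maximum(z, 0.0)
    return relu(net_moment) + relu(1.0 - net_force)
```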
Hyperparameters
Table 2 includes the hyperparameters used for each environment.

Table 2: Hyperparameters used for each environment. n_q is the number of quenching rounds; i denotes the round number in Alg. 1.
Environment | n_x | n_y | τ | K | M | n_q | λ
Formation (5 agents) | 5 | 5 | 10^−3 | 50 | 5 | 5 | e^−5i
Formation (10 agents) | 5 | 5 | 10^−3 | 50 | 5 | 5 | e^−5i
Search-evasion (6 seekers, 10 hiders) | 10 | 10 | 10^−2 | 100 | 10 | 25 | e^−5i
Search-evasion (12 seekers, 20 hiders) | 10 | 10 | 10^−2 | 100 | 10 | 25 | e^−5i
Power grid (14-bus) | 10 | 10 | 10^−6 for x, 10^−2 for y | 100 | 10 | 10 | e^−5i
Power grid (57-bus) | 10 | 10 | 10^−6 for x, 10^−2 for y | 100 | 10 | 10 | e^−5i
F16 | 10 | 10 | 10^−2 for x, 10^−4 for y | 100 | 5 | 0 | e^−10i
Pushing | 10 | 10 | 10^−2 | 10 | 10 | 0 | e^−10i

Details on Hardware Experiments
Search-evasion
The search-evasion hardware experiment was implemented on the Robotarium, an open-access multi-agent research platform [25]. Trajectories for the hiders and seekers were planned offline using our method (with K = 100 rounds and M = 10 substeps per round, taking 41 s) and then tracked online using linear trajectory-tracking controllers.
Pushing
The pushing experiment was implemented using a Franka Research 3 7-DoF robot arm and an Intel RealSense D415 RGB-D camera. The camera was positioned over the robot's workspace, and the depth image was used to segment the target and obstructing objects, as well as to estimate the height and radius of each object. These estimates were passed to the planning neural network, using weights trained with our method (K = 100, M = 10). The planning network predicts a push height and force; the pushing action is executed using a Cartesian impedance tracking controller.
Further Limitations
In this paper, we restrict our attention to problems with continuous design and exogenous parameters, since gradient-based inference methods realize the greatest benefit on problems with a continuous domain. It is possible to extend our method to problems with mixed continuous-discrete domains by using a gradient-based sampling algorithm for the continuous parameters and a gradient-free sampler for the discrete parameters; we hope to explore the performance implications of this combination in future work.
Although the sampling methods we use in this paper remain theoretically sound when the simulator and cost landscape are discontinuous (as is the case in manipulation problems, for example), it remains to be seen what practical effects discontinuity might have on convergence rate and solution quality.
Finally, although we include details on how to use tempering with our approach, we found that tempering was not needed for convergence on any of the problems we studied; more work is needed to understand when tempering is necessary for convergence of MCMC-based algorithms on various robotics problems. |
GsM2qJTAg- | Heteroscedastic Gaussian Processes and Random Features: Scalable Motion Primitives with Guarantees
Edoardo Caldarelli¹, Antoine Chatalic², Adrià Colomé¹, Lorenzo Rosasco²,³,⁴, Carme Torras¹
¹ Institut de Robòtica i Informàtica Industrial, CSIC – UPC, Barcelona, Spain
² MaLGa Center – DIBRIS – Università di Genova, Genoa, Italy
³ CBMM – Massachusetts Institute of Technology, Cambridge, MA, USA
⁴ Istituto Italiano di Tecnologia, Genoa, Italy
Correspondence to: ecaldarelli@iri.upc.edu
Abstract: Heteroscedastic Gaussian processes (HGPs) are kernel-based, non-parametric models that can be used to infer nonlinear functions with time-varying noise. In robotics, they can be employed for learning from demonstration as motion primitives, i.e. as a model of the trajectories to be executed by the robot. HGPs provide variance estimates around the reference signal modeling the trajectory, capturing both the predictive uncertainty and the motion variability. However, similarly to standard Gaussian processes, they suffer from a cubic complexity in the number of training points, due to the inversion of the kernel matrix. The uncertainty can be leveraged for more complex learning tasks, such as inferring the variable impedance profile required from a robotic manipulator. However, suitable approximations are needed to make HGPs scalable, at the price of potentially worsening the posterior mean and variance profiles. Motivated by these observations, we study the combination of HGPs and random features, which are a popular, data-independent approximation strategy of kernel functions. In a theoretical analysis, we provide novel guarantees on the approximation error of the HGP posterior due to random features. Moreover, we validate this scalable motion primitive on real robot data, related to the problem of variable impedance learning. In this way, we show that random features offer a viable and theoretically sound alternative for speeding up the trajectory processing, without sacrificing accuracy.
Keywords: Gaussian process regression, random features, motion primitives.
1 Introduction
Learning from demonstration (LfD) is a broadly used technique to transfer skills from humans to robots in a flexible and intuitive way [1]. Within the context of robotics manipulation, LfD consists of recording the movement of an arm performing a specific task multiple times. This data is then used to fit a model of the trajectory, called a motion primitive in this context, which allows the robot to reproduce the skill of interest. One popular way to achieve this objective is by means of so-called Gaussian process (GP) regression [2]. Being a Bayesian model, a GP naturally provides a time-varying reference signal to be followed by the robot (the GP posterior mean) and an uncertainty quantification (the GP posterior variance) around the reference signal.
When used for LfD, GPs are usually heteroscedastic (HGPs), i.e., the variance of the noise corrupting the recorded trajectories is not constant [3, 4]. As discussed by Arduengo et al. [5], such a time-varying noise variance is a key asset for modeling motion variability when HGPs are used as motion primitives. The inconsistency of the human demonstrations, captured by the time-dependent noise variance, is a form of epistemic uncertainty that is intrinsic to the task, i.e., it cannot be resolved by increasing the number of human demonstrations.
Moreover, the HGP posterior variance quantifies the input-dependent uncertainty on the predictions, which is relevant when the testing points are far from the training data due to gaps in the human demonstrations.

Figure 1: The tasks involved in our experiments. In the first two pictures, the human is guiding the robot to perform: a free motion above a metal piece; the insertion of the end-effector in the trail inside the piece. In the last picture, the robot is pulling a bed sheet to remove the wrinkles.

While being attractive for the aforementioned reasons, GP regression suffers from a cubic scaling in the number of training samples, as it requires an inversion of the kernel matrix. For this reason, several approximation techniques have been deployed that allow for a linear scaling in the number of observations. In particular, these approximations can be grouped into variational ones and spectral ones. The former methods are data-dependent and rely on a variational approximation of the posterior distribution of the GP [6, 7, 8]. Spectral approximations, on the other hand, are most often data-independent, and can rely, e.g., on polynomial approximations [9, 10], a truncation of the kernel's spectrum [11], or a randomized approximation of an integral representation of the kernel function. The latter technique includes the well-known random features (RFs) [12], which have successfully been used for GP approximation [13, 14, 15, 16, 17]. In order to achieve a satisfactory model of the trajectory of interest, these approximations should not deteriorate the posterior mean and variance, as these quantities fully specify the robot's motion; when such guarantees can be rigorously obtained, approximate GPs become an appealing motion primitive [11, 18, 19].
Contributions In this work, we study the combination of vector-valued HGPs with RFs for fast trajectory processing in LfD. In order to assess the reliability of the approximation scheme, we perform a theoretical analysis and propose novel bounds on the approximation error of the posterior mean and variance of the HGP. This is achieved by leveraging the deep connection between approximate GP regression and kernel ridge regression with RFs [20]. Moreover, given the well-studied relationship between the HGP posterior variance and the time-dependent robot stiffness employed in variable impedance control (VIC) [21, 22, 23, 24], we further assess the quality of the approximation on the three VIC tasks described by Caldarelli et al. [24], shown in Figure 1. Overall, we demonstrate that the combination of RFs and HGPs is theoretically sound and significantly improves the running time of the motion primitive fitting, without sacrificing accuracy.
2 Related Work
Motion primitives One of the most popular trajectory representations appearing in the literature is given by the so-called probabilistic movement primitives (ProMPs) [25, 26, 27]. ProMPs model the trajectory at each time-step as a parametric weighted combination of basis functions, and are therefore different from HGPs, which are non-parametric by definition. ProMPs can be connected with another class of trajectory models, namely dynamic movement primitives, as shown by Li et al. [28]. On the other hand, GP-based models exhibit strong analogies with the kernelized movement primitive model introduced by Huang et al. [29].
In general, these approaches differ from GP regression as they also capture correlation between the different degrees of freedom (DOFs) of the trajectory. However, GPs can also be extended to account for output correlations by leveraging suitable coregionalization methods [30], based, e.g., on Gaussian mixture models [31].
Spectral approximations. Some theoretical properties of spectral approximations for homoscedastic, scalar-valued GPs have been explored in previous works. Särkkä and Piché [9] propose a feature approximation based on different polynomial approximations of the RBF kernel function. They are able to prove uniform convergence of the kernel, mean and variance, but without convergence rates. Moreover, Solin and Särkkä [10] propose to compute the features based on the Fourier transform of the Laplace operator. They prove convergence of the kernel function values and convergence of the posterior mean and variance, but with a dependency on the size of the domain of the expansion of the Laplace operator. RFs are also widely used in the Bayesian optimization setting, as they allow one to derive scalable algorithms with theoretical guarantees [11, 19]. In this scenario, the uniform convergence of the approximate kernel function is of interest, as the optimization algorithms may require sampling points in the whole domain of the function being optimized. Compared to our work, as we will discuss in Section 4, these bounds have a worse dependency on the number n of training samples (which is not due to the uniformity of the bounds). Although approximate GPs are known to suffer from the so-called variance starvation phenomenon, leading to poor estimates of the posterior variance when moving far from the training data [32], we stress that this is not problematic in LfD, where one is mainly interested in the interpolation of densely sampled trajectories, whose boundaries are delimited by the task duration.
Variational methods. Multiple methods based on variational approximations of the posterior distribution of the GP have been proposed. They typically rely on the choice of inducing points, i.e., of a subset of the data summarizing the whole training set [6, 7, 8]. One fundamental convergence result for the Kullback-Leibler (KL) divergence between the variational and exact GP posterior was proven by Burt et al. [18]. Moreover, Burt et al. [33, Proposition 1] showed that the error on the KL divergence implies convergence of the variational posterior means and variances. However, their bound depends on the value of the exact posterior variance, which makes the comparison with our result difficult, as will be clear from Section 4. Hence, recent approaches such as variational Fourier features (VFFs) by Hensman et al. [8], which have shown good performance in practice, especially in the case of billions of datapoints, do not enjoy theoretical guarantees as strong as those for RFs. Moreover, VFFs are limited to the Matérn class of kernels (the work by Dutordoir et al. [34] provides an extension to stationary kernels on the sphere), while RFs can be applied to any stationary kernel.
3 Background: Approximating HGPs with Random Features
HGP regression for trajectory encoding. Let X be an input space, and x : X → R be a scalar-valued function. In LfD, the function x depends on time, i.e., X = R≥0. A GP [2] specifies a prior distribution over the function x, which depends on a mean function μ : X → R and a kernel k : X × X → R.
We say that x follows a GP with mean μ and kernel function k if, for any vector of time-steps t ∈ Rⁿ, the vector x(t) of evaluations of x at t follows the multivariate normal distribution x(t) ∼ N(μ(t), K), where K ∈ R^{n×n} is defined as K_{i,j} = k(t_i, t_j). The mean function is usually assumed to be 0, as the function values are standardized. The kernel function depends on a set of hyperparameters, θ, which can be fixed or inferred from data. In LfD, the time-dependent trajectory to be encoded consists of d DOFs. If all DOFs are fully observed, each of them is usually modeled by an independent GP [35]. This type of estimation can be linked to a reproducing kernel Hilbert space (RKHS) of vector-valued functions [30]. We assume to have access to y ∈ Rⁿ, a vector of noisy measurements of x(t). As shown by Arduengo et al. [35], assuming a constant noise variance in LfD might severely limit the quality of the posterior prediction. Therefore, the noise is time-dependent, turning the GP into an HGP. The posterior distribution of an HGP, conditioned on the noise variance values at the training points and the function's observations, is analytically available. Let t∗ be a testing point, and k_{t∗} := [k(t_1, t∗), ..., k(t_n, t∗)]^T ∈ Rⁿ. Lastly, let Σ_noise ∈ R^{n×n} be the noise variance at the training points, and σ²_{noise,t∗} be the noise variance at the testing point. The posterior mean and variance of the associated HGP are, respectively, [35]
μ_post(t∗) = k_{t∗}^T (K + Σ_noise)^{-1} y,   (1)
σ²_post(t∗) = k(t∗, t∗) + σ²_{noise,t∗} − k_{t∗}^T (K + Σ_noise)^{-1} k_{t∗}.   (2)
Random features. Let (Ω, A, π) be a probability space over the sample space Ω, and let ψ : Ω × X → R. Random features (RFs) are a class of randomized methods that can be used to approximate a kernel function admitting an integral representation of the form
k(t, t′) = ∫_Ω ψ(ω, t) ψ(ω, t′) dπ(ω)   (3)
by discretizing it. This is the case for standard kernels such as the ones belonging to the Matérn family or the radial-basis-function (RBF) kernel, as discussed by Rahimi and Recht [12]. The expectation above is approximated by sampling m vectors (ω_j)_{1≤j≤m} ∼ π(ω). For φ̃(t) := (1/√m) [ψ(ω_1, t), ..., ψ(ω_m, t)]^T, the kernel is approximated as k̃(t, t′) := φ̃(t)^T φ̃(t′). For instance, for a positive definite stationary kernel k(t, t′) = k(t − t′), Bochner's theorem ensures that k has a non-negative Fourier transform, which can thus be used in place of π while defining ψ to be a trigonometric function [12]. Rahimi and Recht [12] show that, for the RBF kernel k(t, t′) = σ²_RBF exp(−‖t − t′‖²/(2l²)), the frequencies ω_i are sampled from the density proportional to exp(−l²|ω|²/2), i.e., ω_i ∼ N(0, l^{-2}). Then, for b uniformly sampled from [0, 2π), they define ψ(ω, t) := √2 σ_RBF cos(ωt + b). To retrieve closed-form expressions for the posterior mean and variance of an HGP approximated with RFs (in the following referred to as RF-HGP), we can define the matrix K̃ ∈ R^{n×n} with entries K̃_{i,j} = k̃(t_i, t_j). For any testing point t∗ we define k̃_{t∗} := [k̃(t_1, t∗), ..., k̃(t_n, t∗)]^T ∈ Rⁿ. Lastly, let Σ_noise and σ²_{noise,t∗} be as in the previous paragraph. The posterior mean and variance of the associated RF-HGP are, respectively, [35]
μ̃_post(t∗) = k̃_{t∗}^T (K̃ + Σ_noise)^{-1} y,   (4)
σ̃²_post(t∗) = k̃(t∗, t∗) + σ²_{noise,t∗} − k̃_{t∗}^T (K̃ + Σ_noise)^{-1} k̃_{t∗}.   (5)
We can observe that these equations are structurally the same as Equations (1) and (2), with the approximated kernel k̃ replacing the exact kernel k.
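To make Equations (1)-(5) concrete, the following is a minimal NumPy sketch of an RF-HGP posterior with RFFs for the RBF kernel. This is our own illustrative code, not the authors' gpflow implementation: all function and variable names are our assumptions, and the naive inversion of the n × n matrix is kept for clarity (the fast version is discussed below).

import numpy as np

def rff_features(t, omegas, bs, sigma_rbf):
    # psi(omega_j, t) = sqrt(2) * sigma_rbf * cos(omega_j * t + b_j), stacked over
    # the m sampled frequencies and scaled by 1/sqrt(m), as in Section 3.
    m = omegas.shape[0]
    return np.sqrt(2.0 / m) * sigma_rbf * np.cos(np.outer(t, omegas) + bs)

def rf_hgp_posterior(t_train, y, noise_var_train, t_test, noise_var_test,
                     lengthscale=0.1, sigma_rbf=1.0, m=200, seed=0):
    # Approximate posterior mean/variance of Eqs. (4)-(5); naive O(n^3) inverse.
    rng = np.random.default_rng(seed)
    omegas = rng.normal(0.0, 1.0 / lengthscale, size=m)      # omega_j ~ N(0, l^-2)
    bs = rng.uniform(0.0, 2.0 * np.pi, size=m)               # b_j ~ U[0, 2*pi)
    Phi = rff_features(t_train, omegas, bs, sigma_rbf)       # (n, m)
    Phi_star = rff_features(t_test, omegas, bs, sigma_rbf)   # (T, m)
    A = np.linalg.inv(Phi @ Phi.T + np.diag(noise_var_train))  # (K~ + Sigma_noise)^-1
    k_star = Phi @ Phi_star.T                                # columns are k~_{t*}
    mean = k_star.T @ (A @ y)
    var = (np.sum(Phi_star ** 2, axis=1) + noise_var_test
           - np.sum(k_star * (A @ k_star), axis=0))
    return mean, var

Here t_train, y, and noise_var_train would come from the aligned demonstrations and the estimated noise variance profile; the feature map is data-independent, so omegas and bs can be drawn once, before seeing any data.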
Fixing the dimensionality of the RF vector, RF-HGPs allow for linear complexity in the number of samples when computing the posterior distribution of the HGP, as the posterior expressions in Equations (4) and (5) can be simplified by the well-known Woodbury identity [36]. Such an identity allows one to perform the matrix inversion in O(m³), with m ≪ n being the dimension of the RF vector, as shown in Appendix A. Moreover, the kernel approximation is data-independent, as the functions ψ(ω_j, ·) can be computed prior to seeing any data.
4 Quantification of the Approximation Errors
We now analyse the proposed approximate HGPs in two different settings. In the first one, the hyperparameters θ of the kernel function (which typically contain at least a lengthscale parameter), as well as the noise variance values at the training points, Σ_noise, and at a testing point, σ²_{noise,t∗}, are fixed a priori (oracle setup). In the second setting, θ, Σ_noise, and σ²_{noise,t∗} are directly inferred from data (heuristic setup). In the first case, the error introduced by the RF approximation can be rigorously quantified, as we will show in the remainder of this section. In the second case, convergence to the exact HGP can be attained empirically, and a suitable inference algorithm needs to be used, as we will discuss in Section 5. Both in the oracle and in the heuristic setup, the RF-HGP's posterior distribution is computed at a set of testing time-steps T = {t∗_1, ..., t∗_T}, which are fixed a priori based on the desired task duration. In this section, we derive error bounds on the posterior mean and variance of the RF-HGP, at the test points in T, within an oracle setup. These results guarantee that RFs are a suitable approximation strategy that can be used in robotics scenarios. The proofs, reported in Appendix B.2 and Appendix B.3, rely on techniques developed by Rudi and Rosasco [20].
4.1 Assumptions
In the following, we set ψ_j(·) := ψ(ω_j, ·), where the latter function is defined in Section 3. Moreover, we make the following assumptions.
Assumption 4.1. The kernel is bounded, i.e., k(t, t′) ≤ κ. Moreover, the kernel admits the integral representation of Equation (3), in terms of a suitable function ψ(ω, ·).
Assumption 4.2. The entries of the RF vector are bounded, i.e., |ψ_j(t)| ≤ α, ∀j ∈ {1, ..., m}, ∀t.
Assumption 4.3. The noise variance values in the diagonal matrix Σ_noise are bounded from below, i.e., ∃γ ∈ (0, 1] s.t. (Σ_noise)_{ii} ≥ γn, ∀i ∈ {1, ..., n}.
Assumption 4.1 is standard, and holds, e.g., for the RBF kernel (κ = σ²_RBF). Assumption 4.2 holds, e.g., for the RFFs described in Section 3 (with α = √2 σ_RBF). Assumption 4.3 is obtained by replacing, e.g., K + Σ_noise in Equations (1) and (2) with K + (Σ_noise + γnI), a common practice aimed at avoiding numerical instabilities in matrix inversion. Lastly, for the RBF kernel and RFFs, α² = 2κ.
4.2 Concentration of Posterior Mean and Variance
Under these assumptions, we can derive the two main results of this paper, bounding the deviation of the posterior mean and variance of our RF-HGP w.r.t. its exact counterpart.
Figure 2: Time taken to compute the posterior distribution with an exact HGP and an RF-HGP, for free motion (left), assembly (center) and bed-making (right) tasks, in an oracle setup. Median, 15th and 85th percentiles across all DOFs, 50 seeds.
Theorem 4.4. Let δ ∈ (0, 1], and m be the dimension of the RF vector. Let m ≥ 8(1/3 + α²/γ) log(8α²/(γδ)), and consider a vector-valued HGP with d independent components. Let μ_post and μ̃_post denote its exact and RF-approximated posterior means as of Equations (1) and (4). Let ν = max_{1≤i≤d} ‖(1/√n) y_i‖₂, where y_i denotes the observations of DOF i. Lastly, let T be a set of testing points. Under Assumptions 4.1 to 4.3, with probability at least 1 − dδ, ∀t∗ ∈ T, it holds that
‖μ_post(t∗) − μ̃_post(t∗)‖₂ ≤ να² √( 2d log(2|T|n/δ) / (mγ²) ) + (√(2d) κν / √γ) [ 2 log(8κ²/(γδ)) (1 + α²/γ) / (3m) + α √( 2 log(8κ²/(γδ)) / (γm) ) ].
Theorem 4.5. Under the assumptions of Theorem 4.4, denoting by σ²_post and σ̃²_post respectively the exact and RF-approximated posterior variances as of Equations (2) and (5), it holds with probability at least 1 − dδ, ∀t∗ ∈ T,
‖σ²_post(t∗) − σ̃²_post(t∗)‖₂ ≤ α² √( 2d log(2|T|/δ) / m ) + ( κα²/√γ + α⁴/γ ) √( 2d log(2|T|n/δ) / m ) + (α³ √(2d) / √γ) [ 2 log(8κ²/(γδ)) (1 + α²/γ) / (3m) + α √( 2 log(8κ²/(γδ)) / (γm) ) ].
Our bounds show that both the mean and the variance errors for a vector-valued RF-HGP scale as O(m^{-1/2}). Our results cover the RF-based approximation of the homoscedastic GP as a special case, taking Σ_noise = βI for some β ≥ 0. Our results ensure that RFs are a viable data-independent approximation method, and match the rates of other data-dependent strategies, such as the well-studied Nyström method [37] for GPs, as shown by Lu et al. [38]. Furthermore, by inspecting the proofs, and in particular the results in Appendix C, one can observe that the assumptions of the LfD framework (fixed testing points, scalar domain) translate into a dependency of the bounds on |T|. Nonetheless, our proofs can easily be adapted to the case of dense, multidimensional domains by replacing Corollary C.2 with the uniform convergence result for RFs [39, 40] (rather than using a point-wise bound and a union bound), which would leave the overall error in O(m^{-1/2}).
As mentioned in Section 2, our bounds strictly improve over the state-of-the-art bounds by Mutny and Krause [11]. For instance, Mutny and Krause [11, Proof of Theorem 5 and Proposition 1], in the context of a homoscedastic GP approximated with deterministic features, provide bounds of the form sup_{t∗} |μ_post(t∗) − μ̃_post(t∗)| ≲ ε ν n² σ^{-2} and sup_{t∗} |σ²_post(t∗) − σ̃²_post(t∗)| ≲ ε n³ σ^{-2}, where ε denotes the accuracy of a uniform bound on the kernel approximation, i.e., sup_{x,y} |k(x, y) − k̃(x, y)| ≤ ε. In our setting, when d = 1 and in the homoscedastic setting Σ_noise = σ²I and γ = σ²/n (cf. Assumption 4.3), we obtain a dependence in n of order O(n^{3/2} log n) both for the posterior mean and the variance, which strictly improves on these bounds. This result can be derived, e.g., for the mean by observing that the rate with respect to γ translates to an error upper bound with complexity O(n√(log n)) + O(n^{3/2} log n) + O(n√(log n)).
5 Empirical Evaluation
In this section, we empirically validate the RF-HGP. The implementation is built on the package gpflow by Matthews et al. [41].¹ We choose RFFs to approximate the RBF kernel, as reported in Section 3.
¹The open-source code for the experiments is available at https://github.com/LCSL/rff-hgp.
Figure 3: RMSE between the posterior means (left) and posterior variances (right) of an HGP and an RF-HGP, in an oracle setup, for three different tasks. Median, 15th and 85th percentiles across all DOFs, 50 seeds. The purple curves are the theoretical rates including the dependency on ν, γ and m, and show that the overall rate in 1/√m matches the experimental result. The lowest purple curve is for the bed-making task, while the rates for assembly and proof-of-concept coincide.
Figure 4: Incremental learning with missing chunks of data, in an oracle setup, for free motion (left), assembly (center) and bed-making (right) tasks. RF-HGP vs. a Nyström approximation with fixed centers, to achieve the same complexity as RFs. Median, 15th and 85th percentiles across all DOFs, 50 seeds.
In this work, we do not train the RFF parameters ω, as doing so is prone to overfitting [13, 42] and annuls the Monte-Carlo interpretation of RFFs introduced in Section 3. To assess the correctness of the GP approximation, we process real demonstrations of different robotic tasks, obtained by means of kinesthetic teaching with a 7-DOF Barrett WAM manipulator. The trajectories are recorded while performing a movement in free Cartesian space, an assembly task, and a bed-making skill [24]. These trajectories are particularly interesting from the VIC point of view, and are summarized in Figure 1. Since the time-dependent impedance, or stiffness, of a manipulator can be tuned based on the variance of the human demonstrations [21, 23, 22, 24], it is important that the RF approximation does not deteriorate the quality of the posterior distribution of the HGP. The first task exhibits varying boundaries in which the manipulator can move, while the latter two involve physical constraints on the robot's motion. The assembly task requires a motion with low stiffness when the pieces being mounted are in contact. On the other hand, the bed-making skill requires the robot to stiffen up to remove wrinkles from the sheet's surface, in spite of it opposing the motion of the robot's end-effector. Each task uses a different number of human demonstrations, namely 6, 7 and 5. While the number of demonstrations is relatively small, the total number of training points is 1286 for the proof-of-concept experiment, 1222 for the assembly task, and 864 for the bed-making skill, justifying the need for scalable GPs. Moreover, the number of samples might increase for longer or denser trajectories. We consider an oracle setup in Section 5.1, where all hyperparameters and noise variances are assumed to be known, and then discuss and evaluate in Section 5.2 the heuristic case of unknown hyperparameters and noise variances.
5.1 Oracle Setup
As a start, we consider the case in which the GP kernel hyperparameters θ are known, and the noise variance values Σ_noise, σ²_{noise,t∗} are provided by an oracle, as discussed in Section 4. This experiment is useful to assess how the prediction capability of the RF-HGP deteriorates due to the kernel spectral approximation, in view of the theoretical analysis carried out in Section 4. The oracle is given by an exact HGP trained with the EM algorithm by Kersting et al. [4], for 15 iterations per DOF.
Offline learning. In this setup, we are interested in comparing the prediction times (Figure 2) and accuracy (Figure 3) of exact and approximated GP regression. By prediction time, we refer here to the time taken to compute the posterior means and posterior variances at the testing points. The human demonstrations are temporally aligned on the same interval, and processed all together, in an offline fashion. The prediction times, as shown in Figure 2, indicate that the RF-HGP is more efficient, as expected.
Figure 5: RMSE between the posterior means of an exact and an approximated HGP, in the heuristic setup, for free motion (left), assembly (center) and bed-making (right) tasks. Median, 15th and 85th percentiles, across all DOFs and 5 seeds.
Figure 6: RMSE between the posterior variances of an exact and an approximated HGP, in the heuristic setup, for free motion (left), assembly (center) and bed-making (right) tasks. Median, 15th and 85th percentiles, across all DOFs and 5 seeds.
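The prediction-time gain reported above comes from the Woodbury step of Appendix A. As a hedged illustration (again our own NumPy sketch, not the paper's code), the following drop-in variant of the posterior computation inverts only an m × m matrix:

import numpy as np

def rf_hgp_posterior_woodbury(Phi, y, noise_var_train, Phi_star, noise_var_test):
    # Woodbury identity (Appendix A):
    # (Phi Phi^T + R)^-1 = R^-1 - R^-1 Phi (I + Phi^T R^-1 Phi)^-1 Phi^T R^-1,
    # with diagonal R, so only an (m, m) matrix is inverted:
    # O(n m^2 + m^3) instead of O(n^3).
    n, m = Phi.shape
    r_inv = 1.0 / noise_var_train
    M = np.linalg.inv(np.eye(m) + Phi.T @ (r_inv[:, None] * Phi))

    def apply_inv(V):  # applies (Phi Phi^T + R)^-1 to V of shape (n, k)
        RV = r_inv[:, None] * V
        return RV - r_inv[:, None] * (Phi @ (M @ (Phi.T @ RV)))

    k_star = Phi @ Phi_star.T  # (n, T)
    mean = k_star.T @ apply_inv(y[:, None])[:, 0]
    var = (np.sum(Phi_star ** 2, axis=1) + noise_var_test
           - np.sum(k_star * apply_inv(k_star), axis=0))
    return mean, var

With m ≪ n, this is the regime in which the RF-HGP timing curves of Figure 2 separate from the exact HGP.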
Moreover, to assess the predictive accuracy, we compute the normalized root-mean-squared error (RMSE) between the posterior means or variances of the HGP and the RF-HGP:
RMSE%_mean = √( (1/d) Σ_{i=1}^{d} [ Σ_{t∈T} (μ̃_{post,i}(t) − μ_{post,i}(t))² / Σ_{t∈T} μ_{post,i}(t)² ] ) · 100.   (6)
The RMSE% on the variance is computed in the same way. This experiment allows us to observe that the error follows the expected rate in O(m^{-1/2}), as shown in Figure 3, indicating that RFs can be used to speed up the inference without sacrificing accuracy in the posterior's calculation. We can also observe that, although all errors decrease with an increasing number of features, the assembly task exhibits the largest error among the three tasks. This is likely due to the RF approximation being challenged by the sharp changes in the task variability.
Incremental learning. RFs are data-independent, and they can be used to perform incremental learning of the posterior distribution of the HGP [43]. In an oracle setup, the posterior distribution can be updated every time a new demonstration is gathered (see Appendix D). This approach is also possible, e.g., with the Nyström approximation of the kernel function, provided that the Nyström centers are sampled from the first demonstration and not updated afterwards. Doing otherwise would imply re-computing the correlation matrices between the centers and the whole set of training points, which has a linear complexity in the size of the training set. Keeping a fixed set of Nyström centers is not an issue in general for LfD, as the domain is scalar and all demonstrations are temporally aligned on the time interval [0, 1], and thus the first demonstration already covers the whole domain. However, this fact does not hold if, for any reason, portions of the human demonstrations are missing or invalid. In this case, the initialization of the centers might be poor, and the posterior calculation spoiled. This issue is shown in the examples of Figure 4, where chunks of 60 observations, sampled uniformly at random, were removed from each demonstration. RFFs offer a more reliable option in such a scenario.
5.2 Heuristic Setup
After validating our theoretical results, we now consider the more realistic setting in which the kernel hyperparameters and noise variance are unknown and need to be heuristically estimated from data. Note that this step can serve as a preliminary stage with a few demonstrations before applying, e.g., the incremental learning described in the previous section. Here, we train the RF-HGP as follows.
Workflow with hyperparameter tuning. Here, we consider the expectation-maximization (EM) training proposed by Kersting et al. [4], adapted to the RF-HGP. The algorithm comprises the following steps, which are performed independently for each DOF (all the GPs involved take time as input):
1. train a first homoscedastic GP (GP1), approximated with RFs (RF-GP), on the demonstrations' data with a maximum likelihood estimate (MLE), to retrieve θ;
2. compute the mean-squared error (MSE) between the posterior mean of GP1 and the demonstrations' data, at each time-step;
3. train a second RF-GP (GP2) by means of MLE, using the MSE as training data; GP2 is a surrogate model of the time-dependent noise variance function;
4. compute the posterior mean and variance of a new RF-HGP (GP3) as in Equations (4) and (5), with the kernel hyperparameters θ from step 1 and the noise variance values in Σ_noise and σ²_{noise,t∗} given by the posterior mean of GP2, evaluated at the training and testing time-steps respectively;
5. compute the MSE between the posterior mean of GP3 and the demonstrations, at each time-step;
6. repeat from step 3 until convergence or until the maximum number of iterations is reached.
Figure 7: Time taken to perform an MLE with an exact HGP and an RF-HGP, with demonstrations of free motion (left), assembly (center) and bed-making (right) tasks. Median, 15th and 85th percentiles, across all DOFs and 5 seeds.
Sampling first whole noise variance profiles from the posterior of GP2, and then whole trajectories from the posterior specified by Equations (1) and (2) conditioned on the noise variance, yields trajectories that are strongly correlated in the regions of low variance, and vice versa.
Results. Figure 7 reports the time taken to complete an MLE for retrieving the GP hyperparameters, both in the exact and in the RFF setup. Considering the errors reported in Figures 5 and 6, we observe that convergence to the exact HGP posterior can be heuristically attained in the worst possible scenario of having no knowledge about the necessary GP priors. Again, the largest errors are attained by the assembly task, as discussed for the oracle setup. Concerning variational methods, such as the sparse variational GP (SVGP) by Hensman et al. [44], Figure 7 shows that SVGP, employed in the EM training algorithm, requires many more iterations per step to converge in the hyperparameter training, due to the greater complexity of the optimization problem being solved. To overcome this issue, we set the maximum number of iterations per step to 100. However, as we can observe from Figures 5 and 6, this choice hinders the convergence throughout the EM training process, and our proposed feature-based approach has a strictly better accuracy-computational complexity trade-off.
6 Limitations
The assumptions behind our theoretical analysis were stated in Section 4.1. Moreover, as discussed in Section 4, our theoretical analysis focuses on an oracle setup. This type of scenario is standard in kernel theory [45, 20, 38], as it allows us to decouple the error due to the kernel approximation from the uncertainty surrounding the GP hyperparameters. If this oracle setup does not hold, the HGP training method is heuristic, and uses two approximate GPs iteratively in an EM fashion. The errors displayed in the heuristic experiments of Section 5.2 may change with a different training algorithm.
7 Conclusion
In this work, we have studied the combination of heteroscedastic Gaussian processes and random features, used as scalable motion primitives in the context of learning from demonstration. In a theoretical analysis, we derived novel upper bounds on the approximation error, induced by random features, on the posterior mean and variance of a Gaussian process. Moreover, we have validated this approximate motion primitive w.r.t. relevant tasks for variable impedance control of robotic manipulators, namely, a motion in free Cartesian space, an assembly task, and a bed-making skill. Our theoretical and empirical results demonstrate that random features are a theoretically sound approximation method that can be used to speed up the motion primitive fitting without sacrificing accuracy. Moreover, we have shown that random features are well suited to incremental learning from demonstration, thanks to their data-independent nature.
Acknowledgments
E. Caldarelli, A. Colomé and C.
Torras acknowledge support from the project CLOTHILDE(“CLOTH manIpulation Learning from DEmonstrations”), funded by the European Research Coun-cil (ERC) under the European Union’s Horizon 2020 research and innovation programme (AdvancedGrant agreement No 741930). E. Caldarelli acknowledges travel support from ELISE (GA No951847). L. Rosasco acknowledges the financial support of the European Research Council (grantSLING 819789), the AFOSR projects FA9550-18-1-7009, FA9550-17-1-0390 and BAA-AFRL-AFOSR-2016-0007 (European Office of Aerospace Research and Development), the EU H2020-MSCA-RISE project NoMADS - DLV-777826, and the Center for Brains, Minds and Machines(CBMM), funded by NSF STC award CCF-1231216.References[1] A. G. Billard, S. Calinon, and R. Dillmann. Learning from humans. Springer handbook ofrobotics , pages 1995–2014, 2016.[2] C. K. Williams and C. E. Rasmussen. Gaussian processes for machine learning , volume 2.MIT press Cambridge, MA, 2006.[3] P. Goldberg, C. Williams, and C. Bishop. Regression with input-dependent noise: A Gaussianprocess treatment. Advances in Neural Information Processing Systems , 10, 1997.[4] K. Kersting, C. Plagemann, P. Pfaff, and W. Burgard. Most likely heteroscedastic Gaussianprocess regression. In International Conference on Machine Learning (ICML) , pages 393–400. PMLR, 2007.[5] M. Arduengo, A. Colom ́e, J. Lobo-Prat, L. Sentis, and C. Torras. Gaussian-process-based robotlearning from demonstration. Journal of Ambient Intelligence and Humanized Computing ,2023. doi:https://doi.org/10.1007/s12652-023-04551-7.[6] H. Liu, Y .-S. Ong, X. Shen, and J. Cai. When Gaussian process meets big data: A review ofscalable GPs. IEEE Transactions on Neural Networks and Learning Systems , 31(11):4405–4423, 2020.[7] M. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Interna-tional Conference on Artificial Intelligence and Statistics (AISTATS) , pages 567–574. PMLR,2009.[8] J. Hensman, N. Durrande, A. Solin, et al. Variational Fourier features for Gaussian processes.The Journal of Machine Learning Research , 18(1):5537–5588, 2017.[9] S. S ̈arkk ̈a and R. Pich ́e. On convergence and accuracy of state-space approximations of squaredexponential covariance functions. In IEEE International Workshop on Machine Learning forSignal Processing (MLSP) , 2014.[10] A. Solin and S. S ̈arkk ̈a. Hilbert space methods for reduced-rank Gaussian process regression.Statistics and Computing , 30(2):419–446, 2020.[11] M. Mutny and A. Krause. Efficient high dimensional Bayesian optimization with additivity andquadrature Fourier features. Advances in Neural Information Processing Systems , 31, 2018.[12] A. Rahimi and B. Recht. Random features for large-scale kernel machines. Advances in NeuralInformation Processing Systems , 20, 2007.[13] M. L ́azaro-Gredilla, J. Quinonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal.Sparse spectrum Gaussian process regression. The Journal of Machine Learning Research ,11:1865–1881, 2010.9[14] G. V . Karanikolas, Q. Lu, and G. B. Giannakis. Online unsupervised learning using ensembleGaussian processes with random features. In IEEE International Conference on Acoustics,Speech and Signal Processing (ICASSP) , pages 3190–3194, 2021.[15] Y . Pan, X. Yan, E. A. Theodorou, and B. Boots. Prediction under uncertainty in sparse spectrumGaussian processes with applications to filtering and control. In International Conference onMachine Learning (ICML) , pages 2760–2768. PMLR, 2017.[16] K. D. Polyzos, Q. Lu, and G. B. 
Giannakis. Graph-adaptive incremental learning using anensemble of Gaussian process experts. In IEEE International Conference on Acoustics, Speechand Signal Processing (ICASSP) , pages 5220–5224, 2021.[17] K. Cutajar, E. V . Bonilla, P. Michiardi, and M. Filippone. Random feature expansions fordeep Gaussian processes. In International Conference on Machine Learning (ICML) , pages884–893. PMLR, 2017.[18] D. Burt, C. E. Rasmussen, and M. Van Der Wilk. Rates of convergence for sparse variationalGaussian process regression. In International Conference on Machine Learning (ICML) , pages862–871. PMLR, 2019.[19] A. Dubey. No-regret algorithms for private Gaussian process bandit optimization. In In-ternational Conference on Artificial Intelligence and Statistics (AISTATS) , pages 2062–2070.PMLR, 2021.[20] A. Rudi and L. Rosasco. Generalization properties of learning with random features. Advancesin Neural Information Processing Systems , 30, 2017.[21] S. Calinon, I. Sardellitti, and D. G. Caldwell. Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies. In IEEE International Conference onIntelligent Robots and Systems (IROS) , pages 249–254, 2010.[22] L. Peternel, T. Petri ˇc, and J. Babi ˇc. Robotic assembly solution by human-in-the-loop teachingmethod based on real-time stiffness modulation. Autonomous Robots , 42:1–17, 2018.[23] D. Parent, A. Colom ́e, and C. Torras. Variable impedance control in cartesian latent spacewhile avoiding obstacles in null space. In IEEE International Conference on Robotics andAutomation (ICRA) , pages 9888–9894, 2020.[24] E. Caldarelli, A. Colom ́e, and C. Torras. Perturbation-based stiffness inference in variableimpedance control. IEEE Robotics and Automation Letters , 7(4):8823–8830, 2022.[25] A. Paraschos, C. Daniel, J. R. Peters, and G. Neumann. Probabilistic movement primitives.Advances in Neural Information Processing Systems , 26, 2013.[26] G. J. Maeda, G. Neumann, M. Ewerton, R. Lioutikov, O. Kroemer, and J. Peters. Proba-bilistic movement primitives for coordination of multiple human–robot collaborative tasks.Autonomous Robots , 41:593–612, 2017.[27] S. Gomez-Gonzalez, G. Neumann, B. Sch ̈olkopf, and J. Peters. Adaptation and robust learningof probabilistic movement primitives. IEEE Transactions on Robotics , 36(2):366–379, 2020.[28] G. Li, Z. Jin, M. V olpp, F. Otto, R. Lioutikov, and G. Neumann. Prodmp: A unified perspectiveon dynamic and probabilistic movement primitives. IEEE Robotics and Automation Letters , 8(4):2325–2332, 2023.[29] Y . Huang, L. Rozo, J. Silv ́erio, and D. G. Caldwell. Kernelized movement primitives. TheInternational Journal of Robotics Research , 38(7):833–852, 2019.[30] M. A. Alvarez, L. Rosasco, N. D. Lawrence, et al. Kernels for vector-valued functions: Areview. Foundations and Trends® in Machine Learning , 4(3):195–266, 2012.10[31] N. Jaquier, D. Ginsbourger, and S. Calinon. Learning from demonstration with model-basedGaussian process. In Conference on Robot Learning , pages 247–257. PMLR, 2020.[32] J. Wilson, V . Borovitskiy, A. Terenin, P. Mostowsky, and M. Deisenroth. Efficiently samplingfunctions from Gaussian process posteriors. In International Conference on Machine Learning ,pages 10292–10302. PMLR, 2020.[33] D. R. Burt, C. E. Rasmussen, and M. Van Der Wilk. Convergence of sparse variational in-ference in Gaussian processes regression. The Journal of Machine Learning Research , 21(1):5120–5182, 2020.[34] V . Dutordoir, N. Durrande, and J. Hensman. 
Sparse Gaussian processes with spherical har-monic features. In International Conference on Machine Learning , pages 2793–2802. PMLR,2020.[35] M. Arduengo, A. Colom ́e, J. Borr `as, L. Sentis, and C. Torras. Task-adaptive robot learningfrom demonstration with Gaussian process models under replication. IEEE Robotics and Au-tomation Letters , 6(2):966–973, 2021.[36] M. A. Woodbury. Inverting modified matrices . Statistical Research Group, 1950.[37] C. Williams and M. Seeger. Using the Nystr ̈om method to speed up kernel machines. Advancesin Neural Information Processing Systems , 13, 2000.[38] X. Lu, A. Rudi, E. Borgonovo, and L. Rosasco. Faster kriging: Facing high-dimensionalsimulators. Operations Research , 68(1):233–249, 2020.[39] B. Sriperumbudur and Z. Szab ́o. Optimal rates for random Fourier features. Advances inNeural Information Processing Systems , 28, 2015.[40] F. Liu, X. Huang, Y . Chen, and J. A. Suykens. Random features for kernel approximation: Asurvey on algorithms, theory, and beyond. IEEE Transactions on Pattern Analysis and MachineIntelligence , 44(10):7128–7148, 2021.[41] A. G. d. G. Matthews, M. Van Der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. Le ́on-Villagr ́a, Z. Ghahramani, and J. Hensman. Gpflow: A Gaussian process library using tensor-flow. The Journal of Machine Learning Research , 18(40):1–6, 2017.[42] Y . Gal and R. Turner. Improving the Gaussian process sparse spectrum approximation by rep-resenting uncertainty in frequency inputs. In International Conference on Machine Learning(ICML) , pages 655–664. PMLR, 2015.[43] R. Camoriano, S. Traversaro, L. Rosasco, G. Metta, and F. Nori. Incremental semiparametricinverse dynamics learning. In IEEE International Conference on Robotics and Automation(ICRA) , pages 544–550, 2016.[44] J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty inArtificial Intelligence , page 282. Citeseer, 2013.[45] A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nystr ̈om computational regularization.Advances in Neural Information Processing Systems , 28, 2015.11Table 1: Summary of notations.Variable Meaningn Number of training samplesT Number of testing samplesd Degrees of freedom (one HGP for each)α |ψj(t)| ≤α(cf. Assumption 4.2)κ k (t, t′)≤κ(cf. Assumption 4.1)ν ν = max 1≤j≤d∥1√nyj∥Σnoise∈Rn×nDiagonal time-varying noise variance at training pointsR= Σ noise/n Normalized noise variance at training pointsσ2noise,t∗∈R Noise variance at testing point t∗r2t∗=σ2noise,t∗/n Normalized noise variance at testing point t∗0< γ < 1 Rii> γ,1≤i≤nS:H →RnSampling operator with normalization n−1/2L=SS∗=K/n∈Rn×nNormalized Gram matrix with exact kernelLR∈Rn×nLR=L+Rψj(t) =ψ(ωj, t) Element of approximate feat. vector ̃φ(t) =m−1/2[ψ1(t), . . . , ψ m(t)]TApproximate feat. vectorSm:Rm→RnSampling operator with normalization n−1/2Lm=SmS∗m= ̃K/n∈Rn×nNormalized Gram matrix with RF kernelLm,R∈Rn×nLm,R=Lm+RA HGP Posterior Equations RevisitedIn this section, we will rewrite the exact and approximated HGP posterior equations from Sec-tion 3 in terms of standard linear operators used in RKHS theory. For a linear operator A,we denote its adjoint by A∗. Let Hbe the RKHS associated to the kernel of interest. Inorder to retrieve a suitable expression, we denote S:H → Rnthe sampling operator de-fined as Sf:=1√n[f(t1), . . . f (tn)]T. Moreover, the adjoint of the sampling operator is de-fined as S∗:Rn→ H :S∗a=1√nPni=1aik(ti,·),aibeing the i-th entry of a. Now, letL:Rn→Rn, L:=SS∗. Note that K=nL. Let R=1nΣnoise, letrt∗=1nσ2noise,t∗. 
Lastly,let⟨·,·⟩Rndenote the inner product of n-dimensional vectors. With this notation, let us considera single DOF of the trajectory to be processed. The posterior mean of the associated exact HGPfrom Equation (1) isμpost(t∗) =(L+R)−1Sk(t∗,·),1√nyRn. (7)Moreover, the posterior variance from Equation (2) is given byσ2post(t∗) =k(t∗, t∗) +nrt∗− ⟨Sk(t∗,·),(L+R)−1Sk(t∗,·)⟩Rn. (8)Considering RFs, we can define the operator Sm:Rm→Rn, Sm:=1√n[ ̃φ(t1). . . , ̃φ(tn)]T, andLm:Rn→Rn, Lm:=SmS∗m. With this notation, let us consider a single DOF of the trajectoryto be processed. The RF-based posterior mean of the associated HGP from Equation (4) can berewritten as ̃μpost(t∗) =(Lm+R)−1Sm ̃φ(t∗),1√nyRn. (9)On the other hand, the RF-based posterior variance from Equation (5) is given by ̃σ2post(t∗) = ̃k(t∗, t∗) +nrt∗− ⟨Sm ̃φ(t∗),(Lm+R)−1Sm ̃φ(t∗)⟩Rn. (10)A summary of the main operators and constants that will appear in the proofs can be found in Table 1.12Fast matrix inversion By defintion, the operators LmandSmare matrices. The inversion ofthe matrix Lm+Rappearing in Equations (9) and (10) can be performed by means of Woodburyidentity [36], as follows:L−1m,R= (SmS∗m+R)−1(11)=R−1−R−1Sm(I+S∗mR−1Sm)−1S∗mR−1. (12)The latter expression involves inverting an m×mmatrix, which boosts the speed of the HGPposterior calculation if m≪n.B Proofs of the Main ResultsIn this appendix, we report the proofs of the two main theoretical results of our paper, along withsome technical propositions that will be extensively used. In the following, we denote by ARtheoperator A+R, with Rdiagonal positive definite matrix, and by Aγthe operator A+γI. Moreover,in the remainder, ∥·∥denotes the operator norm, while ∥·∥2denotes the Euclidean norm of a vector.B.1 Useful PropositionsIn this part, we report three propositions that will be useful in the proofs.Proposition B.1 (Proposition 8 of [20]) .LetHbe a separable Hilbert space, A, B be two boundedself-adjoint positive linear operators on H, and λ >0. Then∥A−1/2λB1/2∥ ≤ ∥ A−1/2λB1/2λ∥ ≤1(1−β)1/2, (13)whereβ=λmaxhB−1/2λ(B−A)B−1/2λi. (14)Proposition B.2. LetSm:Rm→Rn, Sm:=1√n[ ̃φ(t1). . . , ̃φ(tn)]T, and assume that the entriesof the RF vectors are bounded, that is, |ψj(t)| ≤α,∀j∈ {1, . . . , m }. Then,∥Sm∥ ≤α. (15)Proof. The result follows from the definition of operator norm:∥Sm∥= supa∈Rm,∥a∥2≤1∥Sma∥2 (16)= supa∈Rm,∥a∥2≤11√nq⟨ ̃φ(t1),a⟩22+···+⟨ ̃φ(tn),a⟩22 (17)≤1√n√nα2=α, (18)as reported in the statement.Proposition B.3. LetAbe a bounded positive semi-definite operator, and let AR:=A+R, with Rdiagonal positive definite and Aγ=A+γI. Lastly, assume all entries in Rare greater or equal toγ. Then,∥A−1/2RA1/2γ∥ ≤1. (19)Proof. Noting that AR−Aγ= (R−γI)≽0by hypothesis, it holds Aγ≼ARand thus∥A−1/2RA1/2γ∥2=∥A−1/2RAγA−1/2R∥ ≤ ∥ I∥= 1. (20)13B.2 Proof of Theorem 4.4 (Deviation of Approximate Posterior Mean)We report here the proof of Theorem 4.4. We start by considering a single DOF, and generalize toad-valued GP at the end of this section. We begin by proving a lemma that will be used to retrievethe main result.Lemma B.4. Letm≥813+α2γlog(8α2γδ), and δ= (0,1]. Then, the following bound holds, withprobability at least 1−δ,∥(L−1m,R−L−1R)Sk(t∗,·)∥2≤√2κ√γ2 log8κ2γδ(1 +α2/γ)3m+s2 log8κ2γδα2γm. (21)Proof. 
In order to bound the term of interest, we can use the fact that, for any invertible matrices AandB,A−1−B−1=A−1(I−AB−1) =A−1(B−A)B−1, and Proposition B.3, as follows:∥(L−1m,R−L−1R)Sk(t∗,·)∥2 (22)=∥L−1m,R(LR−Lm,R)L−1RSk(t∗,·)∥2 (23)=∥L−1/2m,RL−1/2m,RL1/2m,γL−1/2m,γL1/2γL−1/2γ(L−Lm)L−1RSk(t∗,·)∥2 (24)≤1√γ∥L−1/2m,RL1/2m,γ∥∥L−1/2m,γL1/2γ∥∥L−1/2γ(L−Lm)L−1RSk(t∗,·)∥2 (25)≤κ√γ∥L−1/2m,γL1/2γ∥∥L−1/2γ(L−Lm)L−1/2γ∥∥L−1/2RS∥. (26)We can now proceed to bound each of the three factors. To start off, let us consider ∥L−1/2RS∥. Thisterm can be bounded by using the polar decomposition of the bounded linear operator S, as follows.LetS= (SS∗)1/2U, where Uis a partial isometry. By Proposition B.3, the definition of polardecomposition, and by considering that L≼Lγby definition,∥L−1/2RS∥=∥L−1/2R(SS∗)1/2U∥ (27)≤ ∥L−1/2RL1/2∥∥U∥ (28)≤ ∥L−1/2RL1/2γ∥∥U∥ (29)≤1. (30)Now, we can move on to bound ∥L−1/2γ(L−Lm)L−1/2γ∥. To do so, we can observe that, bydefinition,Lm=SmS∗m (31)=1n1mmXi=1"ψi(t1). . .ψi(tn)#⊗"ψi(t1). . .ψi(tn)#. (32)Moreover, due to linearity of expectation,Eω[Lm] =L. (33)We can therefore apply Proposition C.4, with p=m,Q=L, and Qp=Lm. Note that TrLis thetrace of the normalized Gram matrix1nKand hence is smaller or equal to κ2under Assumption 4.1.Lastly, the value of the constant F∞(γ)in Proposition C.4 can be computed as follows:*1√n"ψi(t1). . .ψi(tn)#,1√nL−1γ"ψi(t1). . .ψi(tn)#+Rn≤α2γ. (34)Thus, we obtain, with probability at least 1−δ,∥L−1/2γ(L−Lm)L−1/2γ∥ ≤2 log8κ2γδ(1 +α2/γ)3m+s2 log8κ2γδα2γm. (35)14To conclude the proof, we can bound ∥L−1/2m,γL1/2γ∥. By Proposition B.1, we have that∥L−1/2m,γL1/2γ∥ ≤1(1−β)1/2,where β=λmaxhL−1/2γ(L−Lm)L−1/2γi. (36)According to Equation (33), we can apply Proposition C.4 and see that with probability at least 1−δβ≤2 log8κ2γδ3m+s2 log8κ2γδα2γm≤0.5 (37)provided that m≥813+α2γlog(8α2γδ).Proof of Theorem 4.4 In order to retrieve the main concentration result, we can consider the fol-lowing decomposition of the error on the posterior mean. By Cauchy-Schwarz inequality and Equa-tions (7) and (9),| ̃μpost(t∗)−μpost(t∗)|=L−1m,RSm ̃φ(t∗)−L−1RSk(t∗,·),1√nyRn(38)≤ν∥L−1m,RSm ̃φ(t∗)−L−1m,RSk(t∗,·) +L−1m,RSk(t∗,·)−L−1RSk(t∗,·)∥2(39)≤ν∥L−1m,R(Sm ̃φ(t∗)−Sk(t∗,·))∥2+ν∥(L−1m,R−L−1R)Sk(t∗,·)∥2(40)≤ν/γ∥Sm ̃φ(t∗)−Sk(t∗,·)∥2+ν∥(L−1m,R−L−1R)Sk(t∗,·)∥2. (41)Now, we can upper bound the two norms appearing in the expression above. The first addend can bedirectly bounded by applying Corollary C.3. The second addend in Equation (41) can be boundedby Lemma B.4. Hence, we obtain the following bound with probability at least 1−δ:| ̃μpost(t∗)−μpost(t∗)| ≤ν/γ∥Sm ̃φ(t∗)−Sk(t∗,·)∥2+ν∥(L−1m,R−L−1R)Sk(t∗,·)∥2 (42)≤s2ν2α4log2Tnδmγ2+√2κν√γ2 log8κ2γδ(1 +α2/γ)3m+s2 log8κ2γδα2γm.(43)The final result for the vector-valued GP can be obtained by applying a union bound.B.3 Proof of Theorem 4.5 (Deviation of Approximate Posterior Variance)In this section, we prove our result related to the concentration of the approximate posterior variance.Again, we begin by stating some lemmas that will be used in the proof.Lemma B.5. Letδ= (0,1]. Then, the following bound holds, with probability at least 1−δ,|⟨Sk(t∗,·)−Sm ̃φ(t∗), L−1RSk(t∗,·)⟩Rn| ≤s2κ2α4log2Tnδγm. (44)Proof. By Cauchy-Schwarz,|⟨Sk(t∗,·)−Sm ̃φ(t∗), L−1RSk(t∗,·)⟩Rn| ≤ ∥Sk(t∗,·)−Sm ̃φ(t∗)∥2∥L−1RSk(t∗,·)∥2. 
(45)15By using the polar decomposition of S, for a suitable partial isometry operator U, and accordingto Propositions B.1 and B.3∥L−1RSk(t∗,·)∥2≤ ∥L−1R(SS∗)1/2U∥∥k(t∗,·)∥H (46)≤κ∥L−1RL1/2∥ (47)≤κ∥L−1/2R∥∥L−1/2RL1/2∥ (48)≤κ√γ∥L−1/2RL1/2γ∥∥L−1/2γL1/2∥ (49)≤κ√γ∥L−1/2γL1/2γ∥ (50)≤κ√γ. (51)To conclude the proof, we can observe that, according to Corollary C.3,∥Sk(t∗,·)−Sm ̃φ(t∗)∥2≤s2α4log2Tnδm. (52)Lemma B.6. Letδ= (0,1]. Then, the following bound holds, with probability at least 1−δ,|⟨Sm ̃φ(t∗), L−1R(Sk(t∗,·)−Sm ̃φ(t∗))⟩Rn| ≤α2γs2α4log2Tnδm. (53)Proof. By Cauchy-Schwarz inequality and Proposition B.2,|⟨Sm ̃φ(t∗), L−1R(Sk(t∗,·)−Sm ̃φ(t∗))⟩Rn| ≤ ∥Sm ̃φ(t∗)∥2∥L−1R(Sk(t∗,·)−Sm ̃φ(t∗))∥2(54)≤ ∥Sm∥∥ ̃φ(t∗)∥2∥L−1R(Sk(t∗,·)−Sm ̃φ(t∗))∥2(55)≤α2γ∥Sk(t∗,·)−Sm ̃φ(t∗)∥2. (56)Now, we can again observe that, according to Corollary C.3,∥Sk(t∗,·)−Sm ̃φ(t∗)∥2≤s2α4log2Tnδm, (57)which concludes the proof.Lemma B.7. Letm≥813+α2γlog(8α2γδ), and δ= (0,1]. Then, the following bound holds, withprobability at least 1−δ,|⟨Sm ̃φ(t∗),(L−1R−L−1m,R)Sm ̃φ(t∗)⟩Rn| ≤α3√2√γ2 log8κ2γδ(1 +α2/γ)3m+s2 log8κ2γδα2γm.(58)16Proof. Firstly, we can observe that, by Cauchy-Schwarz inequality, Propositions B.2 and B.3, thepolar decomposition of Sm, and the fact that Lm≼Lm,γby definition, we have that|⟨Sm ̃φ(t∗),(L−1R−L−1m,R)Sm ̃φ(t∗)⟩Rn|=|⟨Sm ̃φ(t∗), L−1m,R(L−Lm)L−1RSm ̃φ(t∗)⟩Rn| (59)=|⟨L−1/2m,RSm ̃φ(t∗), L−1/2m,R(L−Lm)L−1RSm ̃φ(t∗)⟩Rn| (60)≤ ∥L−1/2m,RSm ̃φ(t∗)∥2∥L−1/2m,R(L−Lm)L−1RSm ̃φ(t∗)∥2 (61)≤α3∥L−1/2m,R(SmS∗m)1/2U∥∥L−1/2m,RL1/2m,γL−1/2m,γ(L−Lm)L−1RSm ̃φ(t∗)∥2 (62)≤α3∥L−1/2m,RL1/2m,γ∥∥L−1/2m,γL1/2γ∥∥L−1/2γ(L−Lm)L−1/2γ∥|L1/2γL−1/2R∥∥L−1/2R∥(63)≤α3√γ∥L−1/2m,γL1/2γ∥∥L−1/2γ(L−Lm)L−1/2γ∥. (64)Now, we can bound the two factors. According to Propositions B.1 and C.4, with probability at least1−δ, forδ∈(0,1]andm≥813+α2γlog(8α2γδ), we have that∥L−1/2m,γL1/2γ∥∥L−1/2γ(Lγ−Lm,γ)L−1/2γ∥ ≤√22 log8κ2γδ(1 +α2/γ)3m+s2 log8κ2γδα2γm,(65)concluding the proof.Proof of Theorem 4.5 We are now ready to prove Theorem 4.5. According to Equations (8)and (10), and similarly to what we did for the posterior mean, we can decompose the error on thevariance of a single DOF as follows:|σ2post(t∗)−σ2post(t∗)|=|k(t∗, t∗)− ⟨Sk(t∗,·), L−1RSk(t∗,·)⟩Rn− ̃k(t∗, t∗) +⟨Sm ̃φ(t∗), L−1m,RSm ̃φ(t∗)⟩Rn| (66)≤|k(t∗, t∗)− ̃k(t∗, t∗)|+|⟨Sk(t∗,·), L−1RSk(t∗,·)⟩Rn− ⟨Sm ̃φ(t∗), L−1m,RSm ̃φ(t∗)⟩Rn|(67)≤|k(t∗, t∗)− ̃k(t∗, t∗)|+|⟨Sk(t∗,·)−Sm ̃φ(t∗), L−1RSk(t∗,·)⟩Rn|+|⟨Sm ̃φ(t∗), L−1RSk(t∗,·)−L−1m,RSm ̃φ(t∗)⟩Rn| (68)≤|k(t∗, t∗)− ̃k(t∗, t∗)|+|⟨Sk(t∗,·)−Sm ̃φ(t∗), L−1RSk(t∗,·)⟩Rn|+|⟨Sm ̃φ(t∗), L−1R(Sk(t∗,·)−Sm ̃φ(t∗))⟩Rn|+|⟨Sm ̃φ(t∗),(L−1R−L−1m,R)Sm ̃φ(t∗)⟩Rn|. (69)Now, we can upper bound the four addends appearing in the decomposition above. The first addendcan by directly bounded by Corollary C.2. The second addend of the decomposition in Equation (69)can be bounded by Lemma B.5. The third addend in Equation (69) can be bounded by Lemma B.6.The last addend in Equation (69) can be bounded by Lemma B.7. In this way, we retrieve the resultof Theorem 4.5, obtaining the following bound holding with probability at least 1−δ. Having defined17C:=q2α4log2Tδm+q2κ2α4log2Tnδγm+α2γq2α4log2Tnδm+α3√2√γ"2 log8κ2γδ(1+α2/γ)3m+r2 log8κ2γδα2γm#:|σ2post(t∗)−σ2post(t∗)| ≤ |⟨ k(t∗,·), k(t∗,·)⟩H− ⟨ ̃φ(t∗), ̃φ(t∗)⟩Rm|+⟨Sk(t∗,·)−Sm ̃φ(t∗), L−1RSk(t∗,·)⟩Rn|+|⟨Sm ̃φ(t∗), L−1R(Sk(t∗,·)−Sm ̃φ(t∗))⟩Rn|+|⟨Sm ̃φ(t∗),(L−1R−L−1m,R)Sm ̃φ(t∗)⟩Rn| (70)≤C. 
(71)The final result for the vector-valued GP can be obtained by applying a union bound.C Concentration ResultsWe first provide a few lemmas for the concentration of the approximate kernel functions that derivefrom Hoeffding inequality, and then a lemma for the concentration of random operators that derivesfrom Bernstein inequality. Again, we denote by Aγthe operator A+γI.∥·∥denotes the operatornorm, while ∥·∥2denotes the Euclidean norm of a vector.C.1 Approximation of the Kernel FunctionNote that if a uniform convergence of the RF-HGP posterior is seeked w.r.t. the domain of the func-tion modelled with the HGP, our proofs could be adapted by replacing the following Corollary C.2with a uniform convergence result. For instance, in the case of RFFs, such a result can be foundin [39, Theorem 1 and Remark 1].Lemma C.1. Letδ= (0,1]. Then, for any (t1, t2), with probability at least 1−δ, it holds ̃φ(t1)T ̃φ(t2)−k(t1, t2)≤s2α4log2δm. (72)Proof. To upper bound the quantity of interest, we can use Hoeffding’s inequality for boundedrandom variables. Let Aj(t1, t2) := ψj(t1)ψj(t2)−Eωψ(ω, t1)ψ(ω, t2). Since −α2≤ψj(t1)ψj(t2)≤α2according to Assumption 4.2, by Hoeffding inequality, we have thatPr1mmXj=1Aj(t1, t2)≥tm≤2e−2t24mα4. (73)Therefore, by setting the above upper bound smaller than δ, forδ∈(0,1], we get that with proba-bility at least 1−δ ̃φ(t1)T ̃φ(t2)−k(t1, t2)=1mmXj=1Aj(t1, t2)≤s2α4log2δm. (74)Corollary C.2. Letδ= (0,1]. Then with probability at least 1−δ, it holds ̃φ(t∗)T ̃φ(t∗)−k(t∗, t∗)≤s2α4log2|T |δm,∀t∗∈ T. (75)Proof. We apply Lemma C.1 on each element of Twithδ′:=δ/T. The claimed result then followsusing a union bound.18Corollary C.3. Letδ= (0,1]. Then with probability at least 1−δ,∥Sm ̃φ(t∗)−Sk(t∗,·)∥2≤s2α4log2|T |nδm,∀t∗∈ T. (76)Proof. It holds∥Sm ̃φ(t∗)−Sk(t∗,·)∥22=1nnXi=1h ̃φ(ti)T ̃φ(t∗)−k(ti, t∗)i2(77)The result thus follows from applying nTtimes Lemma C.1 on the pairs ((ti, t∗))1≤i≤n,t∗∈Twithprobability δ′:=δ/(nd)and using a union bound.C.2 Concentration of the Kernel matrixThe following result derives from the Bernstein inequality for sums of random operators on separa-ble Hilbert spaces in operator norm.Proposition C.4 (Proposition 6 and Remark 10 of [20]) .Letv1, ...,vpwithp≥1, be independentand identically distributed random vectors on a separable Hilbert spaces Hsuch that Q=Ev⊗vis trace-class, and for any λ >0there exists a constant F∞(λ)<∞such that ⟨v,(Q+λI)−1v⟩ ≤F∞(λ)almost everywhere. Let Qp=1pPpi=1vi⊗viand take 0< λ≤ ∥Q∥. Then for any δ≥0,the following holds with probability at least 1−δ:∥Q−1/2λ(Q−Qp)Q−1/2λ∥ ≤2w(1 +F∞(λ))3p+s2wF∞(λ)p(78)where w= log8 TrQλδ. Moreover, with the same probability,λmaxhQ−1/2λ(Q−Qp)Q−1/2λi≤2w3p+s2wF∞(λ)p. (79)Moreover, for any s∈(0,1], if∥vi∥ ≤α, we have that, with probability at least 1−δ,λmaxhQ−1/2λ(Q−Qp)Q−1/2λi≤s. (80)provided that p≥2t22t3+F∞(γ)log8α2λδandλ≤ ∥Q∥.D Efficient Matrix Inversion and Online UpdatesIn this section, we show how the expression of the posterior mean and variance can easily be updatedwhen adding new samples to the dataset.We recall that the operators SmandLm,Rare matrices and are defined in Appendix A. As discussedin Appendix A, the inversion of Lm,R, involved the posteriors of Equations (9) and (10), can besimplified by applying Woodbury identity [36], as follows:L−1m,R= (SmS∗m+R)−1(81)=R−1−R−1Sm(I+S∗mR−1Sm)−1S∗mR−1. 
(82)
The posterior mean of the HGP in Equation (9) becomes:
μ̃_post(t∗) = ⟨[R^{-1} − R^{-1} S_m (I + S_m^* R^{-1} S_m)^{-1} S_m^* R^{-1}] S_m φ̃(t∗), (1/√n) y⟩_{Rⁿ}   (83)
 = φ̃(t∗)^T [I − S_m^* R^{-1} S_m (I + S_m^* R^{-1} S_m)^{-1}] S_m^* R^{-1} (1/√n) y   (84)
 = φ̃(t∗)^T [I − B(I + B)^{-1}] A,   (85)
where A := (1/√n) S_m^* R^{-1} y ∈ R^m and B := S_m^* R^{-1} S_m ∈ R^{m×m}. Moreover, in the expression of the posterior variance of Equation (10),
σ̃²_post(t∗) = ⟨φ̃(t∗), φ̃(t∗)⟩_{R^m} + n r_{t∗} − ⟨S_m φ̃(t∗), (L_m + R)^{-1} S_m φ̃(t∗)⟩_{Rⁿ},   (86)
the only term which varies with n is
⟨S_m φ̃(t∗), (L_m + R)^{-1} S_m φ̃(t∗)⟩_{Rⁿ}   (87)
 = φ̃(t∗)^T [S_m^* R^{-1} S_m − S_m^* R^{-1} S_m (I + S_m^* R^{-1} S_m)^{-1} S_m^* R^{-1} S_m] φ̃(t∗)   (88)
 = φ̃(t∗)^T [B − B(I + B)^{-1} B] φ̃(t∗).   (89)
When a new human demonstration is gathered, the training set is enlarged by adding n_new training points. This means that the matrix S_m is updated by adding n_new rows (and renormalized), containing the RF embeddings of the new training points. The same happens to the vector y and to the diagonal matrix R, which is enlarged by adding n_new rows and columns. This means that the matrices A and B support online updates. In particular, after initializing A and B to the null matrix, having collected the new embeddings in S_{m,new} ∈ R^{n_new×m} (with normalization n_new^{-1/2}) and the new noise variance values in R_new ∈ R^{n_new×n_new} (with normalization n_new^{-1}), the updates are as follows:
A ← A + (1/√n_new) S_{m,new}^* R_new^{-1} y_new,   (90)
B ← B + S_{m,new}^* R_new^{-1} S_{m,new}.   (91)
Having computed the updates, the matrices appearing in the posterior mean and variance can be computed in constant time w.r.t. the current size of the training set during the data streaming.
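As a hedged illustration of these streaming updates, here is a small NumPy sketch (our own, not the authors' implementation). For simplicity it keeps A and B in the unnormalized form Φ^T R^{-1} y and Φ^T R^{-1} Φ, since the 1/n factors cancel in the posterior expressions above; the posterior formulas in predict() follow from Eq. (85) and Eqs. (86)-(89) via the Woodbury identity.

import numpy as np

class IncrementalRFHGP:
    # Streaming RF-HGP from Appendix D: accumulate the sufficient statistics
    # A and B (Eqs. (90)-(91)) as new demonstrations arrive; each prediction
    # only inverts an (m, m) matrix.
    def __init__(self, m):
        self.A = np.zeros(m)
        self.B = np.zeros((m, m))

    def update(self, Phi_new, y_new, noise_var_new):
        # Fold in one new demonstration; Phi_new holds its RF embeddings (n_new, m).
        W = Phi_new.T * (1.0 / noise_var_new)  # columns scaled by R_new^-1
        self.A += W @ y_new
        self.B += W @ Phi_new

    def predict(self, Phi_star, noise_var_star):
        # Posterior mean/variance at test features Phi_star (T, m), using
        # mean = phi~^T (I + B)^-1 A and var = phi~^T (I + B)^-1 phi~ + noise.
        C = np.linalg.inv(np.eye(self.B.shape[0]) + self.B)
        mean = Phi_star @ (C @ self.A)
        var = np.sum((Phi_star @ C) * Phi_star, axis=1) + noise_var_star
        return mean, var

Each update call costs O(n_new m²), and each prediction O(m³ + Tm²), independently of the total number of samples seen so far, which is what makes the incremental-learning experiments of Figure 4 feasible.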
g_PPHV_GkX | Hierarchical Planning for Rope Manipulation using Knot Theory and a Learned Inverse Model
Matan Sudry, Tom Jurgenson, Aviv Tamar and Erez Karpas
Technion — Israel Institute of Technology, Haifa, Israel
{matansudry, tomj}@campus.technion.ac.il, {avivt, karpase}@technion.ac.il
Abstract: This work considers planning the manipulation of deformable 1-dimensional objects such as ropes or cables, specifically to tie knots. We propose TWISTED: Tying With Inverse model and Search in Topological space Excluding Demos, a hierarchical planning approach which, at the high level, uses ideas from knot theory to plan a sequence of rope topological states, while at the low level uses a neural-network inverse model to move between the configurations in the high-level plan. To train the neural network, we propose a self-supervised approach, where we learn from random movements of the rope. To focus the random movements on interesting configurations, such as knots, we propose a non-uniform sampling method tailored for this domain. In a simulation, we show that our approach can plan significantly faster and more accurately than baselines. We also show that our plans are robust to parameter changes in the physical simulation, suggesting future applications via sim2real.
Keywords: Knot tying, Learning, Planning
1 Introduction
Deformable object manipulation is important for many applications, such as manufacturing and robotic surgery. In particular, manipulating 1-dimensional (1D) objects such as ropes, cables, and hoses is a challenging and exciting research area that has drawn recent attention [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. There are several challenges to 1D object manipulation. Representing the state of the object is difficult, as unlike rigid objects, the object may have infinite degrees of freedom [11, 12, 13]. Perception of a rope-like object is complex due to self-occlusions, the similarity between different rope parts, and self-loops [1, 14, 15, 16, 17, 18]. Planning typically requires an effective abstraction of the states and the actions, which may be difficult to define [19, 20], and low-level control for executing a plan must handle the flexibility and deformability of the rope – all non-trivial control problems [3, 21, 22]. To the best of our knowledge, a system that can generally manipulate 1D objects is beyond the capabilities of current technology. Our focus in this work is on the planning component in 1D manipulation, particularly rope manipulation and knot tying. As mentioned above, planning for rope manipulation is non-trivial, as the state space may be large or infinite, and tasks like knot-tying essentially have a 'needle in a haystack' characteristic and require exhaustive exploration to reach desired states. Accordingly, most prior studies on rope manipulation relied on human demonstrations in lieu of automatic search [3, 23, 5, 24, 25, 6, 26, 27].
This paper tackles the problem of rope manipulation planning without any demonstrations. Our main contribution is a hierarchical search algorithm that exploits prior knowledge about knot-tying geometry for its high-level plan, with self-supervised learning of an inverse model for executing the low-level control, which we call Tying With Inverse model and Search in Topological space Excluding Demos – TWISTED. TWISTED is trained and evaluated in a physical simulation. We demonstrate, however, that our planning results are robust to variations of physical properties such as friction.
Thus, we believe that our planning approach can be integrated with real-robot perception and control for a complete 1D manipulation system in the future. We demonstrate that TWISTED can tie various types of knots and can generalize to tie knots that were not seen during the training. To the best of our knowledge, this is the first demonstration of such a capability, which cannot be obtained by previous work that required a human demonstration of the knot to tie. Finally, while TWISTED is tailored for knot tying by building on knot theory for high-level planning, our general methodology may be useful for other tasks where a well-established theory may inform the high-level characterization of the problem, while a learning-based method is used for low-level control.
2 Background
Figure 1: P-data topological state representation: each column corresponds to an intersection along the rope of L links. Row one is ordered from 1 to L in ascending order. Row two, for each intersection, defines the other link in the intersection. The label "o"/"u" classifies the vertical arrangement at each intersection (over or under). Finally, the last row identifies the "sign" (see appendix Section 9.1). E.g., in the center state representation, link 1 is over link 2 with a "+" sign.
In our work, we build on knot theory for high-level planning. In this section, we give a brief overview of knot theory. The most common way to solve problems like knot tying, with high-dimensional and continuous state and action spaces and long-horizon planning, is to separate it into a topological representation for high-level planning and a geometric representation for low-level control, which is solved using learning [24]. We represent a rope as having L links, and denote by q ∈ Q the rope configuration, with Q ⊆ R^{2L+5}. The first seven coordinates describe the global position of the middle rope link (position (x, y, z) and a quaternion representation for the rotation), and the remaining L−1 joints are each described by the yaw and pitch values of link i−1 relative to link i.
We follow Yan et al. [3], where the discrete topological representation for S is P-data (see appendix Section 9.1), and we denote by Top : Q → S the mapping from a configuration to its topological state. The "complexity" of a topological state s ∈ S is defined according to the number of crosses (link intersections, see Section 9.1) it represents (Figure 1). Knot theory [28] suggests Reidemeister moves as actions that transition the rope between topological states. In this work, we will use them as high-level actions during the search. We denote the space of Reidemeister moves as A_R, and P_R : S × A_R → S as the transition function of topological states using Reidemeister moves. Reidemeister [28] proved that between any two topological states s, s′ ∈ S, there exists a sequence of actions that starts in s and ends in s′, namely, ∃a_0, ..., a_k ∈ A_R s.t. s′ = P_R(... P_R(P_R(s, a_0), a_1) ..., a_k). The Reidemeister moves are (1) Reidemeister I (R1), which moves one segment to create a new loop, (2) Reidemeister II (R2), which pulls the middle segment and creates a new intersection with opposite signs, and (3) the Cross (C), which creates a new intersection between two segments. Examples of those actions can be visualized in Figure 4 in the appendix. Considering the knot-tying problem as a trajectory over topological states with Reidemeister moves as actions translates the original problem of directly manipulating rope configurations into a problem with a shorter horizon and a "lower" branching factor.
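To make the representation concrete, here is a minimal, hypothetical Python sketch of a P-data state. This is our own illustration: the exact P-data layout is defined in Appendix 9.1 of the paper, which is not reproduced in this excerpt, so the field names and the crossing count below are assumptions based on Figure 1.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Crossing:
    # One column of the P-data matrix in Figure 1 (row one is implicit in order).
    other_link: int  # row two: the other link involved in this intersection
    over: bool       # row three: True for "o" (over), False for "u" (under)
    sign: int        # row four: +1 or -1 crossing sign (see Appendix 9.1)

@dataclass(frozen=True)
class PData:
    # Topological state s: intersections ordered along the rope.
    crossings: Tuple[Crossing, ...]

    def num_crosses(self) -> int:
        # Cross(s): number of intersection columns, used to rank state complexity.
        return len(self.crossings)

An untangled rope corresponds to a state with an empty crossings tuple; Cross(s) is the quantity the planner uses below to prefer more complex (deeper) states.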
Planning over topological states in this way has been adopted in different algorithms [3, 23, 5, 24, 25]. Finally, we use the topological motion primitives action space [3]: when manipulating a rope with L links, an action is a curve c ∈ C, parameterized by the link to grab l ∈ [1, L] (we associate a fixed point with every link), an endpoint (x, y) in the workspace, and the maximal height z_max. We denote the transition function for curves applied on configurations by f : Q × C → Q. Yan et al. [3] observed that the space of curves C approximates well all the possible Reidemeister moves available from a given topological state.
Figure 2: TWISTED: Given initial and goal topological states, we iteratively call a high-level planner to find plans to follow (top row). The plan uses an inverse model to transition between consecutive topological states (bottom right). When following a plan, new information is integrated into a tree of all known configurations, which seeds the high-level planner with initial states (bottom left). Gray boxes are high-level states, green boxes are low-level states (rope configuration), blue boxes represent the inverse model, and our environment in Mujoco is red. See Section 4.
For this reason, in this work, we follow Yan et al. [3], and plan using Reidemeister moves while manipulating the rope with curves.
3 Related work
Manipulating deformable objects presents varying degrees of difficulty [29]. For rope manipulation, recent studies focused on either learning from human demonstrations [2, 6, 3] or solving short-term plans (e.g., changing the shape of the rope, but not tying a knot) through pick-and-place actions [7, 27]. Differently, in our work we tackle long-term planning, such as knot tying, without demonstrations. To handle the challenging deformable-object dynamics of ropes, previous methods used self-supervised learning [10, 26, 17]. We also use self-supervised collected data to train our inverse model. Finally, some previous works attempted to learn rope manipulation using reinforcement learning (RL) methods [30, 31, 32]. However, as we show, our knot-tying tasks are too complicated for off-the-shelf RL algorithms (see Section 5).
Several recent works studied the simpler task of rope un-tying [33, 34, 35, 15, 36]. In particular, [33, 34] used graph-based search algorithms. Knot un-tying is simpler because the end goal is always the same (untangled rope), while in our work we specifically focus on a large space of possible goals, which renders un-tying methods inapplicable.
4 TWISTED
In this section we describe the components that comprise our solution – TWISTED. We start with the description of the simulated environment in Section 4.1, we follow with the description of the algorithmic components in Sections 4.2, 4.3, and 4.4, and finally describe our data collection methodology in Section 4.5. See Figure 2 for an overview.
4.1 Simulated Environment
The environment we used to learn and test TWISTED was created with the free, open-source simulation environment Mujoco [37]. It includes the default rope of Mujoco and the end-effector moving the rope. We used a free-moving end-effector to focus on the complexity of tying knots, ignoring the additional complexity of controlling a robot manipulator (although non-trivial, we expect that common motion planning solutions could be utilized in order to bridge the gap from a free-moving end-effector to a complete robotic manipulator). It is crucial to mention that
It is crucial to mention that during planning, we run actions in the simulation itself, meaning that both evaluation and search use the same Mujoco environment (i.e., the search acts with a perfect world model).

4.2 Planning Algorithm

TWISTED (code available at https://github.com/matansudry/twisted) is best summarized as an iterative algorithm that, given an initial rope configuration qinit and a goal state sg, repeats three steps: (1) start searching from a known reachable configuration, (2) plan a high-level trajectory whose states are in S and actions in AR, and (3) use a low-level planner to follow subsequent states in the selected high-level plan. The iterative process of TWISTED repeats until sg is reached (success) or a pre-specified timeout expires (terminating in failure). See Algorithm 1.

Algorithm 1 TWISTED algorithm
Input: initial configuration qinit and topological goal state sg
Output: a low-level plan, if found
1: init: T, P ▷ see data structures
2: sinit = Top(qinit)
3: populate P with plans from sinit
4: while not timeout do
5:   sslct = SelectTopologicalState()
6:   Pslct = SelectPlan(sslct)
7:   qslct = SelectConfig(sslct)
8:   PlanFound = FollowPlan(qslct, Pslct)
9:   if PlanFound then
10:    return Plan
11:  else
12:    RandomExpand()
13:  end if
14: end while

Data structures: we maintain two data structures, a tree of known reachable configurations and a set of high-level plans. The known reachable configurations form a tree T whose vertices are configurations of the rope with their corresponding topological states (q, Top(q)) and whose edges are low-level actions in C. Initially, T contains only a root node: the rope's initial configuration qinit and its topological state sinit. We also maintain a list of high-level plans, P = {Pi = (s0, s1, . . . , sli = sg)}i, from currently reachable topological states (see Section 4.3). When a topological state s′ is discovered for the first time by the low-level planner (Section 4.3), we run the high-level planner from s′ and store the results in P.

Plan selection: At the start of each iteration, we need to select a plan to execute from P and a configuration from which to start executing it. One naive heuristic is to select a random configuration from the reachable configurations. However, due to the problem's sparsity, we observe that during the search, configurations in T with more crosses are exponentially fewer than those with fewer crosses. We thus seek an approach that promotes configurations corresponding to topological states with more crosses. We therefore run the following three sub-procedures in sequence:

SelectTopologicalState(): identifies the reachable topological states in T that have a high-level plan to the goal, and samples one such topological state s according to one of two heuristics: random, which is the uniform distribution (even this heuristic prioritizes complex topological states, as sampling a random topological state induces a different distribution than sampling directly from all reachable configurations in T), and prioritize-crosses, which assigns s ∈ S a probability proportional to Cross(s) (i.e., it prefers topological states with more crosses, motivating a deeper search than the random heuristic).

SelectPlan(s): a high-level plan (sequence of topological states) P = (s, s1, . . . , sl = sg) is randomly selected from all the high-level plans that start in s.

SelectConfig(s): randomly select a configuration from all configurations belonging to s.
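As an illustration of the two selection heuristics, a minimal sketch of SelectTopologicalState (variable names are ours); the guard for the all-zero case is our addition, since the initial untangled state has zero crosses.

import numpy as np

def select_topological_state(candidates, cross_counts, heuristic, rng):
    # candidates: reachable topological states with a high-level plan to s_g
    # cross_counts: Cross(s) for each candidate
    if heuristic == "random":
        probs = np.full(len(candidates), 1.0 / len(candidates))
    else:  # "prioritize-crosses": probability proportional to Cross(s)
        counts = np.asarray(cross_counts, dtype=float)
        if counts.sum() == 0:  # e.g., only the untangled state is known
            probs = np.full(len(counts), 1.0 / len(counts))
        else:
            probs = counts / counts.sum()
    return candidates[rng.choice(len(candidates), p=probs)]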
Plan execution: Next, in FollowPlan, we follow the high-level plan P = (s0, s1, . . . , sl), where s0 = s and sl = sg, starting in s, and incrementally try to reach si>0 until sg is reached. To transition from si to si+1, we use the low-level planner (Section 4.3), which uses the learned inverse model (Section 4.4) to predict curves. The low-level planner applies multiple curves {cj}j to the current configuration qi. Let q′j = f(qi, cj) and s′j = Top(q′j). If the transition for cj reaches a configuration with more crosses, we add this information to T, noting that it could be the case that s′j ≠ si+1 because the inverse model is not perfect. Nevertheless, this is an executable low-level action; thus, we add it to our search tree.

Completeness guarantee: Finally, for completeness of the algorithm, after every iteration of TWISTED, with probability p (a hyper-parameter, with a default value of 0.05), we sample a random reachable configuration q ∈ T, execute k = 100 random actions, and add them to T if the action transitions to a topological state with a greater or equal number of crosses (the same condition as in "plan execution" above). This ensures that, given enough time, our algorithm is guaranteed to find a solution. We denote this sub-routine RandomExpand.

4.3 Planning and Search

TWISTED is composed of two levels of planning, high-level and low-level, that are called as sub-procedures by the algorithm. We now describe the two planners with their states and actions.

High-level planner: with state space S and Reidemeister moves AR as actions, it finds all paths from a currently reachable topological state s ∈ T (not necessarily sinit) to sg ∈ S using breadth-first search (BFS). Our BFS prunes new states s′ with Cross(s′) > Cross(sg), and returns a (possibly empty) set of all paths that start in s and terminate in sg.

Low-level planner: Given s and s′, two consecutive topological states in the high-level plan, we search for a curve c ∈ C that traverses from s to s′. To find such a curve successfully and efficiently, we utilize our inverse model (Section 4.4) to generate curves {ci ∈ C}K (K = 6). If any of the newly found topological states is s′, we return success (if more than one action succeeds, we use the first one found), and the plan execution moves to the next topological state in the high-level path. Otherwise, we return failure, and the iterative process of TWISTED repeats.

4.4 Inverse model

An action generator is crucial in knot tying, as the proportion of curves that transition the rope to a given topological state can be extremely small (see Section 5.1, where we show the data collection difficulties). This makes it unlikely that a small set of randomly selected curves would satisfy the required transition between topological states. Thus, we trained an inverse model to generate action candidates that are likely to satisfy the required transition. The inverse model is an auto-regressive model [38] that predicts the action elements in order: the link to pick up, the height of the curve zmax, the destination x position, and the destination y position. The link is categorical and modeled as a multinomial distribution; the other elements are continuous and modeled with Normal distributions. Every element is predicted by an independent sub-network, whose inputs are: (1) the current configuration, (2) the current (x, y, z) coordinates of each of the L links, (3) the next topological state s′, and (4) all the elements preceding the current element (e.g., zmax gets the link index as input).
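A minimal sketch of the auto-regressive structure and sampling just described (see also Figure 5); the module sizes and names are ours, but the prediction order (link, zmax, x, y) and the distribution types follow the text.

import torch
import torch.nn as nn

class InverseModel(nn.Module):
    # Auto-regressive action generator: link -> z_max -> x -> y.
    def __init__(self, ctx_dim: int, L: int, hidden: int = 256):
        super().__init__()
        def head(extra: int, out: int) -> nn.Module:
            # One independent sub-network per predicted element.
            return nn.Sequential(nn.Linear(ctx_dim + extra, hidden),
                                 nn.ReLU(), nn.Linear(hidden, out))
        self.link_head = head(0, L)   # multinomial over the L links
        self.zmax_head = head(1, 2)   # Normal: outputs (mean, log_std)
        self.x_head = head(2, 2)
        self.y_head = head(3, 2)

    def sample(self, ctx: torch.Tensor):
        # ctx encodes (current configuration, link positions, next
        # topological state s'). Ancestral sampling: each element
        # conditions on the previously sampled ones.
        link = torch.distributions.Categorical(
            logits=self.link_head(ctx)).sample().float().unsqueeze(-1)
        # link is a 0-based index; add 1 for the 1-based link numbering.

        def sample_normal(head, prev):
            mean, log_std = head(torch.cat([ctx, prev], -1)).chunk(2, -1)
            return torch.distributions.Normal(mean, log_std.exp()).sample()

        z_max = sample_normal(self.zmax_head, link)
        x = sample_normal(self.x_head, torch.cat([link, z_max], -1))
        y = sample_normal(self.y_head, torch.cat([link, z_max, x], -1))
        return link, z_max, x, y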
See Figure 5 in appendix Section 9.4.

Training: we collected data generated from random actions to train the inverse model (see Section 4.5). The data D contains transitions (s, s′, q, c), where s and s′ are the current and following topological states, q is the current configuration, and c is the curve taken. Since data collection is time-consuming, we follow the previous work of Yan et al. [3] and apply the Mirror and Reverse augmentations to our data (in Yan et al. [3] these augmentations were applied to manual demonstrations; in our work, we apply them to randomly collected data). We train the model via a maximum likelihood objective on D (predicting c).

Inference: during inference, we follow the standard ancestral sampling scheme for auto-regressive models [38]; we predict a distribution for every element, sample from it, and feed the result in to predict the next element in the sequence.

4.5 Data collection

To train the inverse model, we must collect data that represents the movements typically encountered when tying knots. The problem, however, is that without a controller that knows how to tie knots, nor human demonstrations, it is not clear how to collect such data. Initially, we tried to collect rollouts simply by executing random walks of curves. However, in doing so, we found a very low number of topological states with three crosses (only 27 states per CPU core per hour), demonstrating that applying random actions to the rope typically does not lead to knot-like configurations. We therefore designed a collection scheme that selectively resets the environment. Using our scheme, we collected 537 successful transitions per hour per CPU core, and used it to collect a data set of 1,670,000 data points. For full details, see Appendix 9.3.

5 Experiments

The experiments aim to address the following questions: (1) How sensitive is knot-tying planning to the action space, and is a continuous action space necessary? (2) How does TWISTED compare with baselines? (3) How sensitive is TWISTED to changes in the physical simulation? (4) How well does TWISTED generalize to unseen knots?

5.1 What makes knot-tying difficult?

One difficulty of our knot-tying problem is that it requires very accurate actions to solve. To demonstrate this, we verify that even a fine discretization of the problem leads to significantly different outcomes. In this experiment, we measure sensitivity to discretization of curves, i.e., we test whether a discretized curve reaches the same topological state as the one obtained by executing the original, non-discretized curve. Formally, given a curve c = (l, zmax, x, y) ∈ C, which includes three continuous elements (zmax, x, and y), we convert it to a discrete curve where each element is rounded: zmax is discretized in steps of 0.001, and x and y in steps of 0.01. Notice that the size of the discretized curve space, 21 × 70 × 100 × 100 = 14,700,000, is already rather large. We measure the accuracy of the resulting topological states. If the accuracy were high, there would be little difference in discretizing the action space, suggesting that knot tying could be solved using off-the-shelf discrete planners [39, 40]. We ran over 600k data points of transitions from topological states with two crosses to topological states with three crosses, and got an accuracy of 82%. These results show that knot tying is sensitive to discretization, as very small changes in the actions can lead to different topological states.
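A minimal sketch of the rounding used in this sensitivity check, reusing the hypothetical Curve structure sketched earlier; the grid steps are those stated above.

def discretize_curve(c: "Curve") -> "Curve":
    # z_max is rounded to steps of 0.001, x and y to steps of 0.01; the
    # link index is already discrete. The resulting grid has
    # 21 * 70 * 100 * 100 = 14,700,000 actions.
    return Curve(
        link=c.link,
        x=round(c.x / 0.01) * 0.01,
        y=round(c.y / 0.01) * 0.01,
        z_max=round(c.z_max / 0.001) * 0.001,
    )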
As the space of available discretized actions is already rather large (larger action spaces would make planning even more difficult), we conclude that discretization of the action space is not a suitable approach to the knot-tying problem.

5.2 Success Rate of Different Algorithms

In this experiment, we compare several algorithms, including TWISTED and its ablations.

Low level only: We modify TWISTED to not use any high-level information, essentially using random search over configurations with curves as actions. As there is no notion of topological states, there is no way to use the inverse model here; instead, we sample random curves. It is important to notice that for this baseline, the search does not get feedback along the trajectory (in TWISTED we do, for instance, count the number of crosses). We use this baseline to demonstrate how crucial high-level information is for knot tying.

Low+high level: We modify TWISTED to not use the inverse model; instead, we sample random actions in place of those suggested by the inverse model. Unlike the previous baseline, we do try to follow a high-level plan. This baseline demonstrates the trade-off between compute-intensive but more accurate action prediction (the inverse model) and guessing many random actions and checking whether any suffice.

SAC+HER: In this baseline, we learn a stochastic policy using Soft Actor-Critic (SAC) [41] with Hindsight Experience Replay (HER) [42], and after training we replace our inverse model with the policy. The objective of this baseline is to establish the performance of model-free RL methods on the challenging problem of knot tying.

TWISTED, RND: TWISTED using the random heuristic for topological state selection.

TWISTED, CRS: TWISTED using the prioritize-crosses heuristic for topological state selection.

Evaluation protocol: To evaluate performance at different difficulty levels, we split our collected data D (Section 4.5) into three levels: easy, medium, and hard. To classify the problems (topological goal states), we counted the frequency with which each topological state was recorded; easy, medium, and hard correspond to the 33rd, 66th, and 100th percentiles of the goal states appearing most often in the data. From every class of problems, we sampled ten representatives.

Results: None of the algorithms solves medium or hard within the time limit of 1800 seconds, demonstrating the hardness of the knot-tying problem. Figure 3 (a) shows the number of solved tasks vs. the running time for easy problems. First, observe that "low level only" is barely able to solve two out of the ten problems. This validates our earlier hypothesis in Section 4.5 that the problem is too sparse to solve without prior knowledge of the problem structure (such as our high-level search). Surprisingly, the model-free RL baseline is barely better than random search. We observed that during training it did manage to consistently solve all 1-cross problems, but already for 2-cross problems its success rate was near zero. This suggests that knot tying is a hard task to learn end-to-end without proper domain knowledge. We hypothesize that the main reason this baseline fails is the discrete nature of the topological states: in such cases, algorithms cannot generalize between "similar" states because, as categorical variables, there is no notion of similarity, only the relation of equality.
Even a well-utilized exploration method such as HER does little to mitigate this problem, because it can only reinforce patterns for goals we actually reach, and, as we saw, when acting randomly, as RL agents do at the beginning of training, there is little chance of advancing to topological states with many crosses (see Section 4.5). Finally, regarding the baselines, we see that because "low + high level" is so inferior to the full TWISTED versions, the inverse model is a valuable component of our full solution. Comparing "TWISTED, RND" and "TWISTED, CRS", we observe that the results are not conclusive. To identify the better model, we sampled 15 additional easy goals to get a statistically significant separation between the two variants of TWISTED. "TWISTED, CRS" solved a total of 24/25, while "TWISTED, RND" solved only 19/25. Under a Z-test, "TWISTED, CRS" is better than "TWISTED, RND" with statistical significance (using an α-level of 0.05), showing that planning deeper and utilizing prior knowledge (the number of crosses) is preferable. Therefore, in our subsequent experiments, we use the "TWISTED, CRS" version.
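For reference, one standard choice for such a comparison is a two-proportion z-test; the paper does not specify which test it used, so the following computation on the reported counts (24/25 vs. 19/25) is purely illustrative.

from math import sqrt

def two_proportion_z(succ_a: int, n_a: int, succ_b: int, n_b: int) -> float:
    # z-statistic for H0: the two success probabilities are equal.
    p_a, p_b = succ_a / n_a, succ_b / n_b
    pooled = (succ_a + succ_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(24, 25, 19, 25)
print(f"z = {z:.2f}")  # ~2.04, above the one-sided alpha = 0.05 cutoff of 1.645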
5.3 Sensitivity Analysis

To motivate the use of TWISTED in real-world applications, we test what happens when the world model used in the planning computation is mismatched with the evaluation environment. In our experiments, we focus on the friction coefficient. First, we validate that friction indeed significantly impacts rope tying. To measure this, we compare the next topological state observed when applying the same action from the same low-level state under different friction coefficients. We evaluated over 600,000 actions, and only 82% of the curves reached the same topological state as with the original friction value. Next, we evaluate the performance of TWISTED, trained with a single friction coefficient, on simulated environments with different rope frictions.

Variants: 100% friction denotes the default Mujoco friction, the one we use for TWISTED. The 95% and 105% variants denote decreasing and increasing the friction by 5%, respectively.

Results: The performance of TWISTED is well maintained across the different friction coefficients (Figure 3 (b)). This indicates that TWISTED can handle some variation in the environment's physics, such as friction (even though the resulting trajectories might differ from the original ones).

5.4 Generalization to Unseen Topological States

The numbers of available topological states with 3 and 4 crosses are above 500 and almost 8000, respectively. Naturally, our data does not contain all of them, because it is hard to sample topological states with higher numbers of crosses (see Section 4.5 for analysis). Thus, we require our algorithm to handle unseen topological states. We evaluate whether TWISTED can tie knots whose goal state was not represented in D. For this, we take topological states with three crosses not seen in D, and topological states with four crosses. In these out-of-distribution cases, we expect the inverse model to contribute less than for well-represented states, and we expect the planning components of TWISTED to compensate for this distribution shift. For this reason, we extend the search time by a factor of 4×. Results are shown in Figure 3 (c): TWISTED solved two out of eight problems with unseen three-cross states. Those states are harder to reach because they were never seen in D during data collection. Figure 3 (c) also shows that TWISTED solved three out of ten problems with unseen four-cross states. We recall that our data contains only one, two, and three crosses, and therefore these results show that TWISTED is not only memorizing the data, but can generalize to some degree to unseen goal states.

Figure 3: Anytime success rates for different settings. The x-axis is the algorithm run time and the y-axis is the number of solved problems. (a) Different knot-tying algorithms, (b) "TWISTED, CRS" with different rope friction coefficients, and (c) "TWISTED, CRS" on unseen states with three and four crosses, with eight and ten problems, respectively.

(a) Success rate of different algorithms
    Algorithm        Solved problems
    Low level only   2
    Low+high level   4
    SAC+HER          3
    TWISTED, RND     10
    TWISTED, CRS     10

(b) Sensitivity analysis
    Friction   Solved problems
    95%        9
    100%       10
    105%       9

(c) Generalization to unseen topological states
    Goal states   Solved problems
    3 crosses     2/8
    4 crosses     3/10

Table 1: Experiments summary: (a) success rate of different algorithms, (b) sensitivity analysis, and (c) generalization to unseen topological states.

6 Limitations

Our work has several important limitations that need to be addressed in order to make it more practical and useful in real-world applications. First, simulation accelerates the training process but introduces a sim2real gap between simulated and real-world performance; this gap should be tested on a real robot using real ropes. Furthermore, in this work we also simplified the problem: we control a free-moving end-effector instead of controlling a manipulator (which might make some curves infeasible in some scenarios), and we are given a perfect representation of the rope, whereas in reality we would first need to estimate one. Finally, as our experiments demonstrate, TWISTED shows better performance on the frequent goals of the easy problems, but its performance decreases when trying to solve rarer or unseen goals from medium and hard.

7 Outlook

We presented TWISTED, a hierarchical planning algorithm for knot tying that relies on knot theory and a learned inverse model to automatically solve problems that previously required access to human demonstrations. TWISTED outperforms various baselines, including a model-free deep RL agent, and we demonstrated robustness to simulation parameters such as friction, and generalization to problems not seen during training (even to problems of greater complexity). We see this as an important step towards general 1D object manipulation and, to the best of our knowledge, this is the first work that manages to tie knots using random data instead of demonstrations.

An exciting area for improvement would be to utilize TWISTED as a demonstration provider to generate "valuable" data for an off-policy RL algorithm, either by distilling the planner into a policy [43] or by combining RL with imitation learning [44]. This could be the missing prior knowledge that RL methods lack for knot-tying tasks (cf. Section 5). A different interesting future direction might be to improve the data collection process to seek novel states instead of relying on random actions; collecting data from states where the system is less capable could ultimately provide data of higher quality and improve the performance of our learned inverse model. Finally, it would be interesting to test TWISTED on a real system, overcoming challenges such as estimating the rope configuration and executing precise rope manipulation actions.

8 Acknowledgement

Supported by a grant from the Israeli Planning and Budgeting Committee. This work received funding from the European Union (ERC, Bayes-RL, Project Number 101041250).
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

References

[1] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, M. Laskey, K. Stone, J. E. Gonzalez, and K. Goldberg. Learning rope manipulation policies using dense object descriptors trained on synthetic depth data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 9411–9418. IEEE, 2020.
[2] J. Van Den Berg, S. Miller, D. Duckworth, H. Hu, A. Wan, X.-Y. Fu, K. Goldberg, and P. Abbeel. Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations. In 2010 IEEE International Conference on Robotics and Automation, pages 2074–2081. IEEE, 2010.
[3] M. Yan, G. Li, Y. Zhu, and J. Bohg. Learning topological motion primitives for knot planning. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 9457–9464. IEEE, 2020.
[4] H. Mayer, F. Gomez, D. Wierstra, I. Nagy, A. Knoll, and J. Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. Advanced Robotics, 22(13-14):1521–1537, 2008.
[5] Y. She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson. Cable manipulation with a tactile-reactive gripper. The International Journal of Robotics Research, 40(12-14):1385–1401, 2021.
[6] J. Schulman, J. Ho, C. Lee, and P. Abbeel. Learning from demonstrations through the use of non-rigid registration. In Robotics Research, pages 339–354. Springer, 2016.
[7] Y. Wu, W. Yan, T. Kurutach, L. Pinto, and P. Abbeel. Learning to manipulate deformable objects without demonstrations. In 16th Robotics: Science and Systems, RSS 2020. MIT Press Journals, 2020.
[8] M. Yu, H. Zhong, and X. Li. Shape control of deformable linear objects with offline and online learning of local linear deformation models. In 2022 International Conference on Robotics and Automation (ICRA), pages 1337–1343. IEEE, 2022.
[9] V. Lim, H. Huang, L. Y. Chen, J. Wang, J. Ichnowski, D. Seita, M. Laskey, and K. Goldberg. Real2sim2real: Self-supervised learning of physical single-step dynamic actions for planar robot casting. In 2022 International Conference on Robotics and Automation (ICRA), pages 8282–8289. IEEE, 2022.
[10] C. Chi, B. Burchfiel, E. Cousineau, S. Feng, and S. Song. Iterative residual policy: for goal-conditioned dynamic manipulation of deformable objects. arXiv preprint arXiv:2203.00663, 2022.
[11] W. Yan, A. Vangipuram, P. Abbeel, and L. Pinto. Learning predictive representations for deformable objects using contrastive estimation. In Conference on Robot Learning, pages 564–574. PMLR, 2021.
[12] A. Wang, T. Kurutach, K. Liu, P. Abbeel, and A. Tamar. Learning robotic manipulation through visual planning and acting. In Robotics: Science and Systems, 2019.
[13] Y. Wi, P. Florence, A. Zeng, and N. Fazeli. Virdo: Visio-tactile implicit representations of deformable objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 3583–3590. IEEE, 2022.
[14] A. Ganapathi, P. Sundaresan, B. Thananjeyan, A. Balakrishna, D. Seita, J. Grannen, M. Hwang, R. Hoque, J. E. Gonzalez, N. Jamali, et al. Learning dense visual correspondences in simulation to smooth and fold real fabrics. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 11515–11522. IEEE, 2021.
[15] J. Grannen, P. Sundaresan, B. Thananjeyan, J. Ichnowski, A. Balakrishna, V. Viswanath, M. Laskey, J. Gonzalez, and K. Goldberg. Untangling dense knots by learning task-relevant keypoints. In Conference on Robot Learning, pages 782–800. PMLR, 2021.
[16] L. Yen-Chen, P. Florence, J. T. Barron, T.-Y. Lin, A. Rodriguez, and P. Isola. Nerf-supervision: Learning dense object descriptors from neural radiance fields. In 2022 International Conference on Robotics and Automation (ICRA), pages 6496–6503. IEEE, 2022.
[17] M. Yan, Y. Zhu, N. Jin, and J. Bohg. Self-supervised learning of state estimation for manipulating deformable linear objects. IEEE Robotics and Automation Letters, 5(2):2372–2379, 2020.
[18] X. Ma, D. Hsu, and W. S. Lee. Learning latent graph dynamics for visual manipulation of deformable objects. In 2022 International Conference on Robotics and Automation (ICRA), pages 8266–8273. IEEE, 2022.
[19] B. Lu, H. K. Chu, and L. Cheng. Dynamic trajectory planning for robotic knot tying. In 2016 IEEE International Conference on Real-time Computing and Robotics (RCAR), pages 180–185. IEEE, 2016.
[20] T. Osa, N. Sugita, and M. Mitsuishi. Online trajectory planning and force control for automation of surgical tasks. IEEE Transactions on Automation Science and Engineering, 15(2):675–691, 2017.
[21] B. Lu, H. K. Chu, and L. Cheng. Robotic knot tying through a spatial trajectory with a visual servoing system. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 5710–5716. IEEE, 2017.
[22] S. Jin, C. Wang, and M. Tomizuka. Robust deformation model approximation for robotic cable manipulation. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6586–6593. IEEE, 2019.
[23] J. Takamatsu, T. Morita, K. Ogawara, H. Kimura, and K. Ikeuchi. Representation for knot-tying tasks. IEEE Transactions on Robotics, 22(1):65–78, 2006.
[24] T. Morita, J. Takamatsu, K. Ogawara, H. Kimura, and K. Ikeuchi. Knot planning from observation. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), volume 3, pages 3887–3892. IEEE, 2003.
[25] H. Wakamatsu, A. Tsumaya, E. Arai, and S. Hirai. Planning of one-handed knotting/raveling manipulation of linear objects. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA'04. 2004, volume 2, pages 1719–1725. IEEE, 2004.
[26] A. Nair, D. Chen, P. Agrawal, P. Isola, P. Abbeel, J. Malik, and S. Levine. Combining self-supervised learning and imitation for vision-based rope manipulation. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2146–2153. IEEE, 2017.
[27] Y. Teng, H. Lu, Y. Li, T. Kamiya, Y. Nakatoh, S. Serikawa, and P. Gao. Multidimensional deformable object manipulation based on dn-transporter networks. IEEE Transactions on Intelligent Transportation Systems, 2022.
[28] K. Reidemeister. Knot Theory. BCS Associates, 1983.
[29] J. Matas, S. James, and A. J. Davison. Sim-to-real reinforcement learning for deformable object manipulation. In Conference on Robot Learning, pages 734–743. PMLR, 2018.
[30] X. Lin, Y. Wang, J. Olkin, and D. Held. Softgym: Benchmarking deep reinforcement learning for deformable object manipulation. In Conference on Robot Learning, pages 432–448. PMLR, 2021.
[31] H. Han, G. Paul, and T. Matsubara. Model-based reinforcement learning approach for deformable linear object manipulation. In 2017 13th IEEE Conference on Automation Science and Engineering (CASE), pages 750–755. IEEE, 2017.
[32] Y. Deng, C. Xia, X. Wang, and L. Chen. Deep reinforcement learning based on local GNN for goal-conditioned deformable object rearranging. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1131–1138. IEEE, 2022.
[33] V. Viswanath, J. Grannen, P. Sundaresan, B. Thananjeyan, A. Balakrishna, E. Novoseller, J. Ichnowski, M. Laskey, J. E. Gonzalez, and K. Goldberg. Disentangling dense multi-cable knots. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3731–3738. IEEE, 2021.
[34] K. Shivakumar, V. Viswanath, A. Gu, Y. Avigal, J. Kerr, J. Ichnowski, R. Cheng, T. Kollar, and K. Goldberg. Sgtm 2.0: Autonomously untangling long cables using interactive perception. arXiv preprint arXiv:2209.13706, 2022.
[35] V. Viswanath, K. Shivakumar, J. Kerr, B. Thananjeyan, E. Novoseller, J. Ichnowski, A. Escontrela, M. Laskey, J. E. Gonzalez, and K. Goldberg. Autonomously untangling long cables. arXiv preprint arXiv:2207.07813, 2022.
[36] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, J. Ichnowski, E. Novoseller, M. Hwang, M. Laskey, J. E. Gonzalez, and K. Goldberg. Untangling dense non-planar knots by learning manipulation features and recovery policies. arXiv preprint arXiv:2107.08942, 2021.
[37] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
[38] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep autoregressive networks. In International Conference on Machine Learning, pages 1242–1250. PMLR, 2014.
[39] G. M. J. Chaslot, M. H. Winands, H. J. v. d. Herik, J. W. Uiterwijk, and B. Bouzy. Progressive strategies for Monte-Carlo tree search. New Mathematics and Natural Computation, 4(03):343–357, 2008.
[40] H. Finnsson and Y. Björnsson. Simulation-based approach to general game playing. In AAAI, volume 8, pages 259–264, 2008.
[41] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861–1870. PMLR, 2018.
[42] M. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin, O. Pieter Abbeel, and W. Zaremba. Hindsight experience replay. Advances in Neural Information Processing Systems, 30, 2017.
[43] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[44] A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 6292–6299. IEEE, 2018.

9 Appendix

9.1 P-data, an abstract representation for rope states

A common way to abstract the state space in knot tying is to use the P-data representation [24]. The P-data representation translates a rope configuration to a matrix of discrete values whose size depends on the number of link intersections. The stages of the P-data algorithm are: (1) project the 3D rope onto the 2D horizontal plane; (2) select a rope direction by defining the head and tail of the rope; (3) move from head to tail and count the intersections along the path, numbering them from 1 to N. These intersections are also called crosses.
Finally, (4) each intersection gets an over/under value based on which segment is above the other in the height dimension, and also gets a sign, plus or minus. The sign is defined as

sign = (l_over × l_under) / |l_over × l_under| · e_z,

where e_z is the unit normal of the horizontal plane, and l_over and l_under are the directional vectors of the two strands. Examples of the P-data of projected knots are shown in Figure 1.

9.2 Reidemeister Moves

The various Reidemeister moves are depicted in Figure 4.

Figure 4: (a) Reidemeister move one, (b) Reidemeister move two, and (c) cross.

9.3 Collection process

Our data collection process is split into two steps: the first is random sampling with resets, and the second is noisy re-sampling.

Random sampling with resets works as follows: We maintain a set of configurations we have already seen during data collection, together with their respective numbers of crosses, i.e., DQ = {(q, Cross(Top(q)))}q∈D. For every data collection iteration t, we load the simulation with a configuration sampled uniformly, qt ∼ DQ, take 100 random curves cit (for i ∈ [1 . . . 100]), and reach a new configuration qit+1 with topological state sit+1. We sample the curve parameters uniformly: the link is discrete and sampled from [1, 21]; the other three, x, y, and zmax, are continuous variables sampled from [-0.5, 0.5], [-0.5, 0.5], and [0.001, 0.07], respectively. If Cross(sit+1) > Cross(st), the transition (qt, st, cit, qit+1, sit+1) is added to D, and the configuration qit+1 is added to DQ.

In noisy re-sampling, the goal is to increase the amount of data using previously collected data. First, we sample transitions from our data set (thus they are "interesting", i.e., they move the agent to a higher number of crosses), add noise to the action, and obtain a new transition based on the same starting configuration and the modified action. The noise distribution is uniform and characterized by four parameters that offset the current action: the link offset is sampled from {−1, 1}; the x, y, and zmax offsets are sampled from [-0.05, 0.05], but the resulting values are clipped to stay within the limits of [0.001, 0.07] for zmax and [-0.5, 0.5] for x and y. See Algorithm 2 for more information.

Algorithm 2 Data Collection
1: while time < TimeBudgetStageOne do
2:   state = GetStateFromData() ▷ Select a state randomly
3:   PotentialActions = SampleRandomActions()
4:   for action in PotentialActions do
5:     NextState = RunActionInSim(action)
6:     if ValidState(NextState) then ▷ Checks whether the action increases the number of crosses
7:       SaveAction(action)
8:     else
9:       ResetState()
10:    end if
11:  end for
12: end while
13: while time < TimeBudgetStageTwo do
14:   state, action = GetStateAndActionFromData() ▷ Select a tuple randomly
15:   noise = SampleActionNoise()
16:   NewAction = action + noise
17:   NextState = RunActionInSim(NewAction)
18:   if ValidState(NextState) then ▷ Checks whether the action increases the number of crosses
19:     SaveAction(NewAction)
20:   else
21:     ResetState()
22:   end if
23: end while

9.4 Inverse model

Our inverse model architecture is detailed in Figure 5.

Figure 5: Inverse model: an auto-regressive stochastic network. The network predicts an action in an auto-regressive manner: first it predicts the link index l ∈ [1, L], then the height of the curve zmax, and finally the x and y coordinates of the curve. All predictions are stochastic (multinomial for the link index, and Gaussian otherwise). Besides the previous elements, the input of each element includes the current configuration qt, the next topological state st+1, and the positions of all the rope links.
The weights of the sub-components are not shared.
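To make the noisy re-sampling stage of Algorithm 2 (Appendix 9.3) concrete, here is a minimal sketch using the offsets and clipping ranges stated above; the Curve structure is the hypothetical one from the earlier sketch, and clipping the link index into [1, L] is our addition to keep it valid.

import numpy as np

def perturb_curve(c: "Curve", L: int, rng: np.random.Generator) -> "Curve":
    # Link offset from {-1, +1}; x, y, z_max offsets uniform in
    # [-0.05, 0.05], with results clipped back into the valid ranges.
    return Curve(
        link=int(np.clip(c.link + rng.choice([-1, 1]), 1, L)),
        x=float(np.clip(c.x + rng.uniform(-0.05, 0.05), -0.5, 0.5)),
        y=float(np.clip(c.y + rng.uniform(-0.05, 0.05), -0.5, 0.5)),
        z_max=float(np.clip(c.z_max + rng.uniform(-0.05, 0.05), 0.001, 0.07)),
    )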
SayTap: Language to Quadrupedal Locomotion

Yujin Tang (Google DeepMind) yujintang@google.com
Wenhao Yu (Google DeepMind) magicmelon@google.com
Jie Tan (Google DeepMind) jietan@google.com
Heiga Zen (Google DeepMind) heigazen@google.com
Aleksandra Faust (Google DeepMind) sandrafaust@google.com
Tatsuya Harada (The University of Tokyo) harada@mi.t.u-tokyo.ac.jp

Abstract: Large language models (LLMs) have demonstrated the potential to perform high-level planning. Yet, it remains a challenge for LLMs to comprehend low-level commands, such as joint angle targets or motor torques. This paper proposes an approach that uses foot contact patterns as an interface bridging human commands in natural language and a locomotion controller that outputs these low-level commands. This results in an interactive system for quadrupedal robots that allows users to craft diverse locomotion behaviors flexibly. We contribute an LLM prompt design, a reward function, and a method to expose the controller to the feasible distribution of contact patterns. The result is a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware. Compared with other design choices, the proposed approach enjoys a success rate of more than 50% in predicting the correct contact patterns and can solve 10 more tasks out of a total of 30 tasks. (https://saytap.github.io)

Keywords: Large language model (LLM), Quadrupedal robots, Locomotion

1 Introduction

Simple and effective interaction between humans and quadrupedal robots paves the way towards creating intelligent and capable helper robots, forging a future where technology enhances our lives in ways beyond our imagination [1, 2, 3]. Key to such a human-robot interaction system is enabling quadrupedal robots to respond to natural language instructions, as language is one of the most important communication channels for human beings. Recent developments in large language models (LLMs) have engendered a spectrum of applications that were once considered unachievable, including virtual assistance [4], code generation [5], translation [6], and logical reasoning [7], fueled by the proficiency of LLMs to ingest an enormous amount of historical data, to adapt in-context to novel tasks with few examples, and to understand and interact with user intentions through a natural language interface.

The burgeoning success of LLMs has also kindled interest within the robotics research community, with an aim to develop interactive and capable systems for physical robots [8, 9, 10, 11, 12, 13]. Researchers have demonstrated the potential of using LLMs to perform high-level planning [8, 9] and robot code writing [11, 13]. Nevertheless, unlike text generation, where LLMs directly interpret the atomic elements (tokens), it often proves challenging for LLMs to comprehend low-level robotic commands such as joint angle targets or motor torques, especially for inherently unstable legged robots necessitating high-frequency control signals. Consequently, most existing work presumes the provision of high-level APIs for LLMs to dictate robot behaviour, inherently limiting the system's expressive capabilities.

We address this limitation by using foot contact patterns as an interface that bridges human instructions in natural language and low-level commands. The result is an interactive system for legged robots, particularly quadrupedal robots, that allows users to craft diverse locomotion behaviours flexibly.
Central to the proposed approach is the observation that the patterns with which feet establish and break contact with the ground often govern the final locomotion behavior of legged robots, due to the heavy reliance of quadruped locomotion on environmental contact. Thus, a contact pattern, describing the contact establishing and breaking timings for each leg, is a compact and flexible interface for authoring locomotion behaviors for legged robots. To leverage this new interface for controlling quadruped robots, we first develop an LLM-based approach to generate contact patterns, represented by '0's and '1's, from user instructions. Although LLMs are trained mostly on natural language data, we find that with proper prompting and in-context learning they can produce contact patterns that represent diverse quadruped motions. We then develop a Deep Reinforcement Learning (DRL) based approach to generate robot actions given a desired contact pattern. We demonstrate that by designing a reward structure that concerns only contact timing and exposing the policy to the right distribution of contact patterns, we can obtain a controller capable of achieving diverse locomotion patterns that can be transferred to real robot hardware.

[Figure 1 panels, left to right: the robot trotting in response to "Trot forward slowly" and bounding in response to "Good news, we are going to a picnic this weekend!"]

Figure 1: Illustration of the results on a physical quadrupedal robot. We show two test commands at the top, and snapshots of the robot in the top row of the figure. The middle row shows the desired contact patterns translated from the commands by an LLM (the pattern in between the commands requests the robot to put all feet on the ground and stand still), and the bottom row gives the realized patterns. The proposed approach allows the robot to take both simple and direct instructions (e.g., "Trot forward slowly") as well as vague human commands (e.g., "Good news, we are going to a picnic this weekend!") in natural language and react accordingly.

We evaluate the proposed approach on a physical quadruped robot, the Unitree A1 [14], where it successfully controls the robot to follow diverse and challenging instructions from users (Figure 1). We benchmark the proposed approach against two baselines: (1) using discrete gaits, and (2) using sinusoidal functions as the interface. Evaluations on 30 tasks demonstrate that the proposed approach can achieve a 50% higher success rate in predicting the correct contact pattern and can solve 10 more tasks than the baselines.

The key contributions of this paper are: i) a novel interface of contact patterns for harnessing knowledge from LLMs to flexibly and interactively control quadruped robots; ii) a pipeline to teach LLMs to generate complex contact patterns from user instructions; and iii) a DRL-based method to train a low-level controller that realizes diverse contact patterns on real quadruped robots. Finally, our proposal also holds intriguing potential for both human-robot interaction researchers and the robotic locomotion community, inviting a compelling cross-disciplinary dialogue and collaboration.

2 Related Work

2.1 Language to robot control

There is a rich literature on leveraging language to modulate the behavior of robots [15, 10, 8, 16, 17, 18, 19, 20]. Earlier work in this direction typically assumed structured text templates to translate language to robot commands [17, 19] or leveraged natural language processing (NLP) tools such as the parse tree to assist in extracting constraints from user input, followed by trajectory optimization to obtain robot motion [20].
Though these approaches demonstrate complex robotics tasks, they usually do not handle unstructured natural language input. To mitigate this issue, recent work leverages advancements in representation learning and deep learning to train language-conditioned policies that map unstructured natural language instructions to robot actions [18, 21, 22, 23]. To establish proper mappings between natural language embeddings and robot actions, these approaches usually require a significant amount of demonstration data with language labels for training the policy, which is challenging to collect for diverse legged locomotion behaviors.

Inspired by the recent success of LLMs on diverse tasks [5, 6, 7], researchers in robotics have also explored ideas to connect LLMs to robot commands [8, 9, 11, 12, 13, 24, 25]. For example, Ahn et al. [8] combined LLMs with a learned robot affordance function to pick the optimal pre-trained robot skills for completing long-horizon tasks. To mitigate the requirement for pre-training individual low-level skills, researchers have also proposed to expand the low-level primitive skills to the full expressiveness of code by tasking LLMs with writing robot code [11, 12, 13]. As LLMs cannot directly generate low-level robot motor commands such as joint targets, these approaches had to design an intermediate interface connecting LLMs and robot commands, such as high-level plans [8, 9, 24], primitive skills [11, 12], and trajectories [25]. In this work, we identify foot contact patterns as a natural and flexible intermediate interface for quadrupedal robot locomotion that does not require laborious design efforts.

2.2 Locomotion controller for legged robots

Training legged robots to exhibit complex contact patterns, especially gait patterns, has been extensively studied by researchers in robotics, control, and machine learning. A common method is to model the robot dynamics and perform receding-horizon trajectory optimization, i.e., Model Predictive Control (MPC), to follow desired contact patterns [26, 27, 28, 29, 30]. For quadruped robots, this has led to a large variety of canonical locomotion gaits such as trotting [26], pacing [31], bounding [32], and galloping [33], as well as nonconventional gaits specified by desired contact timings or patterns [28, 30]. Despite the impressive results in these works, applying MPC to generate diverse locomotion behavior often requires careful design of reference motions for the robot base and swing legs, and incurs high computational cost due to re-planning. Prior work has also explored using learning-based methods to author flexible locomotion gaits [34, 35, 36, 37, 38, 39, 40]. Some of these works combine learning and MPC-based methods to identify the optimal gait parameters for tasks [34, 35, 36]. Others directly train DRL policies for different locomotion gaits, either through careful reward function design [37, 40], open-loop commands extracted from prior knowledge about gaits [38, 39], or an encoding of a predefined family of locomotion strategies that solve training tasks in different ways [41]. This paper explores an alternative DRL-based method that relies on a simple but flexible reward structure. Compared to prior work, the proposed reward structure concerns only contact timing and is thus more flexible in generating diverse locomotion behaviors.

3 Method

The core ideas of our approach include introducing desired foot contact patterns as a new interface between human commands in natural language and the locomotion controller.
The locomotion controller is required not only to complete the main task (e.g., following specified velocities), but also to place the robot's feet on the ground at the right time, such that the realized foot contact patterns are as close as possible to the desired ones; Figure 2 gives an overview of the proposed system. To achieve this, the locomotion controller takes a desired foot contact pattern at each time step as its input, in addition to the robot's proprioceptive sensory data and task-related inputs (e.g., user-specified velocity commands). At training time, a random generator creates these desired foot contact patterns, while at test time an LLM translates them from human commands.

In this paper, a desired foot contact pattern is defined by a cyclic sliding window of size Lw that extracts the ground contact flags of the four feet between t + 1 and t + Lw from a pattern template; it is of shape 4 × Lw. A contact pattern template is a 4 × T matrix of '0's and '1's, with '0's representing feet in the air and '1's representing feet on the ground. From top to bottom, each row in the matrix gives the foot contact pattern of the front left (FL), front right (FR), rear left (RL), and rear right (RR) feet. We demonstrate that the LLM is capable of accurately mapping human commands into foot contact pattern templates in the specified format, given properly designed prompts, even in cases when the commands are unstructured and vague (Section 3.1).
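A minimal numpy sketch of this template-and-window scheme, using a hypothetical trot-like template (the concrete templates used in the paper are shown in Figures 2 and 3):

import numpy as np

# A 4 x T contact pattern template: rows are FL, FR, RL, RR;
# 1 = foot on the ground, 0 = foot in the air. Here the diagonal
# pairs alternate, giving a trot-like template with T = 20.
T, Lw = 20, 5
template = np.zeros((4, T), dtype=int)
template[[0, 3], :10] = 1   # FL and RR on the ground in the first half
template[[1, 2], 10:] = 1   # FR and RL on the ground in the second half

def desired_pattern(template: np.ndarray, t: int, Lw: int) -> np.ndarray:
    # Cyclic sliding window: contact flags for steps t+1 .. t+Lw (4 x Lw).
    cols = np.arange(t + 1, t + Lw + 1) % template.shape[1]
    return template[:, cols]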
sha1_base64="JE1xCnOOnF7uMaIDx9MKXCVrIi8=">AAACGXicbVDLSgMxFM34rPVV69JNsAgVtMwIPpaFblxWsA/olCGTZmwwmRmSO8Uy9BvcufFX3Cgo4lJXfoh7M20Xaj0k5HDOveTe48eCa7DtT2tufmFxaTm3kl9dW9/YLGwVmzpKFGUNGolItX2imeAhawAHwdqxYkT6grX861rmtwZMaR6FlzCMWVeSq5AHnBIwklew3T6BdDDybrCrucSuJNBXMq0RGJUPnQN8aFeOD7BtTvY6+16hZFfsMfAscaakVC265a+nW7fuFd7dXkQTyUKggmjdcewYuilRwKlgo7ybaBYTek2uWMfQkEimu+l4sxHeM0oPB5EyNwQ8Vn92pERqPZS+qcwG13+9TPzP6yQQnHVTHsYJsJBOPgoSgSHCWUy4xxWjIIaGEKq4mRXTPlGEggkzb0Jw/q48S5pHFeekcnzhlKpHaIIc2kG7qIwcdIqq6BzVUQNRdIce0DN6se6tR+vVepuUzlnTnm30C9bHN1uToEU=</latexit>ˆvx⇠Cat(1,0.5,0,0.5,1)<latexit sha1_base64="qh5ugx66tV+HNJcPs/S6O+4zoG0=">AAAB/3icbVDLSsNAFJ34rPUVFQRxEyyCq5IUfGyEggguK5i20IQwmU7aoTOTODOplpiFv+LGhSJu/Q13/Runj4W2HrhwOOde7r0nTCiRyraHxsLi0vLKamGtuL6xubVt7uzWZZwKhF0U01g0QygxJRy7iiiKm4nAkIUUN8Le1chv9LGQJOZ3apBgn8EOJxFBUGkpMPe9LlRZPw8eL20vEhBlLM9kHpglu2yPYc0TZ0pK1QP7+mHo3tcC89trxyhlmCtEoZQtx06Un0GhCKI4L3qpxAlEPdjBLU05ZFj62fj+3DrWStuKYqGLK2us/p7IIJNywELdyaDqyllvJP7ntVIVXfgZ4UmqMEeTRVFKLRVbozCsNhEYKTrQBCJB9K0W6kKdgtKRFXUIzuzL86ReKTtn5dNbp1StgAkK4BAcgRPggHNQBTegBlyAwBN4AW/g3Xg2Xo0P43PSumBMZ/bAHxhfP68NmXU=</latexit>ˆvx=0ms<latexit sha1_base64="qh5ugx66tV+HNJcPs/S6O+4zoG0=">AAAB/3icbVDLSsNAFJ34rPUVFQRxEyyCq5IUfGyEggguK5i20IQwmU7aoTOTODOplpiFv+LGhSJu/Q13/Runj4W2HrhwOOde7r0nTCiRyraHxsLi0vLKamGtuL6xubVt7uzWZZwKhF0U01g0QygxJRy7iiiKm4nAkIUUN8Le1chv9LGQJOZ3apBgn8EOJxFBUGkpMPe9LlRZPw8eL20vEhBlLM9kHpglu2yPYc0TZ0pK1QP7+mHo3tcC89trxyhlmCtEoZQtx06Un0GhCKI4L3qpxAlEPdjBLU05ZFj62fj+3DrWStuKYqGLK2us/p7IIJNywELdyaDqyllvJP7ntVIVXfgZ4UmqMEeTRVFKLRVbozCsNhEYKTrQBCJB9K0W6kKdgtKRFXUIzuzL86ReKTtn5dNbp1StgAkK4BAcgRPggHNQBTegBlyAwBN4AW/g3Xg2Xo0P43PSumBMZ/bAHxhfP68NmXU=</latexit>ˆvx=0msTraining flowTest flowSliding window<latexit sha1_base64="gBMhbgSg+kMClh7GWhjtrQwTr/U=">AAACCHicdVDNTgIxGOyiIuIf6tFLIzHxQDZdEIUbiRcPHiDKTwIb0i0FGro/absasuEFTDx4wUfw6s149egbGF/CR7AsmqjRSZpOvpl22nECzqRC6NVILCwuJZdTK+nVtfWNzczWdkP6oSC0Tnzui5aDJeXMo3XFFKetQFDsOpw2ndHJTG9eUiGZ712ocUBtFw881mcEKz06P+tedTNZZJbL6NAqQ2QWEcqXCpqgQr5ULELLRDGylWTt7WV6e1/tZt47PZ+ELvUU4VjKtoUCZUdYKEY4naQ7oaQBJiM8oG1NPexSaUfxUyedeGuLgWNHcWoujszN8goTuK/9Pdj3hV6egrH5+30RdqUcu452ulgN5W9tNvxLa4eqX7Ij5gWhoh6ZB/VDDpUPZ63AHhOUKD7WBBPB9E8gGWKBidLdpXVFXz3A/0kjb1pHZrFmZSt5MEcK7II9cAAscAwq4BRUQR0QMAA3YArujGvjwXg0nubWhPF5Zgf8gPH8AaKYm10=</latexit>Lw<latexit sha1_base64="gBMhbgSg+kMClh7GWhjtrQwTr/U=">AAACCHicdVDNTgIxGOyiIuIf6tFLIzHxQDZdEIUbiRcPHiDKTwIb0i0FGro/absasuEFTDx4wUfw6s149egbGF/CR7AsmqjRSZpOvpl22nECzqRC6NVILCwuJZdTK+nVtfWNzczWdkP6oSC0Tnzui5aDJeXMo3XFFKetQFDsOpw2ndHJTG9eUiGZ712ocUBtFw881mcEKz06P+tedTNZZJbL6NAqQ2QWEcqXCpqgQr5ULELLRDGylWTt7WV6e1/tZt47PZ+ELvUU4VjKtoUCZUdYKEY4naQ7oaQBJiM8oG1NPexSaUfxUyedeGuLgWNHcWoujszN8goTuK/9Pdj3hV6egrH5+30RdqUcu452ulgN5W9tNvxLa4eqX7Ij5gWhoh6ZB/VDDpUPZ63AHhOUKD7WBBPB9E8gGWKBidLdpXVFXz3A/0kjb1pHZrFmZSt5MEcK7II9cAAscAwq4BRUQR0QMAA3YArujGvjwXg0nubWhPF5Zgf8gPH8AaKYm10=</latexit>LwExample Contact Pattern Templates<latexit sha1_base64="0hIgnQyKqZ0s/tq2p1Ehtx7qYmQ=">AAAB6HicbZDJSgNBEIZr4hbHLerRS2MQPIWZgMtFDHjxmEA2SIbQ06kkbXoWunuEMOQJvHhQxKs+hO/hRXwbO8tBE39o+Pj/Krqq/FhwpR3n28qsrK6tb2Q37a3tnd293P5BXUWJZFhjkYhk06cKBQ+xprkW2Iwl0sAX2PCHN5O8cY9S8Sis6lGMXkD7Ie9xRrWxKtVOLu8UnKnIMrhzyF9/2Ffx+5dd7uQ+292IJQGGmgmqVMt1Yu2lVGrOBI7tdqIwpmxI+9gyGNIAlZdOBx2TE+N0SS+S5oWaTN3fHSkNlBoFvqkMqB6oxWxi/pe1Et279FIexonGkM0+6iWC6IhMtiZdLpFpMTJAmeRmVsIGVFKmzW1scwR3ceVlqBcL7nnhrOLmS0WYKQtHcAyn4MIFlOAWylADBggP8ATP1p31aL1Yr7PSjDXvOYQ/st5+APhAkAA=</latexit>T<latexit 
sha1_base64="0hIgnQyKqZ0s/tq2p1Ehtx7qYmQ=">AAAB6HicbZDJSgNBEIZr4hbHLerRS2MQPIWZgMtFDHjxmEA2SIbQ06kkbXoWunuEMOQJvHhQxKs+hO/hRXwbO8tBE39o+Pj/Krqq/FhwpR3n28qsrK6tb2Q37a3tnd293P5BXUWJZFhjkYhk06cKBQ+xprkW2Iwl0sAX2PCHN5O8cY9S8Sis6lGMXkD7Ie9xRrWxKtVOLu8UnKnIMrhzyF9/2Ffx+5dd7uQ+292IJQGGmgmqVMt1Yu2lVGrOBI7tdqIwpmxI+9gyGNIAlZdOBx2TE+N0SS+S5oWaTN3fHSkNlBoFvqkMqB6oxWxi/pe1Et279FIexonGkM0+6iWC6IhMtiZdLpFpMTJAmeRmVsIGVFKmzW1scwR3ceVlqBcL7nnhrOLmS0WYKQtHcAyn4MIFlOAWylADBggP8ATP1p31aL1Yr7PSjDXvOYQ/st5+APhAkAA=</latexit>TFL: 1111100000000000000FR: 1111100000000000000RL: 0000111110000000000RR: 0000111110000000000<latexit sha1_base64="FVeVr0+qHOrajdGmUyGFeeQwl+g=">AAAB+XicbVDLSsNAFJ3UV42vqEs3wSK4KknBx6ZYVNCVVjBtoQ1lMp20Q2cmYWZSKKF/4kZEEbcu/A834t84abvQ1gMDh3Pu5Z45QUyJVI7zbeQWFpeWV/Kr5tr6xuaWtb1Tk1EiEPZQRCPRCKDElHDsKaIobsQCQxZQXA/6F5lfH2AhScTv1TDGPoNdTkKCoNJS27Kuyi0GVU+w9PzWu7kcta2CU3TGsOeJOyWFsw+zHD99mdW29dnqRChhmCtEoZRN14mVn0KhCKJ4ZLYSiWOI+rCLm5pyyLD003HykX2glY4dRkI/ruyx+nsjhUzKIQv0ZJZSznqZ+J/XTFR46qeEx4nCHE0OhQm1VWRnNdgdIjBSdKgJRILorDbqQQGR0mWZugR39svzpFYqusfFozu3UCmBCfJgD+yDQ+CCE1AB16AKPIDAADyAZ/BipMaj8Wq8TUZzxnRnF/yB8f4DI/6WTA==</latexit>G= BOUND<latexit sha1_base64="FVeVr0+qHOrajdGmUyGFeeQwl+g=">AAAB+XicbVDLSsNAFJ3UV42vqEs3wSK4KknBx6ZYVNCVVjBtoQ1lMp20Q2cmYWZSKKF/4kZEEbcu/A834t84abvQ1gMDh3Pu5Z45QUyJVI7zbeQWFpeWV/Kr5tr6xuaWtb1Tk1EiEPZQRCPRCKDElHDsKaIobsQCQxZQXA/6F5lfH2AhScTv1TDGPoNdTkKCoNJS27Kuyi0GVU+w9PzWu7kcta2CU3TGsOeJOyWFsw+zHD99mdW29dnqRChhmCtEoZRN14mVn0KhCKJ4ZLYSiWOI+rCLm5pyyLD003HykX2glY4dRkI/ruyx+nsjhUzKIQv0ZJZSznqZ+J/XTFR46qeEx4nCHE0OhQm1VWRnNdgdIjBSdKgJRILorDbqQQGR0mWZugR39svzpFYqusfFozu3UCmBCfJgD+yDQ+CCE1AB16AKPIDAADyAZ/BipMaj8Wq8TUZzxnRnF/yB8f4DI/6WTA==</latexit>G= BOUNDFL: 111111111111111000000FR: 000000111111111111111RL: 000000111111111111111RR: 111111111111111000000<latexit sha1_base64="bbDd2SZbcDuPQ0TMGIeJA8iPeF8=">AAAB+HicbVDLSgMxFM3UVx0fHXXpJlgEV2Wm4GNTLLjQnVX6gnYomTRtQ5PMkGSEOvRL3Agq4taN/+FG/BszbRfaeiBwOOde7skJIkaVdt1vK7O0vLK6ll23Nza3tnPOzm5dhbHEpIZDFspmgBRhVJCappqRZiQJ4gEjjWB4kfqNOyIVDUVVjyLic9QXtEcx0kbqOLnLUpsjPZA8qd5eV8cdJ+8W3AngIvFmJH/+YZeipy+70nE+290Qx5wIjRlSquW5kfYTJDXFjIztdqxIhPAQ9UnLUIE4UX4yCT6Gh0bpwl4ozRMaTtTfGwniSo14YCbTkGreS8X/vFase2d+QkUUayLw9FAvZlCHMG0BdqkkWLORIQhLarJCPEASYW26sk0J3vyXF0m9WPBOCsc3Xr5cBFNkwT44AEfAA6egDK5ABdQABjF4AM/gxbq3Hq1X6206mrFmO3vgD6z3H7KDlhM=</latexit>G=T R O T<latexit sha1_base64="bbDd2SZbcDuPQ0TMGIeJA8iPeF8=">AAAB+HicbVDLSgMxFM3UVx0fHXXpJlgEV2Wm4GNTLLjQnVX6gnYomTRtQ5PMkGSEOvRL3Agq4taN/+FG/BszbRfaeiBwOOde7skJIkaVdt1vK7O0vLK6ll23Nza3tnPOzm5dhbHEpIZDFspmgBRhVJCappqRZiQJ4gEjjWB4kfqNOyIVDUVVjyLic9QXtEcx0kbqOLnLUpsjPZA8qd5eV8cdJ+8W3AngIvFmJH/+YZeipy+70nE+290Qx5wIjRlSquW5kfYTJDXFjIztdqxIhPAQ9UnLUIE4UX4yCT6Gh0bpwl4ozRMaTtTfGwniSo14YCbTkGreS8X/vFase2d+QkUUayLw9FAvZlCHMG0BdqkkWLORIQhLarJCPEASYW26sk0J3vyXF0m9WPBOCsc3Xr5cBFNkwT44AEfAA6egDK5ABdQABjF4AM/gxbq3Hq1X6206mrFmO3vgD6z3H7KDlhM=</latexit>G=T R O TFL: 1111111110000FR: 0000111111111RL: 1111111110000RR: 0000111111111<latexit sha1_base64="yBUBmpE/4gXyjhNr4UL0MEDLtSY=">AAAB+HicbVDLSgMxFM3UVx0fHXXpJlgEV2Wm4GNTrBTRZQX7gHYomTRtQ5PMkGSEOvRL3Agq4taN/+FG/BszbRfaeiBwOOde7skJIkaVdt1vK7O0vLK6ll23Nza3tnPOzm5dhbHEpIZDFspmgBRhVJCappqRZiQJ4gEjjWBYSf3GHZGKhuJWjyLic9QXtEcx0kbqOLmrUpsjPZA8qV5ULscdJ+8W3AngIvFmJH/+YZeipy+72nE+290Qx5wIjRlSquW5kfYTJDXFjIztdqxIhPAQ9UnLUIE4UX4yCT6Gh0bpwl4ozRMaTtTfGwniSo14YCbTkGreS8X/vFase2d+QkUUayLw9FAvZlCHMG0BdqkkWLORIQhLarJCPEASYW26sk0J3vyXF0m9WPBOCsc3Xr5cBFNkwT44AEfAA6egDK5BFdQABjF4AM/gxbq3Hq1X6206mrFmO3vgD6z3H2lZleM=</latexit>G=P A C E<latexit 
sha1_base64="yBUBmpE/4gXyjhNr4UL0MEDLtSY=">AAAB+HicbVDLSgMxFM3UVx0fHXXpJlgEV2Wm4GNTrBTRZQX7gHYomTRtQ5PMkGSEOvRL3Agq4taN/+FG/BszbRfaeiBwOOde7skJIkaVdt1vK7O0vLK6ll23Nza3tnPOzm5dhbHEpIZDFspmgBRhVJCappqRZiQJ4gEjjWBYSf3GHZGKhuJWjyLic9QXtEcx0kbqOLmrUpsjPZA8qV5ULscdJ+8W3AngIvFmJH/+YZeipy+72nE+290Qx5wIjRlSquW5kfYTJDXFjIztdqxIhPAQ9UnLUIE4UX4yCT6Gh0bpwl4ozRMaTtTfGwniSo14YCbTkGreS8X/vFase2d+QkUUayLw9FAvZlCHMG0BdqkkWLORIQhLarJCPEASYW26sk0J3vyXF0m9WPBOCsc3Xr5cBFNkwT44AEfAA6egDK5BFdQABjF4AM/gxbq3Hq1X6206mrFmO3vgD6z3H2lZleM=</latexit>G=P A C E<latexit sha1_base64="hC6JKuoMiz+aMY+7MhzUtWCdfmI=">AAAB6HicbZDJSgNBEIZ74hbHLerRS2MQPIWZgMtFDHjxmIBZIBlCT6cmadOz0F0jhCFP4MWDIl71IXwPL+Lb2FkOGv2h4eP/q+iq8hMpNDrOl5VbWl5ZXcuv2xubW9s7hd29ho5TxaHOYxmrls80SBFBHQVKaCUKWOhLaPrDq0nevAOlRRzd4CgBL2T9SASCMzRWDbuFolNypqJ/wZ1D8fLdvkjePu1qt/DR6cU8DSFCLpnWbddJ0MuYQsEljO1OqiFhfMj60DYYsRC0l00HHdMj4/RoECvzIqRT92dHxkKtR6FvKkOGA72YTcz/snaKwbmXiShJESI++yhIJcWYTramPaGAoxwZYFwJMyvlA6YYR3Mb2xzBXVz5LzTKJfe0dFJzi5UymSlPDsghOSYuOSMVck2qpE44AXJPHsmTdWs9WM/Wy6w0Z8179skvWa/fKM+QIA==</latexit>t<latexit sha1_base64="hC6JKuoMiz+aMY+7MhzUtWCdfmI=">AAAB6HicbZDJSgNBEIZ74hbHLerRS2MQPIWZgMtFDHjxmIBZIBlCT6cmadOz0F0jhCFP4MWDIl71IXwPL+Lb2FkOGv2h4eP/q+iq8hMpNDrOl5VbWl5ZXcuv2xubW9s7hd29ho5TxaHOYxmrls80SBFBHQVKaCUKWOhLaPrDq0nevAOlRRzd4CgBL2T9SASCMzRWDbuFolNypqJ/wZ1D8fLdvkjePu1qt/DR6cU8DSFCLpnWbddJ0MuYQsEljO1OqiFhfMj60DYYsRC0l00HHdMj4/RoECvzIqRT92dHxkKtR6FvKkOGA72YTcz/snaKwbmXiShJESI++yhIJcWYTramPaGAoxwZYFwJMyvlA6YYR3Mb2xzBXVz5LzTKJfe0dFJzi5UymSlPDsghOSYuOSMVck2qpE44AXJPHsmTdWs9WM/Wy6w0Z8179skvWa/fKM+QIA==</latexit>tFigure2: Overviewoftheproposedapproach. Inadditiontotherobot’sproprioceptivesensorydataand task commands (e.g., following a desired linear velocity ˆEG), the locomotion controller acceptsdesired foot contact patterns as input, and outputs desired joint positions. The foot contact patternsare extracted by a cyclic sliding window of size !Ffrom a pattern template, which is generatedby a random pattern generator during training, and is translated from human commands in naturallanguage by an LLM in tests. We show some examples of contact pattern templates at the bottom.commandsareunstructuredandvague(Section3.1). Intraining,weusearandompatterngeneratortoproducecontactpatterntemplatesthatareofvariouspatternlengths ),foot-groundcontactratioswithinacyclebasedonagivengaittype (Section3.2.2),sothatthelocomotioncontrollergetstolearn on a wide distribution of movements and generalizes better.<General instruction block>You are a dog foot contact pattern expert.Your job is to give a velocity and a foot contact pattern based on the input.You will always give the output in the correct format no matter what the input is.<Gait definition block>The following are description about gaits:1. Trotting is a gait where two diagonally opposite legs strike the ground at the same time.2. Pacing is a gait where the two legs on the left/right side of the body strike the ground at the same time.3. Bounding is a gait where the two front/rear legs strike the ground at the same time. It has a longer suspension phase where all feet are off the ground, for example, for at least 25% of the cycle length. This gait also gives a happy feeling.<Output format definition block>The following are rules for describing the velocity and foot contact patterns:1. You should first output the velocity, then the foot contact pattern.2. There are five velocities to choose from: [-1.0, -0.5, 0.0, 0.5, 1.0].3. A pattern has 4 lines, each of which represents the foot contact pattern of a leg.4. 
4. Each line has a label. "FL" is front left leg, "FR" is front right leg, "RL" is rear left leg, and "RR" is rear right leg.
5. In each line, "0" represents foot in the air, "1" represents foot on the ground.

<Examples block>
Input: Trot slowly
Output: 0.5
FL: 11111111111111111000000000
FR: 00000000011111111111111111
RL: 00000000011111111111111111
RR: 11111111111111111000000000

Input: Bound in place
Output: 0.0
FL: 11111111111100000000000000
FR: 11111111111100000000000000
RL: 00000011111111111100000000
RR: 00000011111111111100000000

Input: Pace backward fast
Output: -1.0
FL: 11111111100001111111110000
FR: 00001111111110000111111111
RL: 11111111100001111111110000
RR: 00001111111110000111111111

Input:

Figure 3: Our exact prompt for our method in all experiments. The final "Input:" is followed by the user-specified command. Texts in black are for explanation and are not used as input to the LLM.

3.1 Language to Foot Contact Patterns

Although LLMs can learn knowledge from a vast amount of text data at training, providing proper prompts at inference is the key to unlock and direct the acquired knowledge in meaningful ways. Carefully designed prompts serve as the starting point for the models to generate text and guide the direction and context of the outputs. The proposed approach aims to enable the LLM to map any human commands in natural language to foot contact patterns in a specified format. Figure 3 lists the prompts used in this paper, wherein we group them into four categories:

1. General instruction describes the task the LLM should accomplish. In this paper, the LLM is expected to translate an arbitrary command to a foot contact pattern. Note that examples of such translations will be provided in the Examples block.
2. Gait definition gives basic knowledge of quadrupedal gaits. Although their descriptions are neither exhaustive nor sufficiently accurate, experimental results suggest that it provides enough information for the LLM to follow the rules. It also connects the bounding gait to a general impression of emotion. This helps the LLM generalize over vague human commands that do not explicitly specify what gaits the robot should use.
3. Output format definition specifies the format of the output. We discretize the desired velocities $\hat{v}_x \in \{-1.0, -0.5, 0.0, 0.5, 1.0\}$ m/s so that the LLM can give proper outputs corresponding to commands that contain words like "fast(er)" and "slow(er)".
4. Examples block follows the general knowledge of instruction fine-tuning and shows the LLM a few concrete input-output pairs. Although we give the LLM three commonly seen gait examples only, experimental results show that it is able to generalize and handle various commands, including those that vaguely state what velocity or gait the robot should use.

3.2 Foot Contact Pattern to Low-level Commands

3.2.1 Problem Formulation

We formulate locomotion control as a Markov Decision Process (MDP) and solve it using DRL algorithms. An MDP is a tuple $(S, A, r, f, P_0, \gamma)$, where $S$ is the state space, $A$ is the action space, $r(s_t, a_t, s_{t+1})$ is the reward function, $f(s_t, a_t)$ is the system transition function, $P_0$ is the distribution of initial states $s_0$, and $\gamma \in [0, 1]$ is the reward discount factor. The goal of a DRL algorithm is to optimize a policy $\pi: S \mapsto A$ so that the expected accumulated reward $J = \mathbb{E}_{s_0 \sim P_0}[\sum_t \gamma^t r(s_t, a_t, s_{t+1})]$ is maximized. Here, $a_t = \pi_\theta(s_t)$ and $\theta$ is the set of learnable parameters. In locomotion tasks, $s_t$ often includes sensory data and goal conditions (e.g., user specified velocity commands [42]), and $a_t$ is desired joint angles or motor torques. We expand $s_t$ to include a desired foot contact pattern, and the controller needs to achieve the main task as well as realize the desired foot contact patterns.
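To make this input concrete, below is a minimal sketch of how an LLM reply in the Figure 3 format could be parsed into a pattern template, and how the desired-contact portion of $s_t$ could be sliced from it with a cyclic window of size $L_w$. The helper names are hypothetical; this is our illustration under the stated format, not the authors' implementation.

    import numpy as np

    def parse_llm_reply(reply: str):
        """Parse a reply of the form '0.5\nFL: 111...\nFR: ...\nRL: ...\nRR: ...'
        into a velocity and a (4, T) binary pattern template (hypothetical helper)."""
        lines = [ln.strip() for ln in reply.strip().splitlines() if ln.strip()]
        velocity = float(lines[0])
        legs = {}
        for ln in lines[1:5]:
            label, bits = ln.split(":")
            legs[label.strip()] = [int(b) for b in bits.strip()]
        template = np.array([legs[k] for k in ("FL", "FR", "RL", "RR")])
        return velocity, template

    def contact_window(template: np.ndarray, t: int, L_w: int = 5):
        """Cyclic sliding window of size L_w starting at control step t."""
        T = template.shape[1]
        cols = [(t + i) % T for i in range(L_w)]
        return template[:, cols]  # shape (4, L_w), flattened into s_t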
3.2.2 Random Pattern Generator

The random pattern generator receives a gait type $G$; it then randomly samples a corresponding cycle length $T$ and the ground contact ratio within the cycle for each foot, conducts proper scaling and phase shifts, and finally outputs a pattern template. Due to space restrictions, we defer the detailed implementation and illustrations to the Appendix. While humans can give commands that map to a much wider set of foot contact pattern templates, we define and train on five types: $G \in \{$BOUND, TROT, PACE, STAND_STILL, STAND_3LEGS$\}$. Examples of the first three types are illustrated at the bottom of Figure 2; the latter two types are trivial and omitted in the figure.

3.2.3 Locomotion Controller

We use a feed-forward neural network as the control policy $\pi_\theta$. It outputs the desired positions for each motor joint, and its input includes the base's angular velocities, the gravity vector $\vec{g} = [0, 0, -1]$ in the base's frame, the user specified velocity, current joint positions and velocities, the policy output from the last time step, and desired foot contact patterns. In this paper, we use Unitree A1 [14] as the quadrupedal robot. A1 has 3 joints per leg (i.e., hip, thigh and calf joints) and $L_w = 5$ in all experiments, therefore the dimensions of the policy's input and output are 65 and 12, respectively. The policy has three hidden layers of sizes $[512, 256, 128]$ with ELU ($\alpha = 1.0$) at each hidden layer as the non-linear activation function.

To encourage natural and symmetric behaviors, we employ a double-pass trick in the control policy which has been shown to be effective in other scenarios too [43, 44]. Specifically, instead of using $a_t = \pi_\theta(s_t)$ directly as the output, we use $a_t = 0.5 \, [\pi_\theta(s_t) + f_\text{act}(\pi_\theta(f_\text{obs}(s_t)))]$, where $f_\text{act}(\cdot)$ and $f_\text{obs}(\cdot)$ flip left-right the policy's output and the robot's state, respectively. Intuitively, this double-pass trick says the control policy should output consistently when it receives the original and the left-right mirrored states. In practice, we find this trick greatly improves the naturalness of the robot's movement and helps shrink the sim-to-real gap.
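A minimal sketch of the double-pass trick follows, assuming the per-leg block layout of the 12-dimensional action (FL, FR, RL, RR, three joints each) and the sign flip on the hip abduction joint; both are our assumptions for illustration, not details stated by the paper.

    import numpy as np

    LEG_PERM = [1, 0, 3, 2]  # FL<->FR, RL<->RR (assumed leg ordering)

    def f_act(a: np.ndarray) -> np.ndarray:
        """Left-right mirror of a 12-dim action: swap leg blocks and flip
        the hip abduction sign (sign convention is an assumption)."""
        blocks = a.reshape(4, 3)[LEG_PERM]
        blocks = blocks * np.array([-1.0, 1.0, 1.0])
        return blocks.reshape(-1)

    def double_pass_action(policy, s, f_obs, f_act=f_act):
        """a_t = 0.5 * [pi(s) + f_act(pi(f_obs(s)))]."""
        return 0.5 * (policy(s) + f_act(policy(f_obs(s))))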
3.2.4 Task and Training Setups

The controller's main task is to follow user specified linear velocities along the robot's heading direction, while keeping the linear velocity along the lateral direction and the yaw angular velocity as close to zero as possible. At the same time, it also needs to plan the correct timing for feet-ground strikes so that the realized foot contact patterns match the desired ones. For real world deployment, we add a regularization term that penalizes the action changing rate so that the real robot's movement is smoother. In addition to applying domain randomization, we find that extra reward terms that keep the robot base stable can greatly shrink the sim-to-real gap and produce natural looking gaits. Finally, although no heavy engineering is required to train the locomotion policy with extra contact pattern inputs, we find it helps to balance the ratio of the gait types during training. Please refer to the Appendix for hyper-parameters and detailed settings.

4 Experiments

We conducted experiments to answer three questions. Throughout the experiments, we used GPT-4 [45] as the LLM. Please see the Appendix for experimental setups.

4.1 Is Foot Contact Pattern a Good Interface?

The first experiment compares foot contact pattern with other possible interface designs. One option is to introduce intermediate parameters as the interface, and have the LLM map from human natural language to the parameter values. We use two baseline approaches for comparison: Baseline 1 contains a discrete parameter $G$, which is one of the 5 gait types introduced in Section 3.2.2; Baseline 2 contains 4 tuples of continuous parameters $(a_i, b_i, c_i)$, $i \in \{1, 2, 3, 4\}$, each defining a sinusoidal function $y_i(t) = \sin(a_i t + b_i)$ and a cutoff threshold that determines the foot-ground contact flag for the $i$-th foot: FOOT_ON_GROUND $= \mathbb{1}\{y_i(t) < c_i\}$. Here, $t \in [1, T]$ is the time step within the cycle. We construct foot contact pattern templates based on the values output by the LLM (e.g., for Baseline 1, we use the random pattern generator; for Baseline 2, we use the sinusoidal functions and the cutoff values) and check if they are correct.

Figure 4 shows the prompts for the two baselines, which are based on the prompt in Figure 3 with necessary modifications. Table 1 gives the commands we use in this experiment; commands 1–20 are basic instructions that express explicitly what the robot should do, whereas commands 21–25 test whether the interface design allows generalization and pattern composition. We set GPT-4's temperature to 0.5 to sample diverse responses, and for each approach we submit each command five times. For each submission, we use the top-1 result only for comparisons.

We implement domain knowledge based checker programs for each command for objective evaluations (see Appendix D), and we summarize the results in Figure 5. Aggregating over all the commands and test trials, the proposed approach gets significantly (about 50%) higher accuracy than the baselines (see the left-most plot in the first row). Despite having only three conventional examples in the context, the LLM almost always maps the human commands correctly to the expected foot contact patterns. The only exception among the test commands is command 21, where the LLM is correct in only one out of the five tests. It mostly fails to generate columns of 0s in the pattern template, but in one interesting case, it appends an extra row of "S: 00 0" to the pattern template, trying to convince us of the existence of the required suspension phase. Baseline 1 gets the second highest accuracy; it achieves high scores on the basic instructions but fails completely for commands 21–25. Considering that this is how we sample patterns and train the controller, these results are somewhat expected. It fails to generate the correct patterns for commands 2–5 because the random pattern generator selects a random foot to lift for $G$ = STAND_3LEGS. Although we could have relaxed the design of Baseline 1 so that it accepted extra parameters for $G$, we did not have to do so for the proposed approach and it still worked out. Moreover, this design modification would have very limited effect, which highlights the restrictions imposed by these high-level APIs. Unlike Baseline 1, Baseline 2 should have sufficient freedom in the parameter space to handle all the commands (maybe not command 25), yet its overall accuracy is the worst. Although we performed prompt engineering and constructed the examples carefully in its context for Baseline 2, the LLM has difficulty understanding the relationship between gaits and the underlying mathematical reasoning. This limitation again highlights the motivation and demonstrates the importance of the proposed approach. The experimental results indicate that foot contact pattern is a good interface, as it is both straightforward and able to provide more flexibility in the human command space.
Baseline 1

<General instruction block>
You are a dog foot contact pattern expert.
Your job is to give a velocity and the gait type for constructing a foot contact pattern based on the input.
You will always give the output in the correct format no matter what the input is.

<Output format definition block>
The following are rules for describing the velocity and parameters:
1. You should output the (velocity, gait type) pair.
2. There are five velocities to choose from: [-1.0, -0.5, 0.0, 0.5, 1.0].
3. There are five gait types to choose from: [STAND_STILL, STAND_3LEGS, BOUND, TROT, PACE].

<Examples block>
Input: Trot slowly
Output: (0.5, TROT)

Input: Bound in place
Output: (0.0, BOUND)

Input: Pace backward fast
Output: (-1.0, PACE)

Input:

Baseline 2

<General instruction block>
You are a dog foot contact pattern expert.
Your job is to give a velocity and the parameters for constructing a foot contact pattern based on the input.
You will always give the output in the correct format no matter what the input is.

<Output format definition block>
The following are rules for describing the velocity and parameters:
1. You should first output the velocity, then the parameters.
2. There are five velocities to choose from: [-1.0, -0.5, 0.0, 0.5, 1.0].
3. You give the parameters in 4 lines, each of which describes the parameters (a, b, c) for a leg.
4. Each line has a label. "FL" is front left leg, "FR" is front right leg, "RL" is rear left leg, and "RR" is rear right leg.
5. There are 3 numbers on each line, they form a python function `lambda t: sin(a * t + b) < c`. It represents foot on the ground at t where the function returns True, and foot in air at t where the function returns False. t is the time step in the gait cycle.

<Examples block>
Input: Trot slowly
Output: 0.5
FL: 0.13 10 -0.36
FR: 0.13 8.7 -0.36
RL: 0.13 8.7 -0.36
RR: 0.13 10 -0.36

Input: Bound in place
Output: 0.0
FL: 0.2 10 -0.36
FR: 0.2 10 -0.36
RL: 0.2 8.6 -0.36
RR: 0.2 8.6 -0.36

Input: Pace backward fast
Output: -1.0
FL: 0.5 8.9 0.58
FR: 0.5 7 0.58
RL: 0.5 8.9 0.58
RR: 0.5 7 0.58

Input:

Figure 4: Baseline prompts. Differences from our prompt are highlighted in blue. The "Gait definition block" is not changed and omitted in the figure. Texts in black are for explanation and thus are not used as input to the LLM.

Table 1: Commands for generated pattern template evaluation. We observed the foot contact patterns generated by the LLM after it accepted the commands, and compared them against our checkers.

Id     Command
1      Stand still
2–5    Lift front left / front right / rear left / rear right leg
6–8    Bound / Trot / Pace in place
9–11   Bound / Trot / Pace forward slowly
12–14  Bound / Trot / Pace forward fast
15–17  Bound / Trot / Pace backward slowly
18–20  Bound / Trot / Pace backward fast
21     Trot in place, with a suspension phase where all feet are off the ground
22     Trot forward, with the front right leg moving at a higher frequency
23     Stand on front right and rear left legs
24     Walk with 3 legs, with the rear right foot always in the air
25     Bound then pace, you can extend the pattern length if necessary

Figure 5: Accuracy comparison of generated patterns. For each command in Table 1, we generate 5 patterns from the LLM and compare them against the expected results. We show the aggregated accuracy over all commands on the left of the first row, followed by the individual results.

[Figure 6 panels: actual $v_x$ (first row), desired foot contact pattern (middle row), and realized foot contact pattern (last row), each over FL/FR/RL/RR rows and 0–4 s, for three trials: "Bound forward slowly" to "Raise your front right leg", "Bound forward slowly" to "Trot backward fast", and "Trot forward slowly" to "Pace forward fast".]

Figure 6: Velocity tracking and foot contact pattern realization in simulation. We show the actual linear velocity along the robot's heading direction (first row), the desired foot contact pattern (middle row) and the realized foot contact pattern (last row) from three test trials. The commands given to the robot in each trial are shown at the top of the plots.
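Given the Baseline 2 format in Figure 4, a small sketch of how such a reply could be turned into a pattern template is shown below. It is a hypothetical illustration of the construction described above, not the authors' evaluation code; the template length T = 26 is borrowed from the examples in Figure 3 and is otherwise an assumption.

    import numpy as np

    def baseline2_template(params, T=26):
        """Build a (4, T) contact template from per-leg (a, b, c) triples,
        using FOOT_ON_GROUND = 1{sin(a * t + b) < c} for t = 1..T."""
        t = np.arange(1, T + 1)
        rows = [(np.sin(a * t + b) < c).astype(int) for (a, b, c) in params]
        return np.stack(rows)  # rows ordered FL, FR, RL, RR

    # Example with the "Trot slowly" parameters from Figure 4:
    params = [(0.13, 10, -0.36), (0.13, 8.7, -0.36),
              (0.13, 8.7, -0.36), (0.13, 10, -0.36)]
    template = baseline2_template(params)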
4.2 Can we learn to accomplish the main task and realize the contact pattern?

Following [42], we train the locomotion controller with Proximal policy optimization (PPO) [46] in the Isaac Gym simulator [47]. The controller's main task is to track a user specified linear velocity along the robot's heading direction $v_x$, and at the same time, to place the feet correctly to produce the desired foot contact patterns. Figure 6 shows the results in simulation. The commands given to the robot in each trial are shown at the top of the plots. It can be seen from the figure that the controller learns to track the specified velocity (e.g., "slow"/"fast" corresponds to 0.5 m/s / 1.0 m/s in absolute values) and manages to place the robot's feet correctly to produce foot contact patterns that are close to the desired ones. Furthermore, we successfully transfer the learned controller and deploy it on the physical robot without any fine-tuning. Figure 1 gives some analytical results on the physical robot. Please watch the accompanying video for the motions.

4.3 Does the proposed approach work with unstructured/vague instructions?

The proposed approach enables the quadrupedal robot to follow both direct, precise commands and unstructured, vague instructions in natural language, which facilitates human-robot interaction. To demonstrate this, we sent the commands in Table 2 to the robot and observed its reactions. Note that unlike in the previous tests, none of the human expressions here stated explicitly what the robot should have done or what gait it should have used. Based on subjective evaluation, the observed motions were capable of expressing the desired emotion (e.g., jumping up and down when excited) and presenting the scene accurately (e.g., struggling to move when told that it had a limping leg); the reactions were mostly consistent with the expectations. This will unlock many robot applications, ranging from scene acting and human companionship to more creative tasks in industries and homes.

Table 2: Extended tests. The commands in this test do not tell the robot explicitly what it should do.

Command                                      Observed Robot Motion
Good news, we are going to a picnic!         Jumping up and down
Back off, don't hurt that squirrel!          Moving backward slowly in trotting gaits
Act as if the ground is very hot             Pacing fast, with its feet barely touching the ground
Act as if you have a limping rear left leg   Struggling to walk, with its RL leg hardly moving
Go catch that squirrel on the tree           Bounding fast forward toward the prey

5 Conclusions

This paper devised an interactive system for quadrupedal robots that allows users to craft diverse locomotion behaviors flexibly. The core idea of the proposed approach is introducing desired foot contact patterns as a new interface between natural language and the low-level controller. During training, these contact patterns are generated by a random generator, and a DRL based method is capable of accomplishing the main task and realizing the desired patterns at the same time. In tests, the contact patterns are translated from human commands in natural language.
We show that having contact patterns as the interface is more straightforward and flexible than other design choices. Moreover, the robot is able to follow both direct instructions and commands that do not explicitly state how the robot should react, in both simulation and on physical robots.

Limitations and Future Work

One limitation of the proposed approach is that domain knowledge and trial-and-error tests are necessary to design the random pattern generator, such that the patterns used for training are feasible. Furthermore, while increasing the variety of the random patterns would essentially increase the locomotion capability of the robot, training on a large set of gaits is hard since it involves the trade-off of sample balancing and data efficiency. One may train a set of expert policies separately, each of which specializes in one motion, then use imitation learning to distill the experts to address this problem. Another interesting direction for future work is to modify the current pattern representation and make it more versatile (e.g., replacing 0s and 1s with 0s and $h$'s to specify desired foot clearance $h$); alternatively, methods in [48, 49] can also be incorporated to achieve the same effect.

Acknowledgments

The authors would like to thank Tingnan Zhang, Linda Luu, Kuang-Huei Lee, Vincent Vanhoucke and Douglas Eck for their valuable discussions and technical support in the experiments. The experiments in this work were performed on GPU virtual machines provided by Google Cloud Platform. This work was partially supported by JST Moonshot R&D Grant Number JPMJPS2011, CREST Grant Number JPMJCR2015 and Basic Research Grant (Super AI) of Institute for AI and Beyond of the University of Tokyo.

References

[1] J. Borenstein and I. Ulrich. The guidecane: a computerized travel aid for the active guidance of blind pedestrians. In Proceedings of International Conference on Robotics and Automation, volume 2, pages 1283–1288. IEEE, 1997.
[2] T.-K. Chuang, N.-C. Lin, J.-S. Chen, C.-H. Hung, Y.-W. Huang, C. Teng, H. Huang, L.-F. Yu, L. Giarré, and H.-C. Wang. Deep trail-following robotic guide dog in pedestrian environments for people who are blind and visually impaired: learning from virtual and real worlds. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 5849–5855. IEEE, 2018.
[3] K. Mehrizi. Quadrupedal robotic guide dog with vocal human-robot interaction. arXiv preprint arXiv:2111.03718, 2021.
[4] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730–27744. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf.
[5] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[6] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[7] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[8] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[9] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.
[10] C. Lynch, A. Wahid, J. Tompson, T. Ding, J. Betker, R. Baruch, T. Armstrong, and P. Florence. Interactive language: Talking to robots in real time. arXiv preprint arXiv:2210.06407, 2022.
[11] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.
[12] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg. ProgPrompt: Generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302, 2022.
[13] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor. ChatGPT for robotics: Design principles and model abilities. Microsoft Autonomous Systems and Robotics Research, 2023.
[14] Unitree Robotics, 2023. https://unitreerobotics.net/.
[15] S. Tellex, N. Gopalan, H. Kress-Gazit, and C. Matuszek. Robots that use language. Annual Review of Control, Robotics, and Autonomous Systems, 3:25–55, 2020.
[16] A. Zeng, A. Wong, S. Welker, K. Choromanski, F. Tombari, A. Purohit, M. Ryoo, V. Sindhwani, J. Lee, V. Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
[17] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas. Translating structured English to robot controllers. Advanced Robotics, 22(12):1343–1359, 2008.
[18] S. Stepputtis, J. Campbell, M. Phielipp, S. Lee, C. Baral, and H. Ben Amor. Language-conditioned imitation learning for robot manipulation tasks. Advances in Neural Information Processing Systems, 33:13139–13150, 2020.
[19] J. Y. Chai, Q. Gao, L. She, S. Yang, S. Saba-Sadiya, and G. Xu. Language to action: Towards interactive task learning with physical agents. In IJCAI, pages 2–9, 2018.
[20] T. M. Howard, S. Tellex, and N. Roy. A natural language planner interface for mobile manipulators. In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages 6652–6659. IEEE, 2014.
[21] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu, et al. RT-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022.
[22] O. Mees, L. Hermann, and W. Burgard. What matters in language conditioned robotic imitation learning over unstructured data. IEEE Robotics and Automation Letters, 7(4):11205–11212, 2022.
[23] M. Shridhar, L. Manuelli, and D. Fox. CLIPort: What and where pathways for robotic manipulation. In Conference on Robot Learning, pages 894–906. PMLR, 2022.
[24] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg. Text2Motion: From natural language instructions to feasible plans. arXiv preprint arXiv:2303.12153, 2023.
[25] A. Bucker, L. Figueredo, S. Haddadin, A. Kapoor, S. Ma, and R. Bonatti. LaTTe: Language trajectory transformer. arXiv preprint arXiv:2208.02918, 2022.
[26] J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim. Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1–9. IEEE, 2018.
[27] R. Grandia, F. Farshidian, R. Ranftl, and M. Hutter. Feedback MPC for torque-controlled legged robots. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4730–4737. IEEE, 2019.
[28] H. Li, T. Zhang, W. Yu, and P. M. Wensing. Versatile real-time motion synthesis via kino-dynamic MPC with hybrid-systems DDP. arXiv preprint arXiv:2209.14138, 2022.
[29] O. Villarreal, V. Barasuol, P. M. Wensing, D. G. Caldwell, and C. Semini. MPC-based controller with terrain insight for dynamic legged locomotion. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 2436–2442. IEEE, 2020.
[30] A. W. Winkler, D. C. Bellicoso, M. Hutter, and J. Buchli. Gait and trajectory optimization for legged systems through phase-based end-effector parameterization. IEEE Robotics and Automation Letters (RA-L), 3:1560–1567, July 2018. doi:10.1109/LRA.2018.2798285.
[31] M. H. Raibert. Trotting, pacing and bounding by a quadruped robot. Journal of Biomechanics, 23:79–98, 1990.
[32] P. Eckert, A. Spröwitz, H. Witte, and A. J. Ijspeert. Comparing the effect of different spine and leg designs for a small bounding quadruped robot. In 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3128–3133. IEEE, 2015.
[33] D. W. Marhefka, D. E. Orin, J. P. Schmiedeler, and K. J. Waldron. Intelligent control of quadruped gallops. IEEE/ASME Transactions on Mechatronics, 8(4):446–456, 2003.
[34] Y. Yang, K. Caluwaerts, A. Iscen, T. Zhang, J. Tan, and V. Sindhwani. Data efficient reinforcement learning for legged robots. In Conference on Robot Learning, pages 1–10. PMLR, 2020.
[35] G. B. Margolis, T. Chen, K. Paigwar, X. Fu, D. Kim, S. Kim, and P. Agrawal. Learning to jump from pixels. arXiv preprint arXiv:2110.15344, 2021.
[36] X. Da, Z. Xie, D. Hoeller, B. Boots, A. Anandkumar, Y. Zhu, B. Babich, and A. Garg. Learning a contact-adaptive controller for robust, efficient legged locomotion. In Conference on Robot Learning, pages 883–894. PMLR, 2021.
[37] J. Siekmann, Y. Godse, A. Fern, and J. Hurst. Sim-to-real learning of all common bipedal gaits via periodic reward composition. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 7309–7315. IEEE, 2021.
[38] A. Iscen, K. Caluwaerts, J. Tan, T. Zhang, E. Coumans, V. Sindhwani, and V. Vanhoucke. Policies modulating trajectory generators. In Conference on Robot Learning, pages 916–926. PMLR, 2018.
[39] J. Hwangbo, J. Lee, A. Dosovitskiy, D. Bellicoso, V. Tsounis, V. Koltun, and M. Hutter. Learning agile and dynamic motor skills for legged robots. Science Robotics, 4(26):eaau5872, 2019.
[40] K. Caluwaerts, A. Iscen, J. C. Kew, W. Yu, T. Zhang, D. Freeman, K.-H. Lee, L. Lee, S. Saliceti, V. Zhuang, et al. Barkour: Benchmarking animal-level agility with quadruped robots. arXiv preprint arXiv:2305.14654, 2023.
[41] G. B. Margolis and P. Agrawal. Walk these ways: Tuning robot control for generalization with multiplicity of behavior. In Conference on Robot Learning, pages 22–31. PMLR, 2023.
[42] N. Rudin, D. Hoeller, P. Reist, and M. Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. In Conference on Robot Learning, pages 91–100. PMLR, 2022.
[43] Y. Tang, J. Tan, and T. Harada. Learning agile locomotion via adversarial training. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6098–6105. IEEE, 2020.
[44] H. Yazied, S. Ariana Villegas, C. Evan, F. Aleksandra, and T. Lydia. Enhancing value estimation policies by post-hoc symmetry exploitation in motion planning tasks. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023.
[45] OpenAI. GPT-4 technical report. arXiv, 2023.
[46] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[47] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, et al. Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470, 2021.
[48] A. Raffin, D. Seidel, J. Kober, A. Albu-Schäffer, J. Silvério, and F. Stulp. Learning to exploit elastic actuators for quadruped locomotion. arXiv preprint arXiv:2209.07171, 2022.
[49] G. Bellegarda and A. Ijspeert. CPG-RL: Learning central pattern generators for quadruped locomotion. IEEE Robotics and Automation Letters, 7(4):12547–12554, 2022.

A More about Random Pattern Generator

Given a specified gait $G$, there are 4 steps for the random pattern generator to create a template, and Figure 7 illustrates the process when $G$ = PACE. To acquire knowledge as to what ranges of settings are feasible for the robot, we first train the robot in simulation for simple locomotion tasks (i.e., following desired linear velocities). We then analyze the learned gait (the agents seem to learn exclusively trotting, probably because of the reward design in those libraries) and measure the ranges. To generate a template in general, we:

• [Step 1] Sample template length $T$. In our implementation, $T \in [24, 28]$; since the control frequency is 50 Hz, this corresponds to a cycle length of 0.48–0.56 seconds.
• [Step 2] Sample a foot-ground contact length ratio within the cycle, $r_\text{contact} \in [0.5, 0.7]$. $T \, r_\text{contact}$ therefore gives the number of '1's and $T (1 - r_\text{contact})$ the number of '0's in each row.
• [Step 3] Scale cycle length and ground contact ratio. This only applies to $G \in \{$BOUND, PACE$\}$, because these two gaits require shorter foot contact to be natural and dynamically more feasible. For $G$ = BOUND, we shorten the foot-ground contact time to 60% of the sampled value (i.e., $r_\text{contact} = 0.6 \, r_\text{contact}$); for $G$ = PACE, we keep $r_\text{contact}$ untouched, but shrink the cycle length to half its sampled value (i.e., $T = 0.5 \, T$).
• [Step 4] Shift patterns for corresponding legs. This step requires domain knowledge of quadrupedal locomotion and is gait type dependent.
For example, for $G$ = BOUND, we place the ones at the beginning of the FL and FR rows and shift those in the RL and RR rows by $0.5 \, T \, r_\text{contact}$ bits to the right; for $G$ = PACE, we place the ones at the beginning in the FL and RL rows and at the end of the FR and RR rows.

[Figure 7 panels, on FL/FR/RL/RR rows: Step 1: Sample template length $T$; Step 2: Sample contact ratio $r_\text{contact}$ ($T r_\text{contact}$ '1's, the rest '0's); Step 3: Scale $T$ or $r_\text{contact}$ if necessary; Step 4: Shift patterns for the corresponding legs ($G$ = PACE shown).]

Figure 7: How the random pattern generator works.
sha1_base64="PkVisEX4bL9EeqvEdAe+AFIsrrc=">AAAB6HicbZDJSgNBEIZr4hbjFpebl8YgeAozgsvNgAc9JpANkiH0dGqSNj0L3T1CHPIEXjwo4tUH8OSTePPom9hZDhr9oeHj/6voqvJiwZW27U8rs7C4tLySXc2trW9sbuW3d+oqSiTDGotEJJseVSh4iDXNtcBmLJEGnsCGN7gc541blIpHYVUPY3QD2gu5zxnVxqpUO/mCXbQnIn/BmUHh4v3u6+ptLy138h/tbsSSAEPNBFWq5dixdlMqNWcCR7l2ojCmbEB72DIY0gCVm04GHZFD43SJH0nzQk0m7s+OlAZKDQPPVAZU99V8Njb/y1qJ9s/dlIdxojFk04/8RBAdkfHWpMslMi2GBiiT3MxKWJ9KyrS5Tc4cwZlf+S/Uj4vOafGk4hRKNkyVhX04gCNw4AxKcA1lqAEDhHt4hCfrxnqwnq2XaWnGmvXswi9Zr9/RsJCf</latexit>T<latexit sha1_base64="PkVisEX4bL9EeqvEdAe+AFIsrrc=">AAAB6HicbZDJSgNBEIZr4hbjFpebl8YgeAozgsvNgAc9JpANkiH0dGqSNj0L3T1CHPIEXjwo4tUH8OSTePPom9hZDhr9oeHj/6voqvJiwZW27U8rs7C4tLySXc2trW9sbuW3d+oqSiTDGotEJJseVSh4iDXNtcBmLJEGnsCGN7gc541blIpHYVUPY3QD2gu5zxnVxqpUO/mCXbQnIn/BmUHh4v3u6+ptLy138h/tbsSSAEPNBFWq5dixdlMqNWcCR7l2ojCmbEB72DIY0gCVm04GHZFD43SJH0nzQk0m7s+OlAZKDQPPVAZU99V8Njb/y1qJ9s/dlIdxojFk04/8RBAdkfHWpMslMi2GBiiT3MxKWJ9KyrS5Tc4cwZlf+S/Uj4vOafGk4hRKNkyVhX04gCNw4AxKcA1lqAEDhHt4hCfrxnqwnq2XaWnGmvXswi9Zr9/RsJCf</latexit>TStep 4: Shifts<latexit sha1_base64="zqE22OeSHl+17jGOGxRyMpYTs3Y=">AAAB+nicbVDJSgNBEO2JW4zbRL15GQyCpzAjuFzESA7Ri0QwCyRD6Ol0kiY9PUN3jRrHfIoXD4oInvIl3jz6J3aWgyY+KHi8V0VVPS/kTIFtfxmJufmFxaXkcmpldW19w0xvllUQSUJLJOCBrHpYUc4ELQEDTquhpNj3OK143fzQr9xSqVggbqAXUtfHbcFajGDQUsNMF07rQO9Bkbh4nr+8KvQbZsbO2iNYs8SZkMzZ4OG78LEdFxvmZ70ZkMinAgjHStUcOwQ3xhIY4bSfqkeKhph0cZvWNBXYp8qNR6f3rT2tNK1WIHUJsEbq74kY+0r1fE93+hg6atobiv95tQhaJ27MRBgBFWS8qBVxCwJrmIPVZJIS4D1NMJFM32qRDpaYgE4rpUNwpl+eJeWDrHOUPbx2MjkbjZFEO2gX7SMHHaMcukBFVEIE3aEn9IJejUfj2Xgz3setCWMys4X+wBj8AJwVl0E=</latexit>G=PACING<latexit sha1_base64="zqE22OeSHl+17jGOGxRyMpYTs3Y=">AAAB+nicbVDJSgNBEO2JW4zbRL15GQyCpzAjuFzESA7Ri0QwCyRD6Ol0kiY9PUN3jRrHfIoXD4oInvIl3jz6J3aWgyY+KHi8V0VVPS/kTIFtfxmJufmFxaXkcmpldW19w0xvllUQSUJLJOCBrHpYUc4ELQEDTquhpNj3OK143fzQr9xSqVggbqAXUtfHbcFajGDQUsNMF07rQO9Bkbh4nr+8KvQbZsbO2iNYs8SZkMzZ4OG78LEdFxvmZ70ZkMinAgjHStUcOwQ3xhIY4bSfqkeKhph0cZvWNBXYp8qNR6f3rT2tNK1WIHUJsEbq74kY+0r1fE93+hg6atobiv95tQhaJ27MRBgBFWS8qBVxCwJrmIPVZJIS4D1NMJFM32qRDpaYgE4rpUNwpl+eJeWDrHOUPbx2MjkbjZFEO2gX7SMHHaMcukBFVEIE3aEn9IJejUfj2Xgz3setCWMys4X+wBj8AJwVl0E=</latexit>G=PACINGFigure 7: How the random pattern generator works.B Reward DesignOur reward design is based on those in legged gym [42]. The total reward consists of 8 weightedreward terms: =Í88=1F8A8, whereF8’s are the weights and A8’s are the rewards. The definitionof each reward term and the value of the weights are in the following. We put the purpose of eachreward term in the bracket at the beginning of the description.•[TaskReward]Linearvelocitytrackingreward. A1=4411EGˆEGo2 ̧E2Ho,whereEGandˆEGarethe current and desired linear velocities along the robot’s heading direction, and EHis thecurrent linear velocity along the lateral direction. All velocities are in the base frame, andF1=1.•[Task Reward] Angular velocity tracking reward. A2=44l2I, wherelIis the currentangular yaw velocity in the base frame and F2=05.•[Task Reward] Penalty on foot contact pattern violation. A3=14Í48=1j28ˆ28j, where28ˆ282f01gare the realized and desired foot-ground contact indicators for the 8-th foot,andF3=1.•[Sim-to-Real] Regularization on action rate. A4=Í128=110C0C1o2where0Cand0C1arethe controller’s output at the current and the previous time steps, and F4=0005.•[Sim-to-Real] Penalty on roll and pitch angular velocities. We encourage the robot’s baseto be stable during motion and hence A5=l2G ̧l2H, wherelGandlHare the current roll13andpitchangularvelocitiesinthebaseframe. 
• [Sim-to-Real] Penalty on linear velocity along the z-axis. Similar to the previous term, we use this term to encourage base stability during motion. $r_6 = -v_z^2$, where $v_z$ is the current linear velocity along the z-axis in the base frame. This penalty does not apply to $G$ = BOUND either, and $w_6 = 2$.
• [Natural Motion] Penalty on body collision. $r_7 = -\sum_{i=1}^{N} \mathbb{1}\{F_i > 0.1\}$, where $F_i$ is the contact force on the $i$-th body. In our experiments $N = 8$ (i.e., 4 thighs and 4 calves), and $w_7 = 1$.
• [Natural Motion] Penalty on deviation from the default pose. $r_8 = -\sum_{a_t \in \text{hip}} |a_t|$, where the $a_t$'s are the actions (i.e., deviations from the default joint positions) applied to the hip joints, and $w_8 = 0.03$.

C Training Configurations

C.1 Control

We use PD control to convert positions to torques in our system. The base values for the two gains are $k_p = 20$ and $k_d = 0.5$. Our control frequency is 50 Hz.

C.2 Gait Sampling

We randomly assign a gait to a robot at environment resets, and also sample it again every 150 steps in simulation. Of the 5 $G$'s, some gaits are harder to learn than others. To avoid the case where the hard-to-learn gaits die out, leaving the controller to learn only on the easier gaits, we restrict the sampling distribution such that the ratios of the 5 $G$'s are always approximately the same.

C.3 Reinforcement Learning

We use the Proximal policy optimization (PPO) [46] algorithm as our reinforcement learning method to train the controller. In our experiments, PPO trains an actor-critic policy. The architecture of the actor is introduced in Section 3.2.3, and the critic has the identical network architecture except that (1) its output size is 1 instead of 12, and (2) it also receives the base velocities in the local frame as its input. We keep all the hyper-parameters the same as in [42] and train for 1000 iterations. For safety reasons, we end an episode early if the body height of the robot is lower than 0.25 meters. Training can be done on a single NVIDIA V100 GPU in approximately 15 minutes.

C.4 Domain Randomization

During training, we sample noises $n \sim \text{Unif}$ and add them to the controller's observations. We use PD control to convert positions to torques in our system, and domain randomization is also applied to the two gains $k_p$ and $k_d$. Table 3 gives the components to which noises $n$ were added and their corresponding ranges.

Table 3: Domain randomization settings.

#  Component                          Noise Range
1  Base linear velocities             [-2, 2]
2  Base angular velocities            [-0.25, 0.25]
3  Gravity vector in the base frame   [-1, 1]
4  Joint positions                    [-1, 1]
5  Joint velocities                   [-0.05, 0.05]
6  k_p                                [-5, 0]
7  k_d                                [0, 0.25]

D Objective Evaluation on Generated Patterns

We implemented a domain knowledge based checker program for each of the commands in Table 1, and evaluated the generated patterns with these checkers to produce Figure 5. By domain knowledge, we mean knowledge about quadrupedal locomotion as to what each gait pattern should look like (e.g., the robot should move its diagonal legs together when trotting, while in pacing gait the robot should move the legs on the left/right side of the body together, etc.).
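As an illustration of what such a domain-knowledge checker can look like, below is a hypothetical checker for the trotting commands, written from the description above (diagonal legs strike together, and the two diagonal pairs alternate). It is our own sketch of the idea, not the authors' checker code.

    import numpy as np

    def check_trot(template: np.ndarray) -> bool:
        """template: (4, T) binary array with rows FL, FR, RL, RR.
        Trotting: diagonal pairs (FL, RR) and (FR, RL) move together,
        and the two diagonal pairs must not be identical."""
        fl, fr, rl, rr = template
        diagonals_in_phase = np.array_equal(fl, rr) and np.array_equal(fr, rl)
        pairs_alternate = not np.array_equal(fl, fr)
        return diagonals_in_phase and pairs_alternate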
nB9zUwS2gpI | Under review at the GSK.ai CausalBench challenge (ICLR 2023)

A SUPERVISED LIGHTGBM-BASED APPROACH TO THE GSK.AI CAUSALBENCH CHALLENGE (ICLR 2023)

TEAM GUANLAB REPORT SUBMISSIONS

Anonymous authors
Paper under double-blind review

ABSTRACT

In this challenge, we transformed the task of detecting gene pairs with causal relationships into a supervised learning problem. We constructed a dataset for all gene pairs, with initial labels determined by gene expression correlations. A LightGBM model was trained and applied to the same data for prediction. The top 1001 pairs with the highest prediction scores were selected. In local experiments, this solution achieved a 0.3779 AUC score on the RPE1 data and a 0.3265 score on the K562 data.

1 NOTATIONS

In addition to standard notations, we defined several custom notations, listed below, to describe the method more efficiently.

$\langle g_i, g_j \rangle$ : A directed gene pair from $g_i$ to $g_j$
$M_{g_i, g_j}$ : Select the rows for $g_i$ and the columns for $g_j$ from the expression matrix $M$
$\mu_{M,0}$ : The column-wise mean value of the expression matrix
$\sigma_{M,0}$ : The column-wise standard deviation of the expression matrix

2 METHODS

2.1 CALCULATE THE CORRELATIONS

We calculated correlations for all possible gene pairs $\langle g_i, g_j \rangle$, where $g_i$ and $g_j$ belonged to the columns of the expression matrix $M_{k \times l}$ and $i \neq j$. The input expression data were the concatenation of the interventional data ($M_{g_i, g_i}$, $M_{g_i, g_j}$) and the samples from the observational data ($M_{\text{non-targeting}, g_i}$, $M_{\text{non-targeting}, g_j}$). The observational data samples had the same lengths as the interventional data. If $g_i$-related cells were not present in the expression matrix due to partial selection, the input data would be $M_{\text{non-targeting}, g_i}$ and $M_{\text{non-targeting}, g_j}$. The resulting correlation matrix was asymmetric and had the shape $(l, l)$.

2.2 CONSTRUCT THE DATASET

The initial labels of gene pairs were determined using a correlation threshold $T$. Pairs with correlation scores higher than 0.1 were labeled as positive samples. To generate the features, we first normalized the expression matrix using $(M - \mu_{M,0}) / \sigma_{M,0}$. For each gene pair $\langle g_i, g_j \rangle$, we extracted four features from the matrix: $M_{\text{non-targeting}, g_i}$ and $M_{\text{non-targeting}, g_j}$ (average observational expression of $g_i$ and $g_j$), and $M_{g_i, g_i}$ and $M_{g_i, g_j}$ (average expression under intervention on $g_i$). If $g_i$-related cells were missing in the expression matrix, the last two features would be 0 and NaN. The output dataset would have $l \times (l - 1)$ rows and 5 columns.

Table 1: LightGBM hyper-parameters

Parameter           Value
boosting_type       gbdt
objective           binary
metric              binary_logloss
num_leaves          5
max_depth           2
min_data_in_leaf    5
learning_rate       0.05
min_gain_to_split   0.01
num_iterations      1000

2.3 TRAIN THE MODEL AND PREDICT

The LightGBM model was set up using the hyperparameters listed in Table 1 and trained on the entire dataset. Predictions were obtained by applying the model to the same data used for training. We selected the top 1001 gene pairs with the highest prediction scores as our final outputs.
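A minimal sketch of this training-and-scoring step, assuming a feature table X (one row per directed pair, 5 columns) and 0/1 labels y built as in Section 2.2; the variable names are illustrative and not from the report.

    import lightgbm as lgb
    import numpy as np

    params = {
        "boosting_type": "gbdt",
        "objective": "binary",
        "metric": "binary_logloss",
        "num_leaves": 5,
        "max_depth": 2,
        "min_data_in_leaf": 5,
        "learning_rate": 0.05,
        "min_gain_to_split": 0.01,
    }

    # X: (l*(l-1), 5) feature matrix over all directed gene pairs; y: initial labels.
    # LightGBM handles the NaN features natively.
    train_set = lgb.Dataset(X, label=y)
    model = lgb.train(params, train_set, num_boost_round=1000)

    # Score the same pairs and keep the 1001 highest-scoring ones.
    scores = model.predict(X)
    top_pairs = np.argsort(scores)[::-1][:1001]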
3 EXPERIMENTS

To determine the details of the training parameters, including the method for initializing positive samples ($K$ and $T$), the number of negative samples ($R$), the number of output gene pairs ($N$), normalization methods, and ensembles, we established two stages of experiments on partial intervention data with one partial seed and five partial seeds.

$K$ and $T$ were parameters for selecting positive samples. We labeled the top $K$ correlated pairs, or those with scores higher than $T$, as positive samples. In some experiments, we randomly selected $K \times R$ negative samples and trained the model alongside the positive ones. We also attempted to train multiple models for an ensemble by selecting different negative samples. The ensemble prediction scores were the averages from these models.

Evaluation scores were AUCs. In the first stage, we observed that top-performing methods might have contradictory results on K562 and RPE1 while having close scores (Table 2). These methods were selected for the second stage evaluation, where we determined the final submission (Table 3).

4 DISCUSSION

In summary, we developed a supervised algorithm to solve the unsupervised gene causality prediction problem. Our experiments demonstrated the model's ability to learn the relationships that determine causalities from the expression data and to correct false positive and false negative samples from the initial labels. The model might benefit from the uncertainty of the initial labels, as including more moderately correlated pairs as positive samples could improve performance. We observed about 0.1 to 0.2 AUC score improvements compared to the GRNBoost and DCDI baseline models, for which we also selected the top 1000 pairs as outputs.

We attempted to incorporate the correlation matrix into the baseline algorithms. Since GRNBoost had the highest Wasserstein scores when only considering observational data, we first selected 20,000 candidates with the highest feature importance scores from the model trained on observational data and chose 1000 based on correlation scores. However, this approach failed to surpass direct correlation usage. As the number of candidates in the first selection increased, performance approached the correlation results, suggesting that the GRNBoost model might not provide information beyond correlations.

For the DCDI algorithms, we tried replacing the initial adjacency matrix and the Gumbel adjacency matrix with knowledge from the correlation matrix. The improvement over the baseline was nearly 0.1 but still worse than directly using the correlation matrix. Additionally, the algorithm seemed vulnerable to node numbers. We were unable to increase the gene numbers for each partition, as the program reported overflow issues.

Table 2: Performances of 1 partial seed data

K or T   N     R       Normalize   Ensemble   K562     RPE1
Top 1000 absolute correlation (baseline)       0.2890   0.3397
500      1000  2       /           /          0.1861   0.3040
2000     1000  2       /           /          0.2393   0.3352
2000     1000  3       /           True       0.2561   0.3608
2000     2000  3       /           /          0.2278   0.2767
5000     1000  2       /           /          0.2524   0.3552
5000     1000  2       /           True       0.2614   0.3541
5000     1000  3       /           True       0.2635   0.3598
7000     1000  3       /           /          0.2684   0.3608
7000     1000  AllNeg  /           /          0.2826   0.3846
7000     1000  AllNeg  normalize   /          0.3023   0.3744
7000     1000  AllNeg  quantile    /          0.2843   0.3768
0.1      1000  AllNeg  normalize   /          0.3148   /
0.2      1000  AllNeg  normalize   /          0.3072   /

Table 3: Performances of 5 partial seeds data

K or T   N     R       Normalize   Ensemble   K562     RPE1
Top 1000 absolute correlation (baseline)       0.2922   0.3255
5000     1000  AllNeg  /           /          0.2930   0.3672
5000     1000  AllNeg  normalize   /          0.3062   0.3655
5000     1000  AllNeg  quantile    /          0.2992   0.3632
7000     1000  AllNeg  /           /          0.2944   0.3659
0.1      1000  AllNeg  normalize   /          0.3265   0.3780
0.2      1000  AllNeg  normalize   /          0.3138   0.3614

AUTHOR CONTRIBUTIONS

YG and KD designed and implemented the algorithm, and wrote and proofread the report.
hYT_pgTxjrR | Under review at the GSK.ai CausalBench challenge (ICLR 2023)

CHALLENGE REPORT

Anonymous authors
Paper under double-blind review

ABSTRACT

This brief report describes an approach to modify the graph inference function for the CausalBench challenge, which is based on the dataset described in Chevalley et al. (2022).

1 METHOD OVERVIEW

We observe that for the DCDI algorithm the genes are partitioned into small groups, and the algorithm is applied to those groups independently. Even for a rather small number of genes, say 500, and 50 nodes per partition element, the probability of a specific edge being included in one partition is only 10%. Thus a good choice of partitions can greatly increase the number of suitable candidate edges the DCDI algorithm can potentially find. We therefore constructed clusterings based on similarities of genes which might indicate closeness in the causal structure, and therefore potential graph edges. We then ran DCDI on the individual clusters.

We consider two suggestions to obtain the clustering. First we defined

$d(k, l) = 1 - \text{corrcoef}(X_k, X_l)$    (1)

where $X_k$ and $X_l$ are the expressions of genes $k$ and $l$. Then we used spectral clustering using $d$. We fixed the average cluster size $n_\text{avg}$ and the maximal cluster size $n_\text{max}$, and split too-large clusters randomly into two subclusters. In addition, we considered the mean shifts between environments,

$\mu_k^{(i)} = \mathbb{E}^{(i)}(X_k) - \mathbb{E}(X_k)$    (2)

where $i$ denotes interventional distribution $i$. For each gene $k$ we thresholded

$s_k^{(i)} = 1$ if $|\mu_k^{(i)}|$ is larger than 90% of the $|\mu_k^{(j)}|$, and $s_k^{(i)} = 0$ else.    (3)

We define the similarity matrix $S_{kl} = \sum_i s_k^{(i)} s_l^{(i)}$. Then we construct partitions by randomly selecting a cluster seed (a randomly selected gene $k_1$), setting $C_1 = \{k_1\}$, and then greedily adding nodes $k_{n+1}$ to the cluster $C_n$ such that $k_{n+1} \in \arg\max_l \sum_{k_i \in C_n} S_{k_i l}$, until a fixed cluster size $n_\text{avg}$ is reached. This is repeated 3 times. Again, the rationale is that genes whose expression levels change in a similar pattern for the provided interventional data might be close in the causal graph. For the total of 4 partitions we run the DCDI algorithm and then threshold the edge probabilities for each run by

$p \to \text{ReLU}(p - 0.5)$,    (4)

i.e., we keep the information about edges with probability at least 0.5 predicted by the algorithm. In the end we add up the thresholded edge probabilities over the four partitions (this favours edges that end up in the same cluster for several partitions, which is intended). The 2000 edges with the highest aggregated probabilities are returned by the algorithm. We chose $n_\text{avg} = 30$ as the average or fixed size for the clusters and $n_\text{max} = 50$ as the maximal size, after the script crashed for larger partition sizes.

All other parts of the inference function remained the same as in the provided code.

REFERENCES

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.
hFx9EUs320I | Under review at the GSK.ai CausalBench challenge (ICLR 2023)

CAUSALBENCH CHALLENGE 2023 - MINOR IMPROVEMENTS TO THE DIFFERENTIABLE CAUSAL DISCOVERY FROM INTERVENTIONAL DATA MODEL

Anonymous authors
Paper under double-blind review

ABSTRACT

For the creation of new drugs, understanding how genes interact with one another is crucial. Researchers can find new potential drugs that could be utilised to treat diseases by looking at gene-gene interactions. Studying gene-gene interactions at scale proved challenging in the past: it was necessary to measure the expression of thousands or even millions of genes in each individual cell. Recently, high-throughput sequencing technology has made it possible to detect gene expression at this level. These advances have led to the development of new methods for inferring causal gene-gene interactions. These methods use single-cell gene expression data to identify genes that are statistically associated with each other. However, it is difficult to ensure that these associations are causal, rather than simply correlated. The CausalBench Challenge therefore seeks to improve our ability to understand the causal relationships between genes by advancing the state-of-the-art in inferring gene-gene networks from large-scale real-world perturbational single-cell datasets. This information can be used to develop new drugs and treatments for diseases. The main goal of this challenge is to improve one of two existing methods for inferring gene-gene networks from large-scale real-world perturbational single-cell datasets: GRNBoost or Differentiable Causal Discovery from Interventional Data (DCDI). This paper describes three small improvements to the DCDI baseline.

1 INTRODUCTION

Causal inference is a fundamental problem in science. Experiments are conducted in all fields of research to understand the underlying causal dynamics of systems. This is motivated by the desire to take actions that induce a controlled change in a system. However, studying causality in real-world environments is often difficult because it generally requires either the ability to intervene and observe outcomes under both interventional and control conditions, or the use of strong and untestable assumptions that cannot be verified from observational data alone.

To address these problems, CausalBench (Chevalley et al., 2022) was introduced. CausalBench is a comprehensive benchmark suite for evaluating network inference methods on perturbational single-cell RNA sequencing data. It includes two curated, openly available datasets with over 200,000 interventional samples each, a set of meaningful benchmark metrics, and baseline implementations of relevant state-of-the-art methods. The CausalBench challenge also provides two different baseline methods for inferring causal relationships, GRNBoost (Aibar et al., 2017) and DCDI (Brouillard et al., 2020), and proposes changing one of the algorithms to improve its performance. GRNBoost is a method for inferring gene regulatory networks from observational data; it can be improved by using interventional data. DCDI is a method for inferring gene regulatory networks from interventional data; it can be improved by tuning its parameters and by using more data. In this work, I chose to modify the DCDI baseline and apply three small modifications to the algorithm, which are introduced in Section 2.
2 METHODOLOGY

2.1 GREEDY PARTITIONING ALGORITHM

In the baseline implementation of DCDI, the genes were partitioned into random independent sub-graphs, since DCDI cannot handle the full graph, as it does not scale well in the number of nodes. This partitioning scheme sacrifices possible causal links between genes in different sub-graphs to make the DCDI algorithm more tractable. So, to minimize the loss of any valid causal links, we need the partitioning algorithm to group the genes such that the genes in each sub-graph are related to each other as much as possible. The basic idea of the developed partitioning algorithm is to define a measure of relationship between every pair of genes (adj in the algorithm below), then, after initializing the sub-graphs with random genes, divide the genes into partitions using a greedy algorithm, i.e., a gene is assigned to the sub-graph where it has the maximum possible relationship with all other genes.

The greedy partitioning algorithm:

    import random
    import numpy as np
    from sklearn.preprocessing import normalize

    # initialize the algorithm parameters
    indices = list(range(len(gene_names)))
    partition_length = int(len(indices) / self.gene_partition_sizes)
    used = [False for i in range(len(indices))]
    random.shuffle(indices)

    # initialize the adjacency (relationship) matrix from binarized expression
    adj = (expression_matrix > 0).astype(int)
    adj = normalize(adj, norm="l2", axis=0)
    adj = np.matmul(np.transpose(adj), adj)

    # initialize partitions with random genes
    partitions = []
    for i in range(partition_length):
        partitions = partitions + [[indices[i]]]
        used[indices[i]] = True

    # greedily divide the remaining genes into partitions
    while not all(used):
        for i in range(partition_length):
            if all(used):
                break
            max_dist, max_ind = -1, -1
            for j in range(len(indices)):
                if not used[indices[j]]:
                    dist = 0
                    for k in partitions[i]:
                        dist = dist + adj[k, indices[j]]
                    if dist > max_dist:
                        max_dist = dist
                        max_ind = indices[j]
            partitions[i] = partitions[i] + [max_ind]
            used[max_ind] = True

    # return the partitions
    return partitions

2.2 AUGMENTING THE DATA

In this work, the data is augmented to double its original size. The augmentation algorithm is simple: randomly select two samples with the same intervention, average the two samples, and add the average as a new sample.

2.3 THE DEEP SIGMOIDAL FLOW MODEL PARAMETER TUNING

In the baseline model, the sigmoidal flow has two conditional layers with 15 dimensions each, and two flow layers with 10 dimensions each. However, it is a rule of thumb that each variable in a neural network needs 25 examples to be trained well and to produce similar results across multiple runs, so the dimensions of the conditional and flow layers are set according to a simple heuristic with upper and lower bounds. The heuristic: $X = \sqrt{\text{len}(intervention) / 25 / 3 / 2 / 2 / 1.5}$, the dimension of the conditional layer is set to $\min(18, \max(5, \text{round}(1.5 X)))$, and the dimension of the flow layer is set to $\min(12, \max(3, \text{round}(X)))$.
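A small sketch of this heuristic is shown below, assuming len(intervention) is the number of training samples for the intervention regime (an assumption on our part); the function name is illustrative.

    import math

    def sigmoidal_flow_dims(num_samples: int):
        """Pick conditional/flow layer widths from the 25-examples-per-variable
        rule of thumb, with the stated upper and lower bounds."""
        X = math.sqrt(num_samples / 25 / 3 / 2 / 2 / 1.5)
        cond_dim = min(18, max(5, round(1.5 * X)))
        flow_dim = min(12, max(3, round(X)))
        return cond_dim, flow_dim

    # e.g., sigmoidal_flow_dims(20000) -> (10, 7); small regimes fall back to (5, 3).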
2.3 DEEP SIGMOIDAL FLOW MODEL PARAMETER TUNING

In the baseline model, the sigmoidal flow has two conditional layers with 15 dimensions each, and two flow layers with 10 dimensions each. However, a rule of thumb is that each variable in a neural network needs about 25 examples to be trained well and to produce similar results across multiple runs, so the dimensions of the conditional and flow layers are set according to a simple heuristic with upper and lower bounds. The heuristic computes X = sqrt(len(interventions) / 25 / 3 / 2 / 2 / 1.5); the dimension of the conditional layers is then set to min(18, max(5, round(1.5 * X))), and the dimension of the flow layers is set to min(12, max(3, round(X))).

3 CONCLUSION AND FUTURE WORK

In this work, three minor improvements to the DCDI baseline were introduced: a new partitioning algorithm for the genes, a data augmentation scheme, and parameter selection formulas for the deep sigmoidal flow model. These modifications improved the performance of the DCDI baseline on the public test set. For future work, different measures of relationship between genes can be explored in the partitioning algorithm; a tractable, more nearly optimal partitioning algorithm could also be derived in place of the proposed greedy algorithm.

REFERENCES

Sara Aibar, Carmen Bravo González-Blas, Thomas Moerman, Vân Anh Huynh-Thu, Hana Imrichova, Gert Hulselmans, Florian Rambow, Jean-Christophe Marine, Pierre Geurts, Jan Aerts, et al. SCENIC: single-cell regulatory network inference and clustering. Nature Methods, 14(11):1083-1086, 2017.

Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 33:21865-21877, 2020.

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.
Wf0QRYUkhwV | CATRAN: ULTRA-LIGHT NEURAL NETWORK FOR PREDICTING GENE-GENE INTERACTIONS FROM SINGLE-CELL DATA

Anonymous authors
Paper under double-blind review

ABSTRACT

Part of the difficulty of learning a gene-regulatory network from expression data is that edges in such a network represent different interactions with different effect sizes. Modeling gene associations therefore requires learning an individual function for each pair of interacting genes. This may greatly inflate the number of parameters in a model and lead to insufficient generalization. In this paper we propose a method for gene regulatory network inference, called CaTran (Causal Transformer), which avoids explicitly learning pairwise relations between genes, allowing it to significantly reduce the size of the model. The key feature of this approach is learning a low-dimensional embedding for each gene and then using a self-attention mechanism to estimate its relation to other genes. Our method is applicable both to observational data and to data with interventions. For the latter it implements a differentiable gene importance test and forces attention values to be in accordance with it. Because the gene regulatory network in CaTran is learned as a soft adjacency matrix, it allows sampling graphs with an arbitrary number of edges based on a set threshold. Comparison of these graphs with gene networks from databases showed that even for large graphs the edges are predicted with high precision.

1 MODEL DESCRIPTION

Here we present our solution to the CausalBench challenge (Chevalley et al., 2022).

1.1 DATA PREPROCESSING

Our experiments have shown that any additional preprocessing of the data yields at best no increase in model performance compared to running on raw counts. We tried different normalization methods, including the standard scanpy pipeline (Wolf et al., 2018) and the CLR transform (Stoeckius et al., 2017). The decrease in performance is likely associated with the spurious correlation patterns which arise in the data after normalization. We also tried imputing data using various techniques such as MAGIC (Dijk et al., 2018) and SVD-based imputation (replacing zero values with the inverse SVD transform). The ineffectiveness of these methods indicates the importance of zeros in the data as a biological signal for predicting gene regulatory networks (Jiang et al., 2022).

1.2 TRAINING OBJECTIVE

CaTran is built upon the DCDI framework (Brouillard et al., 2020), simultaneously simplifying it and regularizing its behavior. From its predecessor, CaTran inherits the basic outline of the learning objective. CaTran does not directly optimize inference of gene interactions but instead solves a gene expression prediction task, and in the end it uses some of the model parameters as a proxy for gene interaction scores. Unlike DCDI, however, CaTran does not encode these scores explicitly as a learnable adjacency matrix but computes them using a self-attention mechanism. Another key distinction of CaTran from DCDI is that, instead of modeling the distribution of gene expression, it directly predicts the expression of genes in a minibatch. We did this because the expression of genes in single-cell data does not follow any parametric probability distribution.

The model is trained using mini-batches which include a subset of cells and a subset of genes. The typical size of a mini-batch is 2048 cells and 500 genes. If the number of genes is less than 500, the mini-batch includes all genes.
Using large batches with more than 1000 genes resulted in decreased performance. In each mini-batch, a randomly sampled fraction of genes is perturbed by shuffling values between the selected genes. Initially, we tried zeroing out these genes, but the new strategy yielded better results. We also experimented with augmenting different fractions of the input and established empirically that hiding the expression of as much as 80% of genes leads to better results. Overall, this strategy is reminiscent of how masked language models such as BERT are trained (Devlin et al., 2019).

The objective of the neural network is to predict the true values of the genes with augmented expression, and so its loss function consists of three terms, two of which correspond to this task. The model separately calculates the predicted expression of the augmented genes and of the genes with unaugmented expression, and computes a Huber loss (Gokcesu & Gokcesu, 2021) for each. These two terms are combined with weights of 0.7 and 0.3, respectively. The second term is added so that the model does not forget the true values of the genes with unaugmented expression. The choice of Huber loss rather than MSE is crucial for maintaining high performance because it reduces the effect of outliers. A sketch of this objective is given below.

To optimize the given objective we used the Lion optimizer (Yazdani & Jolai, 2016). We compared it to AdamW (Loshchilov & Hutter, 2019) and found it preferable. By default we train for 25 epochs with a low learning rate (0.001) and weight decay (0.05). The model weights are initialized with values from a normal distribution with zero mean and a standard deviation of 0.001. The choice of this initialization strategy was dictated by the fact that we used SiLU as the activation function (Elfwing et al., 2017).
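The following is a minimal PyTorch sketch of the two reconstruction terms of this objective (the third, interventional term is introduced in Section 1.4), assuming predictions and targets of shape (cells x genes) and a boolean mask marking the shuffled genes; the function and variable names are illustrative rather than taken from the CaTran code.

import torch
import torch.nn.functional as F

def catran_reconstruction_loss(pred, target, augmented_mask):
    # pred, target: (n_cells, n_genes); augmented_mask: (n_genes,) boolean,
    # True for genes whose values were shuffled in this mini-batch.
    loss_augmented = F.huber_loss(pred[:, augmented_mask],
                                  target[:, augmented_mask])
    loss_clean = F.huber_loss(pred[:, ~augmented_mask],
                              target[:, ~augmented_mask])
    # 0.7 on reconstructing the perturbed genes, 0.3 to keep the model
    # from forgetting the unperturbed genes.
    return 0.7 * loss_augmented + 0.3 * loss_clean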
1.3 CATRAN ARCHITECTURE

The guiding principle for the CaTran architecture (Figure 1A) was the idea that interactions between genes can be encoded in learnable gene embeddings. This helps to avoid learning these interactions explicitly. In contrast, the original DCDI approach learns the whole adjacency matrix, which is quadratic in the number of genes. Similarly, CellOracle trains a separate linear model for each network edge (Kamimoto et al., 2023). In our model we compress this information into low-dimensional embeddings. A manual search indicated that the optimal embedding size is 40, though this is a very robust hyperparameter and altering it does not affect the performance of the program dramatically. Using embeddings also allows us to reduce the number of genes used in a mini-batch.

CaTran next uses these embeddings to estimate interactions between genes using self-attention. We tried different implementations and in the end settled on the following structure. The embeddings are passed to a linear layer which uses the same weights to transform each embedding; then the matrix of dot products between these embeddings is computed. We then apply a softmax to this matrix along the dimension which conceptually represents the incoming edges of the gene regulatory network. The resulting scores approximate gene interaction scores. We tried binarizing this matrix based on a selected threshold, as proposed by DCDI, but this led to a drop in performance. This indicates that gene-gene connectivity on its own is not enough to model gene interactions.

After the attention weights have been estimated, the model modifies the embeddings by adding the gene expression values to them. They are then passed through two linear layers with batch normalization layers between them but without an activation. This empirically led to better results, which may be related to issues of numerical instability, since the embeddings are initialized with very small values. The modified embeddings are then passed to an attention block (Figure 1B), which updates each gene embedding based on the embeddings of the other genes using the precomputed attention weights. They are then passed to a batch normalization layer, a linear layer, another batch normalization layer, and finally a non-linear activation function. The output of the attention block is passed to two linear layers which produce the output.

[Figure 1: Schematic of CaTran. A: The basic outline of the model. B: Schematic of the attention block.]

1.4 INTERVENTIONAL LOSS

Though CaTran is able to make accurate predictions in the observational regime, its true power is achieved when it is used with interventional data. To make use of the knowledge about perturbed genes, we introduce an interventional loss term into our model. Its purpose is to make the attention weights follow importance scores calculated from an analysis of the associations between a perturbed gene and all other genes. The idea behind these scores is inspired by Wu et al. (2023). In essence, its premise is that if one gene is associated with another, then its expression should help the model make accurate predictions. So, for each perturbed gene in a mini-batch we estimate the error of the gene expression predictions within cells where this gene is active and within cells where it was turned off, take the ratio of these two error estimates, subtract one, and transform with a sigmoid:

    Sigmoid( Huber(noninterv_pred, noninterv_true) / Huber(interv_pred, interv_true) - 1 ).

We then penalize the attention coefficients with a Huber loss if they deviate from these importance scores (a sketch is given below).

1.5 RETRIEVING AN ADJACENCY MATRIX

At the end of training, CaTran has learned embeddings which can be used to predict associations between genes. To obtain an adjacency matrix, we calculate pairwise dot products between embeddings and then transform them using a softmax. Finally, CaTran ranks all edges in this soft adjacency matrix by their attention weight and sets the top 1000 edges to 1 and the rest to 0. One can also easily sample graphs with an arbitrary number of edges.

CaTran outputs directed graphs; however, unlike DCDI, it allows cycles in the graph. This was done intentionally since, as we noticed, biological networks do not conform to acyclicity constraints.

1.6 IMPLEMENTATION DETAILS

The model is implemented using PyTorch and PyTorch Lightning. As the title of this paper suggests, our model uses comparatively few learnable parameters. The total number of parameters can be estimated with the formula 9000 + 40 * number_of_genes.
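A minimal sketch of the importance score and the associated penalty, under the assumption that per-gene prediction errors are computed separately on cells where the perturbed gene is active and on cells where it is knocked down; all names are illustrative and the reduction over cells is a guess at an unstated detail.

import torch
import torch.nn.functional as F

def interventional_importance(pred_obs, true_obs, pred_interv, true_interv):
    # Per-gene Huber errors when the perturbed gene is active (observational
    # cells) vs. knocked down (interventional cells).
    err_obs = F.huber_loss(pred_obs, true_obs, reduction='none').mean(dim=0)
    err_interv = F.huber_loss(pred_interv, true_interv,
                              reduction='none').mean(dim=0)
    # Ratio of the two error estimates, minus one, passed through a sigmoid,
    # following the formula in Section 1.4.
    return torch.sigmoid(err_obs / err_interv - 1.0)

def interventional_loss(attention_row, importance):
    # Penalize attention coefficients that deviate from the importance scores.
    return F.huber_loss(attention_row, importance)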
2 CITATIONS

Brouillard, P., Lachapelle, S., Lacoste, A., Lacoste-Julien, S., & Drouin, A. (2020). Differentiable Causal Discovery from Interventional Data (arXiv:2007.01754). arXiv. https://doi.org/10.48550/arXiv.2007.01754

Chevalley, M., Roohani, Y., Mehrjou, A., Leskovec, J., & Schwab, P. (2022). CausalBench: A Large-scale Benchmark for Network Inference from Single-cell Perturbation Data (arXiv:2210.17283). arXiv. https://doi.org/10.48550/arXiv.2210.17283

Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arXiv:1810.04805). arXiv. https://doi.org/10.48550/arXiv.1810.04805

van Dijk, D., Sharma, R., Nainys, J., Yim, K., Kathail, P., Carr, A. J., Burdziak, C., Moon, K. R., Chaffer, C. L., Pattabiraman, D., Bierie, B., Mazutis, L., Wolf, G., Krishnaswamy, S., & Pe'er, D. (2018). Recovering Gene Interactions from Single-Cell Data Using Data Diffusion. Cell, 174(3), 716-729.e27. https://doi.org/10.1016/j.cell.2018.05.061

Elfwing, S., Uchibe, E., & Doya, K. (2017). Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (arXiv:1702.03118). arXiv. https://doi.org/10.48550/arXiv.1702.03118

Gokcesu, K., & Gokcesu, H. (2021). Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics (arXiv:2108.12627). arXiv. http://arxiv.org/abs/2108.12627

Jiang, R., Sun, T., Song, D., & Li, J. J. (2022). Statistics or biology: The zero-inflation controversy about scRNA-seq data. Genome Biology, 23(1), 31. https://doi.org/10.1186/s13059-022-02601-5

Kamimoto, K., Stringa, B., Hoffmann, C. M., Jindal, K., Solnica-Krezel, L., & Morris, S. A. (2023). Dissecting cell identity via network inference and in silico gene perturbation. Nature, 614(7949). https://doi.org/10.1038/s41586-022-05688-9

Loshchilov, I., & Hutter, F. (2019). Decoupled Weight Decay Regularization (arXiv:1711.05101). arXiv. https://doi.org/10.48550/arXiv.1711.05101

Stoeckius, M., Hafemeister, C., Stephenson, W., Houck-Loomis, B., Chattopadhyay, P. K., Swerdlow, H., Satija, R., & Smibert, P. (2017). Simultaneous epitope and transcriptome measurement in single cells. Nature Methods, 14(9). https://doi.org/10.1038/nmeth.4380

Wolf, F. A., Angerer, P., & Theis, F. J. (2018). SCANPY: Large-scale single-cell gene expression data analysis. Genome Biology, 19(1), 15. https://doi.org/10.1186/s13059-017-1382-0

Wu, A. P., Markovich, T., Berger, B., Hammerla, N., & Singh, R. (2023). Causally-guided Regularization of Graph Attention Improves Generalizability (arXiv:2210.10946). arXiv. http://arxiv.org/abs/2210.10946

Yazdani, M., & Jolai, F. (2016). Lion Optimization Algorithm (LOA): A nature-inspired metaheuristic algorithm. Journal of Computational Design and Engineering, 3(1), 24-36. https://doi.org/10.1016/j.jcde.2015.06.003
TOaPl9tXlmD | Submitted to the GSK.ai CausalBench challenge (ICLR 2023)

LEARNING GENE REGULATORY NETWORKS UNDER THE FEW ROOT CAUSES ASSUMPTION

Panagiotis Misiakos, Chris Wendler and Markus Püschel
Department of Computer Science, ETH Zurich
{pmisiakos, wendlerc, markusp}@ethz.ch

ABSTRACT

We present a novel directed acyclic graph (DAG) learning method for data generated by a linear structural equation model (SEM) and apply it to learn from gene expression data. In prior work, linear SEMs can be viewed as a linear transformation of a dense input vector of random-valued root causes (as we define them). In our novel setting we further impose the assumption that the output data are generated via a sparse input vector, or equivalently, few root causes. Interestingly, this assumption can be viewed as a form of Fourier sparsity based on a recently proposed theory of causal Fourier analysis. Our setting is identifiable and the true DAG is the global minimizer of the L0-norm of the vector of root causes. Application to the CausalBench Challenge shows superior performance over the provided baselines.

1 INTRODUCTION

In this work we provide a novel DAG learning method for the CausalBench challenge (Chevalley et al., 2022), where the task is to learn a gene regulatory network from gene expression data. We assume that the data are generated from a linear SEM, but we change the data-generating process compared to prior work on linear SEMs (Bello et al., 2022; Ng et al., 2020; Zheng et al., 2018; Shimizu et al., 2006). In prior work, linear SEMs can be viewed as linearly transforming an i.i.d. random, dense vector of root causes (as we will call them), associated with the DAG nodes, into the actual data on the DAG nodes. In Seifert et al. (2022a;b) the root causes are considered a form of spectrum of the DAG data, with the SEM playing the role of the inverse Fourier transform. In our work we consider the spectrum to be (approximately) sparse, i.e., we assume few root causes, and introduce measurement noise in the output. Intuitively, this captures the situation where the DAG data is produced by few data-generating events whose effect percolates through the DAG.

Contributions. For this competition we provide the following contributions.

• We provide a closed-form solution of the linear SEM equation which expresses the data as the output of a linear transform obtained from the reflexive-transitive closure of the DAG's adjacency matrix. In this form, prior work on linear SEMs assumed a dense, random-valued input vector of root causes, as we call them.
• We pose the new assumption of the input vector being sparse, i.e., the data are generated from a few root causes.
• We propose a novel algorithm for learning a DAG from data with few root causes. It is called SparseRC and is based on the minimization of the L1-norm of the approximated root causes. We provide theoretical guarantees for our method.
• We evaluate SparseRC on the CausalBench competition dataset and show that it offers significant improvement over the provided baselines.

The proofs of our theoretical claims, experimental results on synthetically generated data with few root causes, and a complete exposition of our approach can be found in Misiakos et al. (2023).

2 MOTIVATION

DAG. Consider a DAG G = (V, E) with |V| = d vertices, E the set of directed edges, and no self-loops. The vertices are sorted topologically and we set accordingly V = {1, 2, ..., d}.
Further, we assume a weighted adjacency matrix A = (a_ij), i,j ∈ V, of the graph, where a_ij = 0 if there is no edge. A is upper triangular with zeros on the diagonal and thus A^d = 0.

Linear SEM. Linear SEMs (Peters et al., 2017) formulate a linear data-generating process for a DAG G. A data matrix X ∈ R^{n×d}, consisting of n data vectors (as rows) indexed by the DAG G, satisfies a linear SEM (Ng et al., 2020; Zheng et al., 2018) with independent random noise samples N if

    X = XA + N.    (1)

Transitive closure. Eq. (1) can be viewed as a recurrence for computing the data values X from N. Here, we interpret linear SEMs differently by formulating the closed form of this recurrence. To this end, we define Ā = A + A^2 + ... + A^{d-1}, which is the Floyd-Warshall (FW) transitive closure of A over the ring (R, +, ·) (Lehmann, 1977), and I + Ā the associated reflexive-transitive closure of A. Since A^d = 0 we have (I - A)(I + Ā) = I and thus can isolate X in (1):

Theorem 2.1. The linear SEM (1) computes the data X as

    X = N(I + Ā).    (2)

In words, the data values in X are computed as the output of a linear transform, obtained from the reflexive-transitive closure of A, with the noise values N as input. This linear transform was considered a causal inverse Fourier transform in Seifert et al. (2022a;b), which makes the rows of N the spectra of the data rows in X. Since X is uniquely determined by N, we call the latter the root causes of X.

Few root causes. The equivalence of (1) and (2) motivates us to consider a data generation process that differs in two ways from the prior (2). First, we assume that only a few nodes produce relevant input, which we call C, up to low-magnitude noise N_c. Second, we assume that the measurement of X is subject to noise N_x. The equation generating the data X ∈ R^{n×d} becomes

    X = (C + N_c)(I + Ā) + N_x  ⟺  X = XA + C + N_c + N_x(I - A).    (3)

The root causes C ∈ R^{n×d} represent the actual information, i.e., the relevant input data at each node, which then propagates through the DAG as specified by the SEM to produce the final output data X, whose measurement is subject to noise. Few root causes means that only a few coefficients in C are non-zero and the noises N_c, N_x have negligible magnitude.

Example. Consider a river network, which is naturally represented as a DAG (flow occurs only downstream). The nodes i ∈ V represent cities, and the edges are rivers connecting them. We assume that the cities can pollute the rivers. An edge weight a_ij ∈ [0, 1], (i, j) ∈ E, captures what fraction of a pollutant inserted at i reaches the neighbor j. The data X on the DAG measure the pollution at each node every day; each measurement is the accumulated pollution from all upstream nodes. Within the model, the associated root causes C show the origin of the pollution. Sparsity in C means that each day only a small number of cities pollute. Negligible pollution from other sources is captured by the noise N_c, and N_x models the noise in the pollution measurements. A small simulation of this generative process is sketched below.
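A minimal NumPy sketch of this generative process under the stated assumptions (strictly upper-triangular A, sparse C, low-magnitude noise); the graph density, noise scales, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n, d, p = 1000, 10, 0.1  # samples, nodes, sparsity of root causes

# Random strictly upper-triangular weighted DAG adjacency matrix, so A^d = 0.
A = np.triu(rng.uniform(0.5, 1.0, (d, d)) * (rng.random((d, d)) < 0.3), k=1)

# Reflexive-transitive closure I + A_bar = I + A + A^2 + ... + A^(d-1);
# since A is nilpotent this equals (I - A)^(-1).
closure = np.linalg.inv(np.eye(d) - A)

# Few root causes: each entry non-zero with probability p, uniform on [0, 1].
C = rng.uniform(0, 1, (n, d)) * (rng.random((n, d)) < p)
N_c = 1e-3 * rng.standard_normal((n, d))  # low-magnitude input noise
N_x = 1e-3 * rng.standard_normal((n, d))  # measurement noise

X = (C + N_c) @ closure + N_x  # Eq. (3)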
3 OUR METHOD

We briefly discuss theoretical guarantees (see Misiakos et al. (2023) for all details) and then present our DAG learning method, including the handling of interventions.

Theoretical Guarantees. Our setting, based on the assumption of few root causes, is identifiable:

Theorem 3.1. Assume data generated via the extended linear SEM (3), where the root causes C are independent random variables taking uniform values from [0, 1] with probability p and the value 0 with probability 1 - p. Then (3) translates into a linear SEM with non-Gaussian noise and thus A is identifiable due to Shimizu et al. (2006).

[Figure 1: Wasserstein distance metric (higher is better) for the learned DAGs from the datasets K562 (left) and RPE1 (right) (Replogle et al., 2022) based on the CausalBench framework (Chevalley et al., 2022) with varying interventions.]

Given the data X, we propose the following optimization problem to retrieve the DAG structure:

    min_{A ∈ R^{d×d}} ||X - XA||_0  s.t. A is acyclic.    (4)

Theorem 3.2. Consider a DAG with weighted adjacency matrix A. Given a large enough, but finite, number n of samples X, the matrix A is, with high probability, the global minimizer of the optimization problem (4).

SparseRC. Our method is a continuous relaxation of the discrete optimization problem (4). We substitute the L0-norm in (4) with its convex approximation (Ramirez et al., 2013), the L1-norm. Acyclicity is then captured with the continuous constraint h(A) = tr(e^{A⊙A}) - d from Zheng et al. (2018):

    min_{A ∈ R^{d×d}} (1/2n) ||X - XA||_1  s.t. h(A) = 0.    (5)

Handling interventions. The gene expression data provided by the CausalBench framework can contain interventions, either on all genes or on a fraction of them. An intervention assigns a value to a gene that is independent of the expression of its predecessors. Mathematically, the linear SEM adopting this intervention scheme is formulated as (with ⊙ the elementwise product):

    X = XA ⊙ M + N.    (6)

The masking matrix M ∈ R^{n×d} captures an intervention on gene i by removing the incoming edges of node i. Thus, M consists of all ones, except in row i, which is set to zero. In this case gene i is initialized with noise according to (6) or, more generally, with some root cause value together with noise as in (3). Since the positions of the interventions in the dataset are known, the optimization problem becomes

    min_{A ∈ R^{d×d}} (1/2n) ||X - XA ⊙ M||_1  s.t. h(A) = 0.    (7)

4 CONTEST EVALUATION

Our method works competitively on synthetic data generated with few root causes (see Misiakos et al. (2023)) and also on the gene regulatory network dataset by Sachs et al. (2005). In Fig. 1 we present our performance on the gene interaction network benchmark provided by Chevalley et al. (2022). Our method performs better than the provided baselines (Brouillard et al., 2020) and also exhibits an upward trend, which indicates that it benefits from interventions.

Implementation details. For our method, we construct a PyTorch model consisting of a single linear layer, which represents the weighted adjacency matrix A. Then, given the data X processed in batches and the masking matrix M of interventional positions, we train the model with the Adam optimizer with learning rate λ = 10^-3 to minimize the loss in (7). The final adjacency matrix is thresholded at 0.035, which experimentally resulted in more than a thousand edges, as required by the competition guidelines. A sketch of this training step is given below.
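A minimal PyTorch sketch of objective (7), assuming a mask tensor M with zeros at intervened positions; the hard constraint h(A) = 0 is approximated here by a fixed penalty term, which is a simplification of a proper constrained or augmented-Lagrangian solver, and all names are illustrative.

import torch

d = 100  # number of genes
A = torch.nn.Parameter(torch.zeros(d, d))
optimizer = torch.optim.Adam([A], lr=1e-3)

def h(A):
    # Acyclicity measure from Zheng et al. (2018): zero iff A encodes a DAG.
    return torch.trace(torch.matrix_exp(A * A)) - A.shape[0]

def sparserc_step(X, M, rho=1.0):
    # X: (batch, d) expression data; M: (batch, d) mask, 0 where intervened.
    optimizer.zero_grad()
    recon = torch.abs(X - (X @ A) * M).sum() / (2 * X.shape[0])  # Eq. (7)
    loss = recon + rho * h(A)  # penalty in place of the hard constraint
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, threshold the learned weights to obtain the graph:
# edges = (A.detach().abs() > 0.035)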
Conclusion. We presented a novel form of data generation with linear SEMs based on few root causes, and an associated DAG learning algorithm. Our results on CausalBench suggest that the assumption of few root causes may be biologically relevant, which invites further investigation.

REFERENCES

Kevin Bello, Bryon Aragam, and Pradeep Ravikumar. DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization. arXiv preprint arXiv:2209.08037, 2022.

Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. In Advances in Neural Information Processing Systems, volume 33, pp. 21865-21877. Curran Associates, Inc., 2020.

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.

Daniel J Lehmann. Algebraic structures for transitive closure. Theoretical Computer Science, 4(1):59-76, 1977.

Panagiotis Misiakos, Chris Wendler, and Markus Püschel. Learning DAGs from data with few root causes, 2023.

Ignavier Ng, AmirEmad Ghassami, and Kun Zhang. On the role of sparsity and DAG constraints for learning linear DAGs. Advances in Neural Information Processing Systems, 33:17943-17954, 2020.

Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Elements of Causal Inference: Foundations and Learning Algorithms. The MIT Press, 2017.

Carlos Ramirez, Vladik Kreinovich, and Miguel Argaez. Why l1 is a good approximation to l0: A Geometric Explanation. Journal of Uncertain Systems, 7(3):203-207, 2013.

Joseph M Replogle, Reuben A Saunders, Angela N Pogson, Jeffrey A Hussmann, Alexander Lenail, Alina Guna, Lauren Mascibroda, Eric J Wagner, Karen Adelman, Gila Lithwick-Yanai, et al. Mapping information-rich genotype-phenotype landscapes with genome-scale perturb-seq. Cell, 185(14):2559-2575, 2022.

Karen Sachs, Omar Perez, Dana Pe'er, Douglas A Lauffenburger, and Garry P Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, 2005.

Bastian Seifert, Chris Wendler, and Markus Püschel. Causal Fourier Analysis on Directed Acyclic Graphs and Posets. arXiv preprint arXiv:2209.07970, 2022a.

Bastian Seifert, Chris Wendler, and Markus Püschel. Learning Fourier-Sparse Functions on DAGs. In ICLR 2022 Workshop on the Elements of Reasoning: Objects, Structure and Causality, 2022b.

Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A Linear Non-Gaussian Acyclic Model for Causal Discovery. Journal of Machine Learning Research, 7(72):2003-2030, 2006. URL http://jmlr.org/papers/v7/shimizu06a.html.

Xun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. DAGs with NO TEARS: Continuous optimization for structure learning. Advances in Neural Information Processing Systems, 31, 2018.
gpDOOAOmMe | Submitted to the GSK.ai CausalBench challenge (ICLR 2023)

BETTERBOOST - INFERENCE OF GENE REGULATORY NETWORKS WITH PERTURBATION DATA

Achille Nazaret & Justin Hong
Department of Computer Science
Columbia University
New York, USA
{achille.nazaret,justin.hong}@columbia.edu

ABSTRACT

The introduction of large-scale, genome-wide, single-cell perturbation datasets provides the chance to learn a full gene regulatory network in the relevant cell line. However, existing gene regulatory network inference methods either fail to scale or do not explicitly leverage the interventional nature of this data. In this work, we propose an algorithm that builds upon GRNBoost by adding a step that complements its performance in the presence of labeled, single-gene interventional data. Applying BetterBoost to the CausalBench Challenge, we demonstrate its superiority over the baseline methods in inferring gene regulatory networks from large-scale single-cell perturbation datasets. Notably, BetterBoost exhibits significantly improved performance when non-zero fractions of labeled interventions are available, highlighting the effectiveness of our approach in leveraging interventional data for accurate gene regulatory network inference.

1 INTRODUCTION

The introduction of large-scale, genome-wide, single-cell perturbation datasets (Replogle et al., 2022; Dixit et al., 2016) provides a valuable opportunity to learn comprehensive gene regulatory networks. However, existing methods for gene regulatory network inference fail to scale (Brouillard et al., 2020; Sethuraman et al., 2023) or lack explicit utilization of the interventional nature of this data (Moerman et al., 2019; Passemiers et al., 2022). Methods that fail to scale often have algorithmic complexity issues, such as those encountered when computing the exponential of large matrices. On the other hand, methods capable of handling datasets with over 10,000 genes (Moerman et al., 2019; Passemiers et al., 2022) often treat the data as observational, thereby overlooking the valuable interventional information. While incorporating interventional data can enhance the predictive power of models that treat the data as observational, these models fail to fully exploit causal inference principles that aid in identifying causal relationships. To address these challenges and facilitate the advancement of causal inference methods on single-cell data, the CausalBench framework has been developed (Chevalley et al., 2022), and the CausalBench challenge was organized within the ICLR 2023 Workshop on Machine Learning for Drug Discovery. In this paper, we introduce BetterBoost, our winning method for the CausalBench challenge.

BetterBoost builds on the baselines proposed in the CausalBench framework. Among the scalable models that do not incorporate interventional data, we found that GRNBoost (Moerman et al., 2019) performed the best. GRNBoost defines the target gene's parents as the target's most predictive genes, using a prediction importance score G_i,j from gene i to gene j. We adapted the GRNBoost score G_i,j into a score B_i,j in our proposed method, BetterBoost, which leverages interventional data as a complement to observational data. The score B_i,j reduces to G_i,j when only observational data is available and improves as more interventional data becomes available.

BetterBoost assumes that if the dataset was generated by a causal model, the observed data's joint distribution can be factorized as

    p(x_1, ..., x_G) = ∏_{i=1}^G p(x_i | Pa(x_i)).    (1)
If a candidate gene is a parent of the target, it will be a good predictor for the target, as GRNBoost assumes. But with labeled interventional data, one can attempt to identify the true causal parents of a given observed variable x_i by looking at the effects of interventions on the candidate parents of x_i. In particular, in a sample where a candidate parent gene is knocked down, the perturbed gene will only remain a good predictor for the target gene if it is a true causal parent of the target. Hence, if knocking down a candidate gene leads to a statistically significant change in the prediction of the target gene, this is strong evidence of a causal relationship directed from the candidate parent to the target gene. We leverage the impact of knocking down candidate genes in the prediction importance score of BetterBoost.

We find that BetterBoost performs significantly better than the leading methods GRNBoost (Passemiers et al., 2022) and DCDI (Brouillard et al., 2020) on the provided sample data according to the challenge metric, average Wasserstein distance. Below, we detail the proposed method and go over the preliminary results of BetterBoost and relevant baselines on sample datasets.

2 METHODS

In this section, we restate the objective of the challenge and detail the algorithm, BetterBoost.

2.1 OBJECTIVE

The considered single-cell perturbational datasets each consist of a matrix of UMI counts per cell, X ∈ Z_+^{N×G}, and associated interventional labels, s ∈ {unperturbed, unlabeled, 1, ..., G}^N, for each cell. Note that the interventions affect at most one gene, which can be achieved via high-precision CRISPRi technology (Larson et al., 2013). We denote the fraction of genes g ∈ [G] with labeled interventional data as ρ.

Since ground-truth causal network data does not exist for these datasets, a proposed causal graph is evaluated by the average Wasserstein distance, defined as follows: for each edge (i, j) ∈ Ĝ in the inferred causal graph, the Wasserstein distance is computed between the distribution of X_j in the unperturbed data and in the subset of data where X_i is perturbed. The average Wasserstein distance can therefore be written as

    d(Ĝ) := (1/|Ĝ|) Σ_{(i,j)∈Ĝ} W_1( p(x_j | s = unperturbed), p(x_j | s = i) )    (2)

where W_1 denotes the first Wasserstein distance between two distributions (a sketch of this metric is given below).

The space of valid causal graphs Ĝ is constrained to {Ĝ : |Ĝ| ≥ 1000}, but graphs can otherwise include cycles and disconnected components.
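A minimal sketch of the metric in Equation 2, assuming the count matrix X and label vector s described above; SciPy's one-dimensional wasserstein_distance computes W_1 between empirical samples, while the remaining names are illustrative.

import numpy as np
from scipy.stats import wasserstein_distance

def average_wasserstein(edges, X, s):
    # edges: list of (i, j) pairs; X: (N, G) UMI counts; s: (N,) labels,
    # where s == 'unperturbed' marks observational cells and s == i marks
    # cells in which gene i was knocked down.
    obs = (s == 'unperturbed')
    distances = []
    for i, j in edges:
        perturbed = (s == i)
        if perturbed.sum() == 0:
            continue  # no labeled interventions on gene i
        distances.append(wasserstein_distance(X[obs, j], X[perturbed, j]))
    return float(np.mean(distances))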
2.2 ALGORITHM

We found GRNBoost to work the best in the observational case, i.e., with no labeled interventional data, but it fails to improve on this metric after adding strictly more information in the form of intervention labels. Thus, we developed a simple procedure for leveraging any available intervention labels. As previously mentioned, we assume that the true causal graph G is a directed acyclic graph (DAG), and therefore the joint distribution factorizes as in Equation 1. To identify whether gene j ∈ [G] is a strong candidate parent for a given target gene i ∈ [G], we check whether j is predictive of the target gene i in the dataset formed by the observational data and the interventional data on gene j. For a true causal parent, we expect that when j is knocked down, there will be a statistically significant shift in the distribution of observed UMIs of gene i between observational and interventional data.

Since we held no priors on the nature of the causal effects, we chose to use the Kolmogorov-Smirnov (KS) test (Massey, 1951) to test these distributional shifts between observational and interventional data. Additionally, we used the Benjamini-Hochberg procedure to correct the p-values for multiple testing (Benjamini & Hochberg, 1995).

To formulate the new score used by BetterBoost to rank the impact of gene i on gene j, we write G_i,j for the predictive score of gene i on gene j computed by GRNBoost, and p_i,j for the Benjamini-Hochberg-corrected KS test p-value of the impact of knocking down gene i on gene j. If no interventional data was available on i, we set all p-values p_i,* to 0.05, so as to neither strongly accept nor reject hypotheses for these interactions. We then define the score B_i,j = (-p_i,j, G_i,j), which we sort from larger to smaller (in lexicographic order).

For some desired number of edges K, BetterBoost returns the K_B := min(K, |{(i,j) : B_i,j[0] ≥ -0.05}|) highest-ranked candidate edges, i.e., those with acceptable p-values. The K_B candidate edges will have the smallest KS-test p-values up to 0.05, which can include gene pairs for which no interventional data, and hence no p-value, was available. Since the p-values of these gene pairs were set to 0.05, this ranking in practice favors the edges of pairs with small p-values (obtained from combined interventional and observational data), followed by the edges with the highest GRNBoost scores G_i,j (from observational data only). Typically, this results in more of the final edges being chosen by p-value than by GRNBoost score as more labeled interventional data becomes available. A sketch of this scoring scheme is given below.
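A minimal sketch of this scoring scheme, assuming precomputed GRNBoost scores and the data layout from Section 2.1; SciPy's ks_2samp and statsmodels' multipletests provide the KS test and the Benjamini-Hochberg correction, and the remaining names are illustrative rather than the authors' implementation.

from scipy.stats import ks_2samp
from statsmodels.stats.multitest import multipletests

def betterboost_ranking(X, s, grnboost_scores, intervened_genes):
    # X: (N, G) counts; s: (N,) labels; grnboost_scores: dict {(i, j): G_ij}.
    obs = (s == 'unperturbed')
    pairs, raw_pvals = [], []
    for i in intervened_genes:
        cells = (s == i)
        for j in range(X.shape[1]):
            if j == i:
                continue
            pairs.append((i, j))
            raw_pvals.append(ks_2samp(X[obs, j], X[cells, j]).pvalue)
    # Benjamini-Hochberg correction across all tested pairs.
    corrected = multipletests(raw_pvals, method='fdr_bh')[1]
    pvals = {pair: p for pair, p in zip(pairs, corrected)}
    # B_ij = (-p_ij, G_ij); pairs without interventional data get p = 0.05.
    scores = {(i, j): (-pvals.get((i, j), 0.05), g)
              for (i, j), g in grnboost_scores.items()}
    # Sort lexicographically from larger to smaller, as in the text.
    return sorted(scores, key=scores.get, reverse=True)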
3 RESULTS

We compared BetterBoost to the two suggested baseline methods, GRNBoost and DCDI, on the RPE1 perturbational data from Replogle et al. (2022). The methods were evaluated with varying fractions of available labeled interventional data, ranging from 0.25 to 1.0. In order to comply with the challenge requirements, we chose to return K = 1000 edges. By default, GRNBoost returns all edges with non-zero importance, so we additionally tested against a variant of GRNBoost that only returns the 1000 top-importance edges.

We found that for every fraction ρ of labeled interventional data considered, BetterBoost improved significantly on the average Wasserstein metric. Additionally, we found that the improvement in the metric correlated perfectly with ρ, as shown in Table 1.

Table 1: Average Wasserstein Distance of Methods on the RPE1 Perturb-seq dataset

Method          ρ=0     ρ=0.25  ρ=0.5   ρ=0.75  ρ=1.0
DCDI            0.126   0.126   0.127   0.125   0.130
GRNBoost        0.115   0.106   0.106   0.106   0.106
GRNBoost-1000   0.151   0.147   0.146   0.146   0.145
BetterBoost     0.151   0.398   0.531   0.599   0.636

Remark: We have not tuned DCDI; the reported results are from running the provided baseline.

4 DISCUSSION

Our proposed method, BetterBoost, utilizes labeled interventional data to identify the true causal parents of a given observed variable by looking at the effects of interventions on candidate parents. BetterBoost significantly outperforms the leading methods GRNBoost and DCDI on the provided sample data according to the challenge metric, average Wasserstein distance. In conclusion, our results suggest that BetterBoost is a promising gene regulatory network inference method.

In future work, BetterBoost can be extended to consider the invariance property of causal relationships mentioned previously. Currently, if a chain of strong causal effects x_i → x_j → x_k exists, BetterBoost will likely assign an edge x_i → x_k. However, if the interventional data on x_j is present and labeled, one can identify that an edge does not exist between x_i and x_k. This scenario also exposes a shortcoming of the average Wasserstein metric, which would not penalize the presence of such an edge in the inferred graph.

REFERENCES

Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological), 57(1):289-300, 1995. doi: http://dx.doi.org/10.2307/2346101.

Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. July 2020.

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.

Atray Dixit, Oren Parnas, Biyu Li, Jenny Chen, Charles P Fulco, Livnat Jerby-Arnon, Nemanja D Marjanovic, Danielle Dionne, Tyler Burks, Raktima Raychowdhury, Britt Adamson, Thomas M Norman, Eric S Lander, Jonathan S Weissman, Nir Friedman, and Aviv Regev. Perturb-Seq: Dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens. Cell, 167(7):1853-1866.e17, December 2016.

Matthew H Larson, Luke A Gilbert, Xiaowo Wang, Wendell A Lim, Jonathan S Weissman, and Lei S Qi. CRISPR interference (CRISPRi) for sequence-specific control of gene expression. Nature Protocols, 8(11):2180-2196, November 2013.

F. J. Massey. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68-78, 1951.

Thomas Moerman, Sara Aibar Santos, Carmen Bravo González-Blas, Jaak Simm, Yves Moreau, Jan Aerts, and Stein Aerts. GRNBoost2 and Arboreto: efficient and scalable inference of gene regulatory networks. Bioinformatics, 35(12):2159-2161, 2019.

Antoine Passemiers, Yves Moreau, and Daniele Raimondi. Fast and accurate inference of gene regulatory networks through robust precision matrix estimation. Bioinformatics, 38(10):2802-2809, May 2022.

Joseph M Replogle, Reuben A Saunders, Angela N Pogson, Jeffrey A Hussmann, Alexander Lenail, Alina Guna, Lauren Mascibroda, Eric J Wagner, Karen Adelman, Gila Lithwick-Yanai, Nika Iremadze, Florian Oberstrass, Doron Lipson, Jessica L Bonnar, Marco Jost, Thomas M Norman, and Jonathan S Weissman. Mapping information-rich genotype-phenotype landscapes with genome-scale perturb-seq. May 2022.

Muralikrishnna G Sethuraman, Romain Lopez, Rahul Mohan, Faramarz Fekri, Tommaso Biancalani, and Jan-Christian Hütter. NODAGS-Flow: Nonlinear cyclic causal structure learning. January 2023.
Cx9B85IlEVR | Submitted to the GSK.ai CausalBench challenge (ICLR 2023)

CAUSALBENCH CHALLENGE: DIFFERENCES IN MEAN EXPRESSION

Marcin Kowiel, Wojciech Kotlowski & Dariusz Brzezinski
Institute of Computing Science
Poznan University of Technology
{dbrzezinski,wkotlowski}@cs.put.poznan.pl

ABSTRACT

In this write-up, we describe our solution to the 2023 CausalBench Challenge. We describe our approaches to preprocessing the data, parameterizations of DCDI and GRNBoost, and modifications to the baseline algorithms.

1 DATA PRE-PROCESSING AND POST-PROCESSING

In parallel with developing modifications of the baseline DCDI and GRNBoost algorithms, we considered modifications to the input and output data of these algorithms. In particular, we analyzed good initial values for the gene expression threshold and the output graph size.

Gene expression threshold. The gene expression threshold is used to remove genes that have a non-zero expression in less than a user-defined fraction of the samples. The default value of 0.25 resulted in DCDI performance that was visibly worse than that reported in Chevalley et al. (2022). Therefore, we changed the default expression threshold to 0.5 and used this value in further experiments. Moreover, we omitted samples labeled as 'excluded'.

[Figure 1: Mean Wasserstein distance for different sizes of GRNBoost output graphs.]

Output graph size. The challenge submissions are evaluated based on the mean Wasserstein distance between the expression distributions of connected pairs of nodes in the output graph. Seeing that not all pairs are equally important, and that methods such as GRNBoost rely on sorting pairs according to importance and then selecting only a subset of them using a threshold, we assumed that smaller graphs would be more likely to have a higher mean Wasserstein distance. To verify this hypothesis, we plotted the mean Wasserstein distance for GRNBoost graphs of different sizes. As can be seen in Figure 1, the mean Wasserstein distance indeed decreases as the number of edges in the graph grows. Although GRNBoost does not perfectly sort gene pairs according to differences between expression distributions, the results are still very good. Therefore, in further experiments, we always limited the number of edges to 1,000, which was the smallest output graph allowed by the competition rules.

2 TESTED APPROACHES

In this section, we discuss the successive models we tested while preparing our final CausalBench Challenge submission.

DCDI and GRNBoost baselines. Before designing modifications, we ran experiments on DCDI (Brouillard et al., 2020) and GRNBoost (Huynh-Thu et al., 2010) with different parameters. As mentioned in the previous section, we finally settled on a gene expression threshold of 0.5 and an output graph consisting of 1,000 edges. We also tested different versions of DCDI-G (which offered better performance than DCDI-DSF). As can be seen in the left panel of Figure 4, our results on the RPE dataset (Tsherniak et al., 2017) are in accordance with those presented in Chevalley et al. (2022), i.e., DCDI-G offers the best performance, followed by GRNBoost. These results served as a reference point for our modifications.

GRNBoost with intervention encoding. Our first modification involved adding information about interventions to GRNBoost. GRNBoost creates multiple regressors, each one predicting the expression value of a gene based on the expression values of the other genes.
In its original form, GRNBoost treats all samples equally and has no notion of gene interventions. The first and simplest modification involved changing the expression value of perturbed genes to -100 (Figure 2, left panel). By doing so, our goal was to differentiate between interventions and the naturally occurring zero expression of a given gene. Since GRNBoost relies on regression trees, we did not worry about the concrete intervention encoding value, as long as it separated interventions from observational values; hence we only tested the value -100. The experimental results of this modification on the RPE dataset are presented in the right panel of Figure 4. As can be seen, the intervention encoding strategy offered slightly better performance than the baseline GRNBoost.

[Figure 2: Schematic of data modifications performed to introduce intervention information to GRNBoost. Each table presents the dataset used to train one regressor to predict the expression of gene Z based on expression values of genes X and Y.]

GRNBoost with intervention flag columns. The approach described in the previous paragraph has a downside in that the intervention encoding value -100 hides the true expression of the gene in the sample, thus removing some information from the dataset. Therefore, as our second modification, instead of replacing expression values, we added a set of columns with binary flags determining whether a particular gene was perturbed in a given sample (Figure 2, center panel). Somewhat surprisingly, this strategy of extending the dataset performed worse than intervention encoding (Figure 4).

GRNBoost with only intervention flag columns. Since extending the dataset with more columns seemed to have added more noise, we also tried another strategy, one wherein we discarded the expression values altogether and left only the intervention flags (Figure 2, right panel). This GRNBoost modification worked significantly better than the previous two (Figure 4). Since using only binary (one-hot) intervention flags to predict expression boils down to estimating the means of sub-populations of the dataset, we decided to test strategies that estimate mean expression directly.

Mean expression estimation. We measured the strength of the causal relationship X→Y for every gene pair X, Y for which interventions on X were available. To this end, we separately calculated for gene Y its mean expression values Ȳ_O and Ȳ_X on the observational data and on the interventional data concerning perturbations of X, respectively. The difference in means, |Ȳ_O − Ȳ_X|, was used to measure the strength of the relationship, and then to sort gene pairs and select the 1,000 pairs with the largest differences (a sketch is given below). This simple approach, which is essentially a regression model of Y on the intervention flag of X, turned out to significantly outperform all previously tested strategies on the RPE dataset, as seen in Figure 3. We note that for mean difference estimation, we did not employ any gene expression threshold.

[Figure 3: Comparison of all algorithms. Note that both mean difference methods (Mean diff, Mean diff Bayes) have practically the same performance.]
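A minimal sketch of this mean-difference estimator, assuming the intervention labels are the column indices of the perturbed genes (with 'unperturbed' and 'excluded' as special labels); all names are illustrative.

import numpy as np

def mean_difference_edges(X, s, n_edges=1000):
    # X: (N, G) expression matrix; s: (N,) intervention labels.
    obs_mean = X[s == 'unperturbed'].mean(axis=0)  # Y_bar_O for every gene
    scores = {}
    for x in set(s) - {'unperturbed', 'excluded'}:
        interv_mean = X[s == x].mean(axis=0)  # Y_bar_X under perturbation of x
        for y in range(X.shape[1]):
            if y != x:
                scores[(x, y)] = abs(obs_mean[y] - interv_mean[y])
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_edges]  # the 1,000 strongest gene pairs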
Mean expression estimation with Bayesian correction. Since some of the interventions contained few samples, we decided to correct the mean expression value on the interventional data, Ȳ_X, by employing a Bayesian estimator, treating Ȳ_O as the prior mean and the variance of Y on the observational data, Var(Y_O), as the prior variance. This effectively boils down to expressing the difference in means as c_XY |Ȳ_O − Ȳ_X|, with the Bayesian correction factor

    c_XY = Var(Y_O) / ( Var(Y_O) + Var(Y_X)/n_X ),

where Var(Y_X) is the variance of Y on the interventional data concerning perturbations of X, and n_X is the number of samples in that intervention. Since c_XY ≤ 1 and increases with n_X, this has the effect of discounting the mean differences for small interventional datasets. However, the Bayesian estimation brought only an insignificant improvement over the previous approach, essentially returning an almost identical set of top 1,000 pairs (Figure 3). A sketch of the corrected estimator is given below.
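A minimal sketch of the corrected estimator for a single gene pair; names are illustrative.

import numpy as np

def corrected_difference(y_obs, y_interv):
    # y_obs: expression of gene Y in observational cells;
    # y_interv: expression of gene Y in cells where gene X was perturbed.
    n_x = len(y_interv)
    c_xy = np.var(y_obs) / (np.var(y_obs) + np.var(y_interv) / n_x)
    # c_xy <= 1 shrinks the score when the intervention has few samples.
    return c_xy * abs(y_obs.mean() - y_interv.mean())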
Considering all of the experimental results (Supplementary Table 1) and the above analyses, our final submission consisted of omitting samples labeled as 'excluded', estimating the mean expression of genes for each intervention, and selecting the 1,000 gene pairs with the largest expression differences.

3 DISCUSSION

Our final submission consisted of a very simple algorithm that estimates the mean expression of genes when a different gene is intervened upon. The reason why we settled for such a simple method rather than a more elaborate one stems from three factors:

1. the fact that this is a competition, not an exploratory analysis;
2. the format of the training and testing data;
3. the competition's evaluation metric.

The first factor is obvious: since we are participating in a competition, discovering new, interesting causal gene relationships becomes less important than achieving the best performance according to the competition rules. During the exploratory analysis and tests of various approaches, we realized that every step which led to performance improvements was essentially pulling a given method towards estimating the difference in means between the observational and the interventional data. Therefore, we eventually decided to use mean estimation as the sole method for causal graph edge prediction. The second factor, the data format, required us to predict interactions only between genes that were present in the input data and which, in most cases, had interventions. Without expression data for genes without interventions, there was no reason to predict causality between unperturbed genes. Finally, for a well-behaved predictor, the value of the competition evaluation metric will decrease as the number of predicted edges increases; therefore, it was always optimal to predict as few gene interactions as the competition allowed.

The above factors made our submission much simpler, but also much less applicable to industry needs. To alleviate the above-mentioned issues, we believe it would be necessary to require contestants to predict causal relations between genes that have interventions as well as between pairs that are purely observational. For that to be possible, the input data should have more genes without any observations, and the algorithm should receive as input the pairs of genes it is going to be evaluated on. With such a setup, the organizers would be able to force predictions on observational genes from the training data and evaluate them based on held-out interventional data. By prespecifying which gene pairs the algorithm is supposed to assess, the problem of predicting the smallest possible graph would also disappear. In general, gene pairs could be evaluated in three cross-validation or holdout settings (cv1, cv2, cv3), as proposed for synthetic lethality pairs by Wang et al. (2022).

REFERENCES

Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, and Alexandre Drouin. Differentiable causal discovery from interventional data. Advances in Neural Information Processing Systems, 33:21865-21877, 2020.

Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec, and Patrick Schwab. CausalBench: A large-scale benchmark for network inference from single-cell perturbation data. arXiv preprint arXiv:2210.17283, 2022.

Vân Anh Huynh-Thu, Alexandre Irrthum, Louis Wehenkel, and Pierre Geurts. Inferring regulatory networks from expression data using tree-based methods. PLoS ONE, 5(9):e12776, 2010.

Aviad Tsherniak, Francisca Vazquez, Phil G Montgomery, Barbara A Weir, Gregory Kryukov, Glenn S Cowley, Stanley Gill, William F Harrington, Sasha Pantel, John M Krill-Burger, et al. Defining a cancer dependency map. Cell, 170(3):564-576, 2017.

Shike Wang, Yimiao Feng, Xin Liu, Yong Liu, Min Wu, and Jie Zheng. NSF4SL: negative-sample-free contrastive learning for ranking synthetic lethal partner genes in human cancers. Bioinformatics, 38(S2):ii13-ii19, 2022.

A APPENDIX

[Figure 4: Mean Wasserstein distance of baseline algorithms and modifications of GRNBoost on the RPE dataset. Left panel: baseline algorithms (DCDI-G and GRNBoost). Right panel: GRNBoost modifications.]

Table 1: Experimental results of all the algorithms on the RPE dataset (mean Wasserstein distance by fraction of intervention data).

Algorithm                            0.25    0.50    0.75    1.00
DCDI-G                               0.1771  0.1755  0.1890  0.1845
GRNBoost                             0.1462  0.1471  0.1473  0.1520
GRNBoost intervention encoding       0.1669  0.1679  0.1662  0.1548
GRNBoost expression + intervention   0.1510  0.1513  0.1604  0.1598
GRNBoost only intervention flag      0.3913  0.4995  0.5542  0.5855
Mean diff                            0.4697  0.6357  0.7541  0.8130
Mean diff Bayes                      0.4699  0.6354  0.7542  0.8128
GtyQbLUUagE | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

Full Stack Optimization of Transformer Inference

Sehoon Kim*1, Coleman Hooper*1, Thanakul Wattanawong1, Minwoo Kang1, Ruohan Yan1, Hasan Genc1, Grace Dinh1, Qijing Huang2, Kurt Keutzer1, Michael W. Mahoney134, Yakun Sophia Shao1, Amir Gholami13
1University of California, Berkeley  2NVIDIA  3ICSI  4LBNL
*Equal contribution. sehoonkim@berkeley.edu, chooper@berkeley.edu

Abstract—Recent advances in state-of-the-art neural network architecture design have been moving toward Transformer models. These models achieve superior accuracy across a wide range of applications in computer vision, natural language processing, and speech recognition. This trend has been consistent over the past several years since Transformer models were originally introduced. However, the amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate, and this has made their deployment in latency-sensitive applications challenging. As such, there has been an increased focus on making Transformer models more efficient, with methods that range from changing the architecture design all the way to developing dedicated domain-specific accelerators. In this work, we pursue a full-stack approach to optimizing Transformer inference. We analyze the implications of the Transformer architecture on hardware, including the impact of nonlinear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, and we use this analysis to optimize a fixed Transformer architecture. We assess the challenges with finding the right mapping and scheduling of operations for Transformer models, and pursue neural architecture search to further optimize the Transformer network. We find that a full-stack co-design approach with the aforementioned methods can result in up to 88.7× end-to-end speedup with minimal performance degradation for Transformer inference. More details can be found in our full paper [27], which includes (1) a comprehensive analysis of Transformer workloads, (2) an extensive survey of current hardware and software solutions for efficient Transformer inference, and (3) case studies quantifying the advantages of co-design and co-optimization techniques across the stack for full-stack Transformer inference.

I. INTRODUCTION

Deep learning models have scaled up to billions of parameters and billions of multiply-accumulate operations during both training and inference. As a result, there has been a growing interest in computing these models efficiently and in deploying these compute- and memory-intensive workloads on resource-constrained edge devices. These edge devices have tight energy and memory constraints, and the corresponding applications that leverage deep learning models also often have real-time latency constraints.

The demand for fast and efficient computation, coupled with the characteristics of deep learning workloads that involve a small set of distinct operations with substantial data reuse, has led to the use of hardware accelerators. A multitude of enterprise deep learning accelerators, such as [1], [3], [17], [23], [25], [28]-[30], [37], [44], [46], have been developed and integrated into commodity hardware by industry in the past decade. This parallels many research accelerators developed in academia [7]-[10], [16], [18]-[20], [36].
Together with hardware accelerator development, the software frameworks [2], [5], [24], [34] and compilers [6], [32], [42] for deploying various deep learning algorithms have also been enhanced and matured. These tools enable the execution of deep learning algorithms on accelerators, and they perform mapping optimizations to improve the performance and efficiency of the full deep learning pipeline. Nonetheless, fast-evolving deep learning algorithms keep introducing new demands for hardware and software support, as well as their co-optimization, to satisfy various deployment constraints.

The recent rise in popularity of Transformers and large language models [4], [12], [14], [15], [21], [38]-[41], [43], [45] for solving various natural language processing (NLP) tasks presents a brand new set of challenges in the design of accelerators as well as frameworks. There has been an increased focus on making Transformer inference more efficient, especially due to their growing size and run-time complexity. However, there is still a lack of understanding of the workload characteristics of Transformer architectures, and thus of the design principles necessary for effectively running these models, compared to the more well-known convolutional neural network (CNN) architectures. For instance, compared to the conventional CNN-focused design, Transformers are mostly composed of matrix multiplications (matmuls) together with memory-intensive nonlinear operations. In addition, the computational graph and dataflow of Transformer models are more complex than those of CNNs, with more types of operation nodes, as well as more dataflow splits and concatenations. All these challenges require us to undertake a comprehensive analysis of the current hardware and software solutions as well as the various design trade-offs for Transformer inference. Our analysis yielded several key findings:

- We adapt Gemmini [19], which was originally designed for CNN workloads, for Transformer inference. Without modifications, the primary bottleneck for running Transformers on CNN accelerators is the time spent on floating-point nonlinear operations. However, by adapting Gemmini to support an integer-only BERT variant [26], and tuning the memory configuration, we improve performance by 39.6x.
- Fusing BatchNorm with the neighboring convolution in CNNs is straightforward. However, the benefit of fusing operations in the Transformer architecture with the preceding matmuls depends on the particular operation, as fusion can impose constraints on the mapping, leading to runtime costs that outweigh the gains from operator fusion.
- We apply automated neural architecture search (NAS) to search for efficient and high-performance Transformer architectures on Gemmini-driven hardware. NAS finds an architecture that improves EDP by 10.6x with minimal degradation on the target benchmark. Combined with the hardware improvements, we achieve an 88.7x end-to-end speedup.

Fig. 1: Map of the computations performed in (Top) the multi-head attention (MHA) module and (Bottom) the feed-forward network (FFN) module in the Transformer encoder block.
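To make the dimension bookkeeping of Fig. 1 concrete, the following NumPy sketch traces one attention head and the FFN of a single encoder layer. The BERT-Base values l = 512, d = 768, h = 12 (so d/h = 64 and an FFN dimension of 4d = 3072) are standard, and the tanh form of GELU used here is one common approximation; this is an illustration of the shapes, not the paper's accelerator kernels.

    import numpy as np

    l, d, h = 512, 768, 12                      # sequence length, hidden dim, heads
    dh = d // h
    x = np.random.randn(l, d)                   # encoder input, shape (l, d)

    # MHA (one head): per-head projections are (d, d/h); scores are (l, l)
    wq, wk, wv = (np.random.randn(d, dh) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv            # each (l, d/h)
    scores = q @ k.T / np.sqrt(dh)              # (l, l): the query x key matmul
    probs = np.exp(scores - scores.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)       # row-wise softmax
    head_out = probs @ v                        # (l, d/h)

    # FFN: two matmuls with a 4x expansion and GELU in between
    w1, w2 = np.random.randn(d, 4 * d), np.random.randn(4 * d, d)
    hdn = x @ w1                                # (l, 4d)
    gelu = 0.5 * hdn * (1 + np.tanh(np.sqrt(2 / np.pi) * (hdn + 0.044715 * hdn**3)))
    ffn_out = gelu @ w2                         # (l, d)

Note how the query x key matmul produces an l x l intermediate that is larger than either of its l x d/h inputs; this shape asymmetry drives the memory-configuration discussion in Sec. II.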
II. HARDWARE ARCHITECTURE OPTIMIZATION

We first illustrate how architects familiar with mainstream accelerators for convolutional, vision-based workloads can design state-of-the-art Transformer accelerators. We start with a fairly typical CNN accelerator generated by the Gemmini [19] accelerator-generator, optimized primarily for ResNet50-like workloads, and we discuss changes we made to this accelerator and its software stack to efficiently support Transformer workloads such as BERT. Throughout this section, we use BERT-Base as the workload. For more details, please refer to Section 3 of our full paper [27].

1) Baseline Accelerator: We first generate a fairly typical CNN accelerator with a 16x16 systolic array and the weight-stationary dataflow using the Gemmini accelerator-generator. The 8-bit integer weights and inputs are stored in a 256 kB local scratchpad memory, and the 32-bit partial sums are stored in a dual-ported 64 kB accumulator SRAM which performs matrix additions. When DNN layers are too large to fit into the local scratchpad, they fall back onto an external L2 cache and DRAM, which are shared with CPUs and other accelerators on the system-on-chip (SoC). A host CPU tiles such layers to compute the full outputs. The baseline accelerator produced by Gemmini incorporates peripheral circuitry that enables the execution of ReLU and max-pool operations, alongside integer-float multipliers that scale 32-bit partial sums into 8-bit inputs for the subsequent layer. Native support for these operations is important, as it eliminates the need to offload them to the host CPUs, thereby circumventing costly transfers of activations between DRAM or outer caches and the local scratchpad. Finally, note that this baseline CNN accelerator does not include any Transformer-specific features. In particular, there is no support for nonlinear normalization operations such as GELU, Softmax, or LayerNorm. Therefore, although it achieves real-time or near-real-time performance on end-to-end CNN workloads, its performance on Transformer workloads such as BERT is severely limited [19], as will be discussed in more detail.

2) Performance Bottlenecks: We have observed that the baseline CNN accelerator, when deployed for Transformer inference, exhibits <1% utilization of its functional units. Although individual matmuls exhibit 74% utilization, performance is severely impeded by the nonlinear operations that must be executed on the host CPU, as they are not natively supported by the accelerator. This is further exacerbated by the fact that the nonlinear operations require floating-point arithmetic. Not only is this less energy- and latency-efficient than its integer counterpart [22], it also entails dequantization and re-quantization of the activations. These overheads account for 96% of the overall execution time (Fig. 2). Given that the majority of FLOPs in Transformer inference are matmuls, the time spent on nonlinear operations in the baseline accelerator is far from the theoretical optimum, unless further optimizations are implemented.

In contrast to the convolutions in CNNs, which exhibit high arithmetic intensity, Transformers mostly comprise matmuls, often with small and/or rectangular matrices, which translates to lower arithmetic intensities and different optimal tiling strategies. This indicates that the memory hierarchy and memory bandwidth of our baseline CNN accelerator need to be recalibrated for more efficient Transformer inference.
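Taken at face value, the quoted percentages already bound what is achievable. The following back-of-the-envelope check (a sketch that treats the 96% overhead figure above and the 36% matmul latency reduction reported in Sec. II-3 below as exact) shows how the reported 39.6x end-to-end improvement decomposes:

    overhead_frac = 0.96            # CPU-offloaded nonlinear ops + de/re-quantization
    matmul_frac = 1 - overhead_frac

    # Eliminating the overhead alone bounds the speedup at 1 / 0.04 = 25x (Amdahl)
    speedup_overhead_only = 1 / matmul_frac

    # Sec. II-3 additionally cuts total matmul latency by 36% via memory re-tuning
    matmul_speedup = 1 / (1 - 0.36)
    total_speedup = 1 / (matmul_frac / matmul_speedup)

    print(speedup_overhead_only, total_speedup)   # 25.0 and ~39.1, close to 39.6x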
3) Memory Configuration Re-adjustment: We have observed that the performance of BERT matmul operations can be significantly improved by adjusting the sizes of the input/weight scratchpad and the partial-sum accumulator. Specifically, we have found that larger accumulators with higher output reuse are more suitable for several matmuls in Transformers, such as the query x key matmuls, whose l x l output activation matrices can be much larger than their l x d/h input matrices, for l, d, and h the sequence length, hidden dimension, and number of heads, respectively. Based on this observation, we modified the CNN-optimized memory configuration of our baseline accelerator by reducing the size of the scratchpad from 256 kB to 64 kB and increasing the size of the accumulator from 64 kB to 256 kB. Importantly, these changes do not increase the total SRAM capacity or the total area; however, they result in a substantial 36% reduction in total matmul latency.

4) Hardware-Software Co-Design: To alleviate the overhead incurred by runtime quantization and dequantization, as well as the offloading of nonlinear operations to the CPU, we transitioned our baseline Transformer workload from a naive BERT implementation, where only matmuls are quantized, to an integer-only BERT variant known as I-BERT [26]. I-BERT substitutes floating-point nonlinear operations with integer polynomial approximations, which can be implemented faster and more efficiently in specialized accelerators. To incorporate I-BERT, we add new integer implementations of I-BERT's GELU, LayerNorm, and Softmax variants to our baseline CNN accelerator. The 32-bit matmul results residing in the accumulator are fed into a newly added "normalization unit" which computes the reduction operations (e.g., sum, sum-of-squares, max) used by LayerNorm and Softmax. Multiple passes of accumulator reads are required to compute all the reductions in these operations. Subsequently, the matmul results in the accumulator undergo a final read to be fed into a set of 16 activation units, which compute I-BERT's nonlinear variants in parallel.

With these new features, overall end-to-end BERT inference performance improved by 39.6x over the baseline accelerator's initial performance. As Fig. 2 illustrates, the computational bottleneck once again became the matmuls rather than the normalization or activation functions. Quantization and dequantization are no longer necessary, and GELU can be trivially fused with the preceding matmul so that they become one pipelined operation. When synthesized with the ASAP7 PDK [13], the new hardware units increased the total area of the accelerator by only 14%, and the GELU, LayerNorm, and Softmax operations increased the power consumption of a BERT inference by only 9.3%.

Fig. 2: The time breakdown of a BERT inference with a sequence length of 512, when running on (Left) the baseline CNN accelerator (Matmul 1%, Softmax 19%, LayerNorm 4%, Resadd 11%, De/Quantization 49%, GELU 10%) and (Middle) the accelerator with I-BERT's hardware/software features incorporated (Matmul+GELU 87%, Softmax 3%, LayerNorm 7%, Resadd 4%). (Right) The time breakdown for different sequence lengths after the change. For all sequence lengths, the total execution time is dominated by matmuls.
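A quick byte count illustrates why shifting SRAM from the input scratchpad to the accumulator pays off for the query x key matmuls described in Sec. II-3. The sketch below assumes the BERT-Base shapes (l = 512, d = 768, h = 12) with 8-bit inputs and 32-bit partial sums, matching the configuration above:

    l, d, h = 512, 768, 12
    dh = d // h

    q_bytes   = l * dh * 1      # int8 query tile for one head: 32 KiB
    k_bytes   = l * dh * 1      # int8 key tile: another 32 KiB
    out_bytes = l * l * 4       # int32 l x l partial-sum matrix: 1 MiB

    print(q_bytes / 1024, k_bytes / 1024, out_bytes / 1024)   # 32.0 32.0 1024.0
    # The output dwarfs the inputs, so spending SRAM on a 256 kB accumulator
    # (output reuse) beats a 256 kB input scratchpad for these matmuls.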
III. SCHEDULING OPTIMIZATION

In Sec. II, we demonstrated that the nonlinear operations in Transformers introduce challenges to efficient accelerator design. We further find that these operations present non-trivial challenges to the scheduling problem as well. In this section, we provide a brief overview of those challenges. For more details, please refer to Section 5 of our full paper [27].

Generally in DNN scheduling, it is an enticing strategy to fuse relatively high-arithmetic-intensity matmuls with the following low-arithmetic-intensity normalization operations. For example, execution schedulers for CNN-type accelerators often fuse convolutions with ReLU or max-pool operations. This strategy is especially applicable to quantized workloads, where partial sums awaiting normalization often have a higher bitwidth than the final normalized outputs. Similarly, for Transformer encoders, we could overlap the execution of normalization operations (LayerNorm and Softmax) with their preceding matmuls. However, this strategy may require hardware/software changes. First, in the case of DNN accelerators like Gemmini, additional hardware support may be required for normalization units to directly access partial sums. Second, appropriate constraints on the matmul execution schedule are necessary. In particular, the tiling factor of either output dimension of the matmul must be maximized, so that rows/columns are immediately ready and stored in the Gemmini accumulator scratchpad for computing the mean and standard deviation. We refer to this alternate scheduling approach as fusion-optimized scheduling.

Fig. 3: (Left) Impact of fusion-optimized scheduling on MHA execution. Hiding the Softmax latency via fusion-optimized scheduling improves overall MHA latency by 78%, but overlapping the Wout projection with LayerNorm can hurt total latency. (Right) Impact of fusion-optimized scheduling on the FFN matmul, which enables latency hiding of the LayerNorm operation. We observe that fusion-optimized scheduling hurts total latency by 27%. In both cases, we assume an input sequence length of 512 and an accumulator size of 256 kB.

In Fig. 3, we take a deeper look at the performance implications of fusion-optimized scheduling for the BERT-Base encoder. We model the total latency of each adjacent pair of matmul and LayerNorm/Softmax operations via Timeloop [33], with the target hardware being the I-BERT-modified Gemmini described in Sec. II. Opportunities for overlapping computation include: (1) the MHA query x key matmul and following Softmax; (2) the MHA Wout projection and following LayerNorm; and (3) the FFN W2 projection and following LayerNorm. The two scheduling strategies we compare are: (1) fusion-optimized scheduling; and (2) Gemmini's default heuristic-based scheduler, which greedily maximizes loop tile factors at the local SRAM level for each of the three matmul dimensions. We refer to this second, default approach as non-fused scheduling.

The left plot of Fig. 3 showcases the promise of matmul and nonlinear operator fusion within the MHA. With Gemmini on-chip scratchpad and accumulator SRAM sizes of 256 kB, we observe that it is advantageous to fuse the query x key matmuls with Softmax for each attention head and thereby hide the relatively high latency of executing the Softmax operation. Assuming an input sequence length of 512, the Softmax latency is significant compared to the matmul, taking up around 78% of the total cycles and contributing greatly to the total latency.
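The trade-off can be summarized with a toy latency model, sketched below. The cycle numbers are illustrative stand-ins rather than measurements from Fig. 3, and the 1.27x factor mimics the reported effect of constrained tiling on the FFN matmul (discussed next); the point is only the structure: fusion wins when the hidden nonlinear op is expensive relative to the matmul slowdown it induces.

    def latency(matmul, nonlinear, fused, matmul_slowdown=1.0):
        """Toy model: fusion overlaps the nonlinear op with the matmul, but the
        tiling constraints it imposes can slow the matmul itself down."""
        if not fused:
            return matmul + nonlinear
        return max(matmul * matmul_slowdown, nonlinear)

    # MHA-like case: the nonlinear op is expensive, so hiding it pays off
    print(latency(60, 40, fused=False), latency(60, 40, fused=True))   # 100 -> 60

    # FFN-like case: LayerNorm is cheap, so the constrained matmul dominates
    print(latency(95, 5, fused=False),
          latency(95, 5, fused=True, matmul_slowdown=1.27))            # 100 -> 120.65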
On the other hand, the right plot of Fig. 3 shows the results of overlapping the matmul with LayerNorm in the FFN W2 projection. Here, we observe that fusion-optimized scheduling worsens total latency by 27%. When scheduling the FFN, we find that at the BERT-Base scale it is consistently favorable to overlap the MHA query x key with the ensuing Softmax, but consistently disadvantageous to chain the FFN W2 projection matmul with LayerNorm. This is in contrast with previous studies on GPU kernel fusion for Transformers [11], [35], and it highlights how scheduling Transformer matmuls becomes more complex when targeting different styles of custom hardware designs, including the Gemmini accelerator.

IV. NEURAL ARCHITECTURE OPTIMIZATION

Another important avenue in full-stack optimization of DNNs is optimizing DNN architectures and tailoring them to specific hardware platforms. However, the exponential search space of DNN architectures often makes it challenging to find an optimal architecture, even without considering the underlying hardware. To address this issue, automated neural architecture search (NAS) methods have been proposed to adapt DNNs to given hardware constraints. In this regard, we apply hardware-aware NAS to search for Transformer architectures with better efficiency and performance trade-offs on the Gemmini-driven accelerator. For a more detailed overview of hardware-aware NAS and its application to Transformer architectures, please refer to Section 6 of our full paper [27].

1) Experiment Setup: As a baseline architecture, we use a 6-layer Transformer with all other model configurations identical to BERT-Base. We use language modeling on WikiText-2 [31] as the training objective. To evaluate model performance, we measured perplexity on the validation examples, where lower scores indicate better performance. The stand-alone baseline model was trained for 50 epochs with the Adam optimizer and linear learning rate scheduling with a peak learning rate in {5, 2, 1, 0.5} x 10^-5. We use a sequence length of 512 and a batch size of 16.

For NAS, we adopt the BigNAS [47] strategy to train a supernet using the same training hyperparameters as the stand-alone training. The NAS search space comprises combinations of the number of layers in {3, 4, 5, 6}, number of heads in {4, 6, 8, 10, 12}, hidden dimension in [384, 768], and FFN dimension in [768, 3072]. Subsequently, we use evolutionary search for 40 iterations with a population size of 40 and a mutation probability of 0.2 to find optimal subnets within the fully trained supernet. After every iteration, only the subnets that are Pareto-optimal in EDP (energy-delay product) and perplexity are retained. To measure the hardware cost, we use a lookup-table-based method to quickly assess the latency and energy consumption of each subnet on the target hardware, instead of time-consuming RTL simulation. The lookup table contains Timeloop [33] simulated latency and energy numbers for each operation, which are summed to estimate end-to-end values for entire subnets. After the evolutionary search, the Pareto-optimal subnets are evaluated with an RTL simulator to obtain a more precise latency estimate. For the energy measure, we continue to use the numbers from Timeloop. For the target hardware, we use Gemmini with the optimizations applied in Sec. II.

Fig. 4: (Left) EDP-perplexity, (Middle) latency-perplexity, and (Right) energy-perplexity plots of the Transformer architectures found via evolutionary search on our Gemmini hardware configuration (scratchpad 64 kB, accumulator 256 kB). Lower perplexity indicates better performance of the trained models. For better comparison, we additionally plot lines illustrating +0.1 and +1 point perplexity degradation.
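Concretely, the search loop just described can be sketched as follows. Here `evaluate` is a stand-in for the lookup-table EDP estimate plus the supernet perplexity measurement, and the step sizes used to discretize the hidden and FFN dimension ranges are assumptions for illustration:

    import random

    SPACE = {"layers": [3, 4, 5, 6],
             "heads": [4, 6, 8, 10, 12],
             "d_model": list(range(384, 769, 64)),    # step size is an assumption
             "d_ffn": list(range(768, 3073, 128))}    # step size is an assumption

    def sample():
        return {k: random.choice(v) for k, v in SPACE.items()}

    def mutate(cfg, p=0.2):
        # Resample each dimension of the sub-network config with probability p
        return {k: (random.choice(v) if random.random() < p else cfg[k])
                for k, v in SPACE.items()}

    def dominated(c, pop):
        # c is dominated if some other config is no worse in both objectives
        # and strictly better in at least one (lower EDP and perplexity are better)
        return any(o["edp"] <= c["edp"] and o["ppl"] <= c["ppl"]
                   and (o["edp"] < c["edp"] or o["ppl"] < c["ppl"]) for o in pop)

    def evolve(evaluate, iters=40, pop_size=40):
        # evaluate(cfg) returns {"cfg": cfg, "edp": ..., "ppl": ...}
        pop = [evaluate(sample()) for _ in range(pop_size)]
        for _ in range(iters):
            parents = random.choices(pop, k=pop_size)
            pop += [evaluate(mutate(p["cfg"])) for p in parents]
            pop = [c for c in pop if not dominated(c, pop)]   # keep the Pareto front
        return pop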
2) Experiment Results: We show the NAS Pareto frontiers for EDP, latency, and energy in Fig. 4 (blue curves), where each point corresponds to a different Transformer architecture found by the evolutionary search algorithm. Additionally, we plot the stand-alone trained baseline Transformer as a reference (x mark). As can be seen in the EDP plot (Fig. 4, left), the NAS framework obtains multiple Transformer architectures with better hardware-cost-to-perplexity trade-offs; that is, it finds architectures with similar or even better perplexity than the baseline at smaller hardware cost.

Fig. 4 (middle and right) further illustrates latency and energy separately. As one can see, it is possible to attain a 1.4x reduction in latency versus the baseline Transformer with 0.1 point of perplexity degradation. If one can tolerate a 1 point degradation in perplexity, latency can be reduced by 2.4x. With regard to energy, one can attain a 1.6x improvement at 0.1 point of perplexity degradation, and 4.4x when allowing a 1 point degradation. Taking both together, it is possible to reduce EDP by 2.2x with just 0.1 point of perplexity degradation, and 10.6x with 1 point of perplexity degradation. These examples illustrate the power of co-design in allowing practitioners to choose a combination that best matches their needs. It is important to note that this represents a single run of our co-design methodology on a specific hardware platform, and results may vary depending on the target hardware and optimization goals.

V. CONCLUSION

While Transformer models have shown significant performance improvements, their growing size and run-time complexity present a critical challenge for efficient inference. In this work, we have demonstrated the benefits of a full-stack approach, leveraging the advantages of co-design and co-optimization techniques across the stack. We adapted a CNN-oriented accelerator to efficient Transformer inference by supporting integer-only nonlinear operations [26] and rebalancing the memory hierarchy, which yielded a 39.6x latency reduction. We also applied NAS to search for Pareto-optimal Transformer architectures given the trade-off between EDP and perplexity, leading to a 10.6x EDP reduction with minimal performance drop.
Altogether, we have exhibited a88.7×latency improvement without a noticeable performancedrop compared to a naive implementation without full-stackconsiderations. We have also demonstrated that unlike inCNNs, nonlinear operations in Transformers require carefulconsideration when performing operator fusion when targetingcustom accelerators, e.g. systolic-array based architectures. Weexpect more improvement when we take this into consider-ation when designing the end-to-end full stack optimizationpipeline. We refer interested readers to our full paper [27],which includes (1) a comprehensive analysis of Transformerworkloads, (2) an extensive survey of the current hardwareand software solutions on efficient Transformer inference,and (3) case studies to quantify the advantages of co-designand co-optimization techniques across the stack on full-stackTransformer inference.ACKNOWLEDGEMENTSWe acknowledge gracious support from Meta and in partic-ular Michael Anderson, Satish Nadathur and Summer Deng,as well as Google Cloud, Google TRC team, and specificallyJonathan Caton, Prof. David Patterson, and Jing Li. Prof.Keutzer’s lab is sponsored by Intel corporation, Intel VLABteam, Intel One-API center of excellence, as well as fundingthrough BDD and BAIR. Sehoon Kim would like to acknowl-edge the support from Korea Foundation for Advanced Studies(KFAS). Amir Gholami was supported through funding fromSamsung SAIT. Michael W. Mahoney would also like toacknowledge a J. P. Morgan Chase Faculty Research Awardas well as the DOE, NSF, and ONR. Our conclusions do notnecessarily reflect the position or the policy of our sponsors,and no official endorsement should be inferred.REFERENCES[1] “Edge TPU,” https://cloud.google.com/edge-tpu/, accessed: 2018-12-05.[2] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin,S. Ghemawat, G. Irving, M. Isard et al. , “{TensorFlow }: a system for{Large-Scale }machine learning,” in USENIX Symposium on OperatingSystems Design and Implementation (OSDI) , 2016.[3] D. Abts, J. Kim, G. Kimmell, M. Boyd, K. Kang, S. Parmar, A. Ling,A. Bitar, I. Ahmed, and J. Ross, “The groq software-defined scale-out tensor streaming multiprocessor: From chips-to-systems architecturaloverview,” in IEEE Hot Chips Symposium , 2022, pp. 1–69.[4] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal,A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al. , “Language modelsare few-shot learners,” arXiv preprint arXiv:2005.14165 , 2020.[5] T. Chen, M. Li, Y . Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu,C. Zhang, and Z. Zhang, “Mxnet: A flexible and efficient machinelearning library for heterogeneous distributed systems,” arXiv preprintarXiv:1512.01274 , 2015.[6] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan,L. Wang, Y . Hu, L. Ceze et al. , “{TVM}: An automated end-to-endoptimizing compiler for deep learning,” in 13th{USENIX }Symposiumon Operating Systems Design and Implementation ( {OSDI}18), 2018,pp. 578–594.[7] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y . Chen, and O. Temam,“Diannao: A small-footprint high-throughput accelerator for ubiquitousmachine-learning,” in Proceedings of the 19th International Conferenceon Architectural Support for Programming Languages and OperatingSystems , ser. ASPLOS ’14. New York, NY , USA: ACM, 2014, pp.269–284.[8] Y .-H. Chen, J. Emer, and V . 
Sze, “Eyeriss: A Spatial Architecturefor Energy-efficient Dataflow for Convolutional Neural Networks,” inProceedings of the International Symposium on Computer Architecture(ISCA) , 2016.[9] Y .-H. Chen, T.-J. Yang, J. Emer, and V . Sze, “Eyeriss v2: A flexibleaccelerator for emerging deep neural networks on mobile devices,” IEEEJournal on Emerging and Selected Topics in Circuits and Systems , 2019.[10] Y . Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen,Z. Xu, N. Sun, and O. Temam, “DaDianNao: A Machine-learningSupercomputer,” in Proceedings of the International Symposium onMicroarchitecture (MICRO) , 2014.[11] J. Choi, H. Li, B. Kim, S. Hwang, and J. H. Ahn, “Accelerating trans-former networks through recomposing softmax layers,” in InternationalSymposium on Workload Characterization (IISWC) , 2021.[12] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts,P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al. , “Palm: Scalinglanguage modeling with pathways,” arXiv preprint arXiv:2204.02311 ,2022.[13] L. Clark, V . Vashishtha, L. Shifren, A. Gujia, S. Sinha, B. Cline,C. Ramamurthya, and G. Yeric, “ASAP7: A 7-nm FinFET PredictiveProcess Design Kit,” Microelectronics Journal , 2016.[14] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-trainingof deep bidirectional transformers for language understanding,” arXivpreprint arXiv:1810.04805 , 2018.[15] N. Du, Y . Huang, A. M. Dai, S. Tong, D. Lepikhin, Y . Xu, M. Krikun,Y . Zhou, A. W. Yu, O. Firat et al. , “Glam: Efficient scaling oflanguage models with mixture-of-experts,” in International Conferenceon Machine Learning . PMLR, 2022, pp. 5547–5569.[16] Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y . Chen,and O. Temam, “Shidiannao: Shifting vision processing closer to thesensor,” in 2015 ACM/IEEE 42nd Annual International Symposium onComputer Architecture (ISCA) , 2015, pp. 92–104.[17] H. Esmaeilzadeh, A. Sampson, L. Ceze, and D. Burger, “Neural Accel-eration for General-Purpose Approximate Programs,” in Proceedings ofthe International Symposium on Microarchitecture (MICRO) , 2012.[18] M. Gao, J. Pu, X. Yang, M. Horowitz, and C. Kozyrakis, “Tetris:Scalable and Efficient Neural Network Acceleration with 3D Memory,”inProceedings of the International Conference on Architectural Supportfor Programming Languages and Operation Systems (ASPLOS) , 2017.[19] H. Genc, S. Kim, A. Amid, A. Haj-Ali, V . Iyer, P. Prakash, J. Zhao,D. Grubb, H. Liew, H. Mao, A. Ou, C. Schmidt, S. Steffl, J. Wright,I. Stoica, J. Ragan-Kelley, K. Asanovic, B. Nikolic, and Y . S. Shao,“Gemmini: Enabling systematic deep-learning architecture evaluationvia full-stack integration,” in Proceedings of the 58th Annual DesignAutomation Conference (DAC) , 2021.5[20] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, andW. J. Dally, “Eie: Efficient inference engine on compressed deep neuralnetwork,” SIGARCH Comput. Archit. News , vol. 44, no. 3, Jun. 2016.[21] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai,E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clarket al. , “Training compute-optimal large language models,” arXiv preprintarXiv:2203.15556 , 2022.[22] M. Horowitz, “1.1 computing’s energy problem (and what we can doabout it),” in 2014 IEEE International Solid-State Circuits ConferenceDigest of Technical Papers (ISSCC) , 2014, pp. 10–14.[23] J. Hruska, “New movidius myriad x vpu packs a custom neural computeengine,” 2017.[24] Y . Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. 
Girshick,S. Guadarrama, and T. Darrell, “Caffe: Convolutional Architecture forFast Feature Embedding,” CoRR , vol. abs/1408.5093, 2014.[25] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa,S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P. Cantin, C. Chao,C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V . Ghaem-maghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg,J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan,H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon,J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean,A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami,R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps,J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov,M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson,B. Tian, H. Toma, E. Tuttle, V . Vasudevan, R. Walter, W. Wang,E. Wilcox, and D. H. Yoon, “In-datacenter performance analysis of atensor processing unit,” in 2017 ACM/IEEE 44th Annual InternationalSymposium on Computer Architecture (ISCA) , June 2017, pp. 1–12.[26] S. Kim, A. Gholami, Z. Yao, M. W. Mahoney, and K. Keutzer, “I-bert:Integer-only bert quantization,” in International conference on machinelearning . PMLR, 2021, pp. 5506–5518.[27] S. Kim, C. Hooper, T. Wattanawong, M. Kang, R. Yan, H. Genc,G. Dinh, Q. Huang, K. Keutzer, M. W. Mahoney et al. , “Fullstack optimization of transformer inference: a survey,” arXiv preprintarXiv:2302.14017 , 2023.[28] S. Knowles, “Graphcore,” in IEEE Hot Chips Symposium , 2021, pp.1–25.[29] H. Liao, J. Tu, J. Xia, and X. Zhou, “Davinci: A scalable architecturefor neural network computing.” in IEEE Hot Chips Symposium , 2019,pp. 1–44.[30] S. Lie, “Cerebras architecture deep dive: First look inside the hw/swco-design for deep learning: Cerebras systems,” in IEEE Hot ChipsSymposium , 2022, pp. 1–34.[31] S. Merity, C. Xiong, J. Bradbury, and R. Socher, “Pointer sentinelmixture models,” 2016.[32] NVIDIA. (2018) TensorRT: https://developer.nvidia.com/tensorrt.[33] A. Parashar, P. Raina, Y . S. Shao, Y .-H. Chen, V . A. Ying, A. Mukkara,R. Venkatesan, B. Khailany, S. W. Keckler, and J. Emer, “Timeloop: Asystematic approach to dnn accelerator evaluation,” in 2019 IEEE inter-national symposium on performance analysis of systems and software(ISPASS) . IEEE, 2019, pp. 304–315.[34] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan,T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al. , “Pytorch: Animperative style, high-performance deep learning library,” Advances inneural information processing systems , vol. 32, 2019.[35] S. Pati, S. Aga, N. Jayasena, and M. D. Sinclair, “Demystifying bert:Implications for accelerator design,” in International Symposium onWorkload Characterization (IISWC) , 2021.[36] J. Pei, L. Deng, S. Song, M. Zhao, Y . Zhang, S. Wu, G. Wang, Z. Zou,Z. Wu, W. He et al. , “Towards artificial general intelligence with hybridtianjic chip architecture,” Nature , vol. 572, no. 7767, pp. 106–111, 2019.[37] R. Prabhakar and S. Jairath, “Sambanova sn10 rdu: Accelerating soft-ware 2.0 with dataflow,” in IEEE Hot Chips Symposium , 2021, pp. 1–37.[38] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, “Improvinglanguage understanding by generative pre-training,” 2018.[39] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever,“Language models are unsupervised multitask learners,” OpenAI blog ,vol. 1, no. 8, p. 9, 2019.[40] J. W. Rae, S. Borgeaud, T. Cai, K. 
Millican, J. Hoffmann, F. Song,J. Aslanides, S. Henderson, R. Ring, S. Young et al. , “Scaling languagemodels: Methods, analysis & insights from training gopher,” arXivpreprint arXiv:2112.11446 , 2021.[41] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena,Y . Zhou, W. Li, and P. J. Liu, “Exploring the limits of trans-fer learning with a unified text-to-text transformer,” arXiv preprintarXiv:1910.10683 , 2019.[42] A. Sabne, “Xla: Compiling machine learning for peak performance,”2020.[43] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ili ́c, D. Hesslow,R. Castagn ́e, A. S. Luccioni, F. Yvon, M. Gall ́eet al. , “Bloom: A 176b-parameter open-access multilingual language model,” arXiv preprintarXiv:2211.05100 , 2022.[44] F. Sijstermans, “The NVIDIA Deep Learning Accelerator,” in Hot Chips ,2018.[45] S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper,Z. Liu, S. Prabhumoye, G. Zerveas, V . Korthikanti et al. , “Usingdeepspeed and megatron to train megatron-turing nlg 530b, a large-scalegenerative language model,” arXiv preprint arXiv:2201.11990 , 2022.[46] E. Talpes, D. D. Sarma, G. Venkataramanan, P. Bannon, B. McGee,B. Floering, A. Jalote, C. Hsiong, S. Arora, A. Gorti et al. , “Computesolution for tesla’s full self-driving computer,” IEEE Micro , vol. 40,no. 2, pp. 25–35, 2020.[47] J. Yu, P. Jin, H. Liu, G. Bender, P.-J. Kindermans, M. Tan, T. Huang,X. Song, R. Pang, and Q. Le, “Bignas: Scaling up neural architecturesearch with big single-stage models,” in European Conference onComputer Vision . Springer, 2020, pp. 702–717.6 |
gb6VM_pTd5E
ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

ParaGAN: A Cloud Training Framework for Generative Adversarial Networks

Ziji Shi*†, Fuzhao Xue*, Jialin Li*, Yang You*
*National University of Singapore  †Alibaba Group

Abstract: Generative Adversarial Networks (GANs) have shown tremendous success in synthesizing realistic photos and videos in recent years. However, training a GAN to convergence is still a challenging task that requires significant computing power and is subject to training instability. To address these challenges, we propose ParaGAN, a cloud training framework for GANs optimized from both system and numerical perspectives. To achieve this, ParaGAN implements a congestion-aware pipeline for latency hiding, hardware-aware layout transformation for improved accelerator utilization, and an asynchronous update scheme to optimize system performance. Additionally, from a numerical perspective, we introduce an asymmetric optimization policy to stabilize training. Our preliminary experiments show that ParaGAN reduces the training time of BigGAN from 15 days to just 14 hours on 1024 TPUs, achieving 91% scaling efficiency. Moreover, we demonstrate that ParaGAN enables the generation of unprecedented high-resolution (1024x1024) images with BigGAN.

I. INTRODUCTION

The last decade has witnessed the success of Generative Adversarial Networks [7], which have a wide range of applications, including image super-resolution [13], image translation [8], [26], and photo inpainting [6], [24]. However, training GANs at scale remains challenging because of the computational demands and optimization difficulties. Unlike convolutional neural networks (CNNs) or Transformer-based architectures, where optimization is straightforward by taking gradient descent steps on a single model, there are two sub-networks to optimize in a GAN, namely the generator and the discriminator. The generator samples from noise and produces a fake sample as close to a real sample as possible, and the discriminator evaluates the generated sample. The generator aims to fool the discriminator, and the discriminator tries to identify the fake images among the real ones. Since the two components are optimized for contradicting goals, it has been observed that GANs are difficult to converge. Therefore, to speed up GAN training at large scale, we need a framework optimized from both system and numerical perspectives.

Due to the difficulty of optimizing GANs, many state-of-the-art GAN models take days or even weeks to train. For instance, BigGAN [2] took 15 days on 8 V100 GPUs to train for 150k steps. Table I summarizes the reported training times of several state-of-the-art GAN models. This has made it difficult to quickly reproduce, evaluate, and iterate on GAN experiments. Also, current GAN frameworks usually support training with only very few nodes.

Fig. 1: ParaGAN scales to 1024 TPU accelerators at 91% scaling efficiency.

TABLE I: Training time and parameter counts for GANs trained on the ImageNet 2012 dataset.

Model               | Accelerator | Training Time | # Params
SNGAN [17]          | 8x V100     | 3d 13.6h      | 81.44M
SAGAN [25]          | 8x V100     | 10d 18.7h     | 81.47M
BigGAN [2]          | 8x V100     | 15d           | 158.42M
ContraGAN [10]      | 8x V100     | 5d 3.5h       | 160.78M
ProgressiveGAN [12] | 8x V100     | 4d            | 43.2M

We argue that training speed is an important yet often ignored factor in the current GAN training landscape, and we propose to accelerate it with distributed training.
But distributed GAN training has several challenges. First of all, most data centers separate storage nodes and compute nodes for elasticity, but network congestion can happen from time to time, which prolongs the latency between nodes and reduces training throughput. Secondly, there are usually different types of accelerators in a data center, each with its own optimal hardware characteristics; if these are ignored, the accelerators are under-utilized. Last but not least, training GANs at scale may cause convergence problems, in which the GAN loss does not converge to a stable equilibrium. Therefore, such a framework has to consider both system and numerical perspectives.

In this work, we present ParaGAN, a distributed training framework that supports large-scale distributed training of high-resolution GANs. We identify the performance bottlenecks when training at scale and optimize them for efficiency. To stabilize the training process, ParaGAN introduces an asynchronous update scheme and an asymmetric optimization policy. ParaGAN has a simple interface for building new GAN architectures, and it supports CPU, GPU, and TPU.

The main contributions of ParaGAN include:
- We design and implement a scalable distributed training framework for GANs with optimizations from both system and numerical perspectives. With ParaGAN, the training time of BigGAN can be shortened from 15 days to 14 hours with 1024 TPU accelerators at 91% scaling efficiency, as shown in Fig. 1. ParaGAN also enables direct photo-realistic image generation at an unprecedented 1024x1024 resolution, which is 4x higher than the original BigGAN model.
- From the system perspective, we use a congestion-aware data pipeline and hardware-aware layout transformation to improve accelerator utilization.
- From the numerical perspective, to improve the convergence of distributed GAN training, we present an asynchronous update scheme and an asymmetric optimization policy.

II. BACKGROUND

Fig. 2: Typical GAN architecture.

As shown in Fig. 2, a GAN consists of a generator and a discriminator. The generator generates fake data samples, while the discriminator distinguishes between the generated samples and real samples as accurately as possible. The learning problem of GANs is a minimax optimization problem whose goal is to reach an equilibrium of a two-player game:

$\min_G \max_D \; \mathbb{E}_{x \sim q_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$

where $z \in \mathbb{R}^{d_z}$ is a latent variable drawn from a distribution $p(z)$. The discriminator seeks to maximize the sum of the log-probabilities of correctly predicting the real and fake samples, while the generator tries to minimize it. Formally, the convergence of a GAN is defined as a type of Nash equilibrium: one network does not change its loss regardless of what the other network does.

Since the two networks have contradicting goals, the training process of a GAN is a zero-sum game and can be very unstable. Recent works show that (i) GANs may converge to points that are not local minimax solutions under gradient descent, in particular for non-convex games, which are common [5], [9], and (ii) gradient descent on GANs exhibits strong rotation around fixed points, which requires very small learning rates [1], [16]. GAN training is also sensitive to hyperparameters and initialization [15]. Therefore, GANs are observed to be difficult to optimize, and this is also the reason why they take a long time to train.
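As a concrete reading of this objective, the sketch below performs one alternating update in TF2-style eager TensorFlow, using the standard non-saturating generator loss. It illustrates the minimax game above; it is not ParaGAN's actual TPU training loop.

    import tensorflow as tf

    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

    def train_step(G, D, g_opt, d_opt, real, dz=128):
        # Sample latents z ~ p(z) and generate fake samples G(z)
        z = tf.random.normal([tf.shape(real)[0], dz])
        with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
            fake = G(z, training=True)
            real_logits = D(real, training=True)
            fake_logits = D(fake, training=True)
            # Discriminator ascends E[log D(x)] + E[log(1 - D(G(z)))]
            d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                      bce(tf.zeros_like(fake_logits), fake_logits))
            # Non-saturating generator loss: ascend E[log D(G(z))]
            g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables),
                                  D.trainable_variables))
        g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables),
                                  G.trainable_variables))
        return d_loss, g_loss

Because the two updates pull the shared game state in opposite directions, nothing in this loop guarantees convergence to the Nash equilibrium, which is precisely the instability the works cited above analyze.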
There are some existing GAN libraries [4], [11], [14], [15] for training state-of-the-art GANs. They provide standardized building blocks like network backbones and evaluation metrics, making it easy to build new models. However, they focus less on system performance, and training a GAN still takes days if not weeks. [18] benchmarks the performance of various GANs in different network-related applications, and [3], [22], [23] propose GAN-optimized hardware architectures. Different from these prior works, we aim to build a system that can run on the public cloud using commodity accelerators. If the training process can be massively parallelized, the GAN community will benefit from it.

In ParaGAN, we adopt a co-designed approach: on the system level, we identify that the performance bottlenecks are rooted in network congestion and low accelerator utilization when training on the cloud, and ParaGAN implements a congestion-aware data pipeline and hardware-aware layout transformation to mitigate these issues; on the optimization level, we observe that it is beneficial to decouple the training of the generator and discriminator, and ParaGAN proposes an asynchronous update scheme and an asymmetric optimization policy.

III. DESIGN AND PROTOTYPICAL IMPLEMENTATIONS

In this section, we give an overview and discuss the design decisions of ParaGAN. We recognize that scalability is usually limited by the latency between nodes. Furthermore, when scaling up GAN training, numerical instability problems happen more often. We divide the following discussion into two parts and present our co-designed approach for system throughput and training stability.

A. Programming Model

The design of ParaGAN is presented in Fig. 3. ParaGAN (blue region) is implemented on top of TensorFlow (green region), because TensorFlow provides the low-level APIs for model checkpointing, evaluation, and visualization. On top of TensorFlow, we provide high-level APIs for GANs, which include a scaling manager, evaluation metrics, and common network backbones. Users of ParaGAN can import from ParaGAN or define their own components. ParaGAN then performs layout transformation and invokes TensorFlow, which converts the model definition into a computational graph. An optional XLA [20] pass can be performed after that. Finally, training starts on the CPU host and the accelerators.

Fig. 3: Overview of ParaGAN architecture.

Listing 1: Interface of ParaGAN

    import paragan as pg

    class Generator:
        def model_fn(self, latent_var, y):
            # generator model
            return output

    class Discriminator:
        def model_fn(self, x, y):
            # discriminator model
            return output, out_logit

    scale_mgr = pg.ScalingManager(config=cfg, bs=2048, num_workers=128)
    g = Generator()
    d = Discriminator()
    gan = pg.Estimator(g, d)

    # train
    for step in range(cfg.max_steps):
        scale_mgr.train(gan)

    # evaluate
    scale_mgr.eval(metric='fid')

We introduce a few concepts in ParaGAN:

1) Scaling Manager: The scaling manager is responsible for tuning the hyper-parameters that need adjustment during scaling. Users can start with the best hyper-parameters from a single worker, and ParaGAN will scale them based on the number of workers using heuristics (e.g., linear scaling, cosine scaling).

2) Network Backbones: It is common to start by building upon existing GAN architectures. We provide several popular GAN architectures as backbones, including but not limited to:
- BigGAN [2];
- Deep Convolutional GAN (DCGAN) [19];
- Spectral Norm GAN (SNGAN) [17].

3) Evaluation Metrics: Evaluation metrics can be implemented differently across papers, and this can cause inconsistency. We provide commonly used evaluation metrics, including Fréchet Inception Distance (FID) and Inception Score (IS).
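For reference, FID compares the Gaussian statistics of Inception activations of real and generated images. Below is a minimal NumPy/SciPy sketch of the standard formula, assuming the Inception features have already been extracted; it is an illustration of the metric, not ParaGAN's implementation.

    import numpy as np
    from scipy import linalg

    def fid(real_feats, fake_feats):
        """Frechet Inception Distance between two sets of Inception activations
        (rows are pooled features of real and generated images)."""
        mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
        cov_r = np.cov(real_feats, rowvar=False)
        cov_f = np.cov(fake_feats, rowvar=False)
        covmean = linalg.sqrtm(cov_r @ cov_f)
        if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
            covmean = covmean.real
        return float(np.sum((mu_r - mu_f) ** 2) +
                     np.trace(cov_r + cov_f - 2 * covmean))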
B. System Optimizations

To satisfy the scalability requirement, we design ParaGAN with optimizations for both I/O and computation.

We optimize I/O performance by building a congestion-aware data pipeline. In data centers, the compute and storage nodes are usually interconnected via Ethernet instead of high-speed InfiniBand. The network traffic between them is not always stable, since the infrastructure is shared with other tenants. This can cause problems when training scales, since latency fluctuates as the number of workers increases. Therefore, we implement a congestion-aware data pipeline to reduce the impact of network jittering.

To achieve higher accelerator utilization, we perform hardware-aware layout transformation. A data center usually hosts multiple types of accelerators, and different accelerators have different architectures and preferred data layouts. For example, Nvidia A100 GPUs prefer half-precision data in multiples of 64 and single-precision data in multiples of 32, while previous generations prefer multiples of 8. For TPU v3, the preferred data dimension is a multiple of 128. Using the preferred data layout can increase accelerator utilization, but it is usually up to the user to determine it. We devise a hardware-aware layout transformation that converts data into an accelerator-friendly format to maximize accelerator utilization.

C. Numerical Optimizations

One of the main contributions of ParaGAN is its use of asymmetric training to improve the stability of GANs. As the number of workers increases, a larger batch size can be used to speed up training. However, we have noticed that large-batch GAN training is often unstable, and mode collapse occurs frequently. This issue arises because mode collapse is a type of GAN failure caused by a highly coupled optimization process. To address this problem, ParaGAN introduces an asymmetric optimization policy and an asynchronous update scheme, which help to decouple the optimization process and prevent mode collapse.

IV. IMPLEMENTATION

To start, we profiled BigGAN training using native TensorFlow [15]; the results are shown in Fig. 4. As we scaled the cluster from 8 to 1024 TPU workers, we observed a significant increase in idle time due to the higher communication overhead. Nevertheless, convolution operations continued to take up the majority of the execution time, which suggests that training a GAN is a compute-bound task. Therefore, our focus for achieving scalability in ParaGAN is on maximizing the utilization of the accelerators. To achieve this goal, we use congestion-aware data pipelining to reduce data-pipeline latency, hardware-aware layout transformation to increase accelerator utilization, and mixed-precision training with bfloat16 for reduced memory.

Fig. 4: Operator usage profile when training at scale.

A. Congestion-Aware Data Pipelining

Network jittering can have a significant impact on training throughput because of the gradient synchronization stage, where all workers synchronize gradients at the end of each step; the time taken to complete this step depends on the slowest worker. Although both TensorFlow and PyTorch implement data pipelines to hide data-loading latency, when severe network jittering happens, data loading and pre-processing take much longer than usual and can become a bottleneck in large-scale distributed training. As shown in Fig. 4, when the number of workers scales from 8 to 1024, 13.6% more time is spent idling, while the data outfeed time stays roughly constant. This indicates that the accelerators are busy waiting for data infeed and gradient synchronization, which leads to reduced utilization.

ParaGAN dynamically adjusts the number of processes and the pre-processing buffer size in response to the highly variable network. It achieves this by using a sliding window to monitor network latency at runtime. If the current latency over the window exceeds a threshold λ, the system increases the number of threads and the buffer for pre-fetching and pre-processing. Once the latency falls below λ, the system releases the pre-processing resources. This may increase shared memory usage, but shared memory is typically not a bottleneck and is often underutilized.
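A minimal sketch of this sliding-window heuristic follows. The class and all of its knobs (window size, the λ threshold in milliseconds, the thread bounds and doubling steps) are illustrative assumptions rather than ParaGAN's actual implementation.

    import time
    from collections import deque

    class PipelineTuner:
        """Toy congestion-aware tuner: grow prefetch resources when the
        windowed mean fetch latency exceeds lambda, shrink when it is quiet."""
        def __init__(self, lam_ms=50.0, window=32, max_threads=64):
            self.lam = lam_ms
            self.window = deque(maxlen=window)
            self.threads, self.max_threads = 4, max_threads

        def fetch(self, fetch_fn):
            t0 = time.monotonic()
            batch = fetch_fn()                          # extract + transform one batch
            self.window.append((time.monotonic() - t0) * 1e3)
            self._retune()
            return batch

        def _retune(self):
            if len(self.window) < self.window.maxlen:
                return
            mean_ms = sum(self.window) / len(self.window)
            if mean_ms > self.lam:                      # congestion: add prefetch workers
                self.threads = min(self.threads * 2, self.max_threads)
            elif mean_ms < 0.5 * self.lam:              # quiet: release resources
                self.threads = max(self.threads // 2, 1)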
B. Hardware-Aware Layout Transformation

Zero-padding is used in GANs when an input cannot fill the specified convolution dimension. For example, a 100x100 matrix needs zeros padded around it to run on a 128x128 matrix unit. However, zero-padding hinders accelerator performance, because memory is wasted on the padding, leading to lower accelerator and memory utilization.

We implement ParaGAN such that both the batch size and the feature dimensions are multiples of 128 whenever suitable. In NCHW (batch size x number of channels x height x width) format, ParaGAN ensures that N/H/W are multiples of 128 on the host side, so that accelerator memory can be efficiently utilized.

On top of the feature dimensions, ParaGAN also seeks opportunities to batch data, in order to make intermediate results a multiple of the optimal layout dimension without affecting the results. Such opportunities can be found at reshape and matmul operators. For instance, if two input matrices are to be multiplied by the same weight, we can concatenate the two input matrices before the matrix multiplication. In some sense, this is similar to operator fusion, but the key difference is that ParaGAN's layout transformation depends on the hardware, so that the fused result conforms to the optimal layout.
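The memory cost of misaligned shapes is easy to quantify; the small helper below (hypothetical, for illustration) computes the fraction of a tile-aligned buffer that holds real data after zero-padding.

    import math

    def padded_utilization(rows, cols, tile=128):
        """Fraction of a tile-aligned buffer occupied by real data."""
        pr = math.ceil(rows / tile) * tile
        pc = math.ceil(cols / tile) * tile
        return rows * cols / (pr * pc)

    print(padded_utilization(100, 100))   # ~0.61: a 100x100 tensor wastes ~39% of a 128x128 tile
    print(padded_utilization(128, 128))   # 1.0 after layout transformation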
V. PRELIMINARY EVALUATION

In this section, we aim to answer the following questions: 1) How does the performance of ParaGAN compare to other frameworks? 2) How much does each part of the system contribute to overall performance? 3) What are the effects of the numerical optimizations on convergence?

We first evaluate the end-to-end performance of ParaGAN using three metrics:
- steps per second measures the number of steps ParaGAN can train per second;
- images per second measures the throughput of ParaGAN trained on the ImageNet 2012 dataset;
- time to solution measures the time it takes to reach 150k steps on ImageNet at 128x128 resolution.

We first compare ParaGAN with other popular frameworks on end-to-end performance (Sec. V-B), and then evaluate the scaling efficiency of ParaGAN (Sec. V-C).

A. Experiment Setup

We choose BigGAN on the ImageNet ILSVRC 2012 dataset as the benchmark, because BigGAN has had a profound impact on high-resolution image generation and has high computational requirements (Table I). ImageNet, in turn, contains a good variety of classes (1000 classes) and is usually challenging to train on. For the hardware backend, we first compare the performance of different backends, and then choose TPU for accelerator availability reasons. While we use BigGAN to benchmark ParaGAN, our framework is generally applicable to other GAN architectures and datasets, and it is not tightly coupled to any specific accelerator backend.

Fig. 5: Throughput of BigGAN trained with different frameworks and hardware (StudioGAN-8GPU, TF-8GPU, ParaGAN-8GPU, ParaGAN-8TPU; images per second).

B. Framework-level Experiments

In Fig. 5, we present a comparison of ParaGAN with StudioGAN [11] and native TensorFlow [15] in terms of GPU performance. In each experiment, we train BigGAN on ImageNet at a resolution of 128x128. We use eight Tesla V100 GPUs for all settings except ParaGAN-8TPU. We observe that ParaGAN outperforms both native TensorFlow and StudioGAN with 8 GPUs. We conjecture that the performance gain in the GPU setting is mainly attributable to the congestion-aware data pipeline and hardware-aware layout transformations. We also observe that the performance gap is further pronounced when switching to the TPU as the accelerator. For availability reasons, the following sections mainly focus on the TPU as the accelerator.

C. Scaling Experiments

We discuss the strong and weak scaling results in this section. In the strong scaling experiments, we keep the total workload constant and vary the number of workers to examine the speedup in time-to-solution, whereas in the weak scaling experiments, we keep the per-worker workload (batch size per worker) constant and increase the number of workers.

1) Strong Scaling: For the strong scaling experiments, we fix the total batch size at 512 and train for 150k steps as the target workload. Note that, to be consistent with the other experiments, we train BigGAN at 128x128 resolution, which is smaller than the model trained in Fig. 1. We aim to study the effect of a decreased per-worker workload when scaling. As can be seen from Fig. 6, with an increasing number of workers, the time to solution decreases from over 30 hours to 3 hours. We note that the scaling efficiency drops from 128 to 512 workers (64 to 256 TPU chips). This is because, with the global batch size fixed at 512, the per-worker workload drops from 4 samples to 1 sample per batch, which under-utilizes the TPU; the time spent on communication outweighs the computation when the batch size is too small. This is also verified by Fig. 6, where the images per second barely improve with an increasing number of accelerator workers. However, when the workload can saturate the accelerator, the scaling efficiency can be near-optimal, as shown in Fig. 1.

Fig. 6: Strong scaling with ParaGAN: (a) images per second and (b) time-to-solution (hours) for 16 to 256 TPU chips. Each TPU chip has two accelerators.
Fig. 7: Weak scaling with ParaGAN: (a) steps per second and (b) images per second for 8 to 1024 accelerators.
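Reading approximate end points off Fig. 6(b) (about 30 hours on 16 TPU chips down to about 3 hours on 256 chips, a sketch using rounded values rather than exact measurements) gives the strong-scaling efficiency directly:

    def scaling_efficiency(t_base, n_base, t_scaled, n_scaled):
        """Strong-scaling efficiency: achieved speedup over the ideal speedup."""
        return (t_base / t_scaled) / (n_scaled / n_base)

    print(scaling_efficiency(30, 16, 3, 256))   # ~0.625

The drop below the 91% efficiency of Fig. 1 is consistent with the per-worker batch shrinking to a single sample at the largest scale.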
2) Weak Scaling: In the weak scaling experiments, we fix the batch size per worker and evaluate the performance of our framework while increasing the number of workers. First, we find the largest batch size for a single accelerator that does not lead to an out-of-memory error. Then, we use that batch size for each worker, so the amount of work is kept identical across workers. The weak scaling experiments examine how well ParaGAN handles communication as the number of workers increases. As can be seen in Fig. 7, the trend in steps per second is relatively steady even with 1024 workers. This shows that ParaGAN can scale out to a large number of workers while keeping a high scaling efficiency. It is worth noting that, as the number of workers grows, the system is more likely to suffer from network jittering and congestion; the relatively flat curve (Fig. 7a) indicates that the data pipeline optimization in ParaGAN is effective under congestion.

D. Accelerator Utilization

The basic computing unit of the TPU is the MXU (matrix-multiply unit), and higher utilization is more desirable. We compare the accelerator utilization of BigGAN 128x128 under the baseline [15] and ParaGAN. Fig. 8 shows that ParaGAN clearly outperforms the native implementation, with higher MXU utilization across different TPU configurations. We wish to highlight that even a 2% improvement can be important when scaling to thousands of workers. It is also worth noting that, with an increasing number of accelerators, the amount of communication increases, but ParaGAN maintains a relatively higher utilization than the native implementation, and the gap widens. This indicates that computation still dominates the training time compared to native TensorFlow, and that ParaGAN keeps up as training scales out.

Fig. 8: Accelerator utilization (mean MXU utilization, %) of BigGAN trained with native TensorFlow and ParaGAN, for 4 to 512 TPUs.
Fig. 9: Data pipeline latency (infeed latency over training steps, tf.data vs. ParaGAN).

The data pipeline provides an 8-15% performance improvement over the baseline. When the number of accelerators increases, network jitter caused by congestion is more likely to happen, making data loading the slowest link in the training process. In ParaGAN, we try to saturate the accelerators by dynamically adjusting the buffer/CPU budget for the data pipeline. This is generally applicable, and ParaGAN enables this feature by default. We compare the performance of our congestion-aware pipeline with TensorFlow's implementation. To ensure the results are comparable, both are run at the same time on the same type of machine with the same dataset directory, and latency is measured as the time taken to extract and transform a batch of data. As shown in Fig. 9, our pipeline tuner has lower variance in latency.

Layout transformation and operator fusion combined provide an additional 8% improvement by increasing accelerator utilization. Considering that they both optimize at the kernel level, it is possible to combine them into one pass by integrating layout awareness into XLA. We also believe results may improve with more aggressive layout transformations on intermediate results, though this might affect convergence. We leave this as future work.

E. Generating High-Resolution Images

Fig. 10: Output of BigGAN at 1024x1024 resolution. Best viewed in colour.

To our knowledge, we are the first to successfully train BigGAN at 1024x1024 resolution, which is 4x larger than the original BigGAN. Training at high resolution is particularly hard, because the generator needs more channels and deconvolutional layers to generate finer details. It is therefore more sensitive to hyperparameters and initialization.
Unlike ProgressGAN [12], which uses progressive growing to train on low-resolution images first before increasing the resolution, we train directly at 1024×1024 resolution. This is more challenging and requires the numerical optimization techniques we discussed. The generated results achieve an Inception Score (IS) [21] of 239.3 and a Fréchet Inception Distance (FID) of 10.6. They are presented in Fig. 10 for visual evaluation.

VI. DISCUSSION AND FUTURE WORK

ParaGAN is a large-scale distributed GAN training framework that supports high-resolution image generation with near-linear scalability. ParaGAN is optimized with an adaptive data pipeline, hardware-aware layout transformations, and an asynchronous update scheme for high throughput. To stabilize the training of high-resolution GANs, ParaGAN also implements an asymmetric optimizer policy. We hope ParaGAN will advance GAN research by accelerating the training process. ParaGAN scales almost optimally to 1024 accelerators, and it can reduce the time to train a GAN model from weeks to hours. We leave the evaluation of other GAN and diffusion model architectures on ParaGAN as future work.

REFERENCES

[1] D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel, "The mechanics of n-player differentiable games," in International Conference on Machine Learning. PMLR, 2018, pp. 354–363.
[2] A. Brock, J. Donahue, and K. Simonyan, "Large scale GAN training for high fidelity natural image synthesis," 7th International Conference on Learning Representations, ICLR 2019, 9 2019. [Online]. Available: http://arxiv.org/abs/1809.11096
[3] J.-W. Chang, S. Ahn, K.-W. Kang, and S.-J. Kang, "Towards design methodology of efficient fast algorithms for accelerating generative adversarial networks on fpgas," in 2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC). IEEE, 2020, pp. 283–288.
[4] M. Contributors, "MMGeneration: Openmmlab generative model toolbox and benchmark," https://github.com/open-mmlab/mmgeneration, 2021.
[5] C. Daskalakis and I. Panageas, "The limit points of (optimistic) gradient descent in min-max optimization," Advances in Neural Information Processing Systems, vol. 31, 2018.
[6] U. Demir and G. Unal, "Patch-based image inpainting with generative adversarial networks," arXiv preprint arXiv:1803.07422, 2018.
[7] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," arXiv preprint arXiv:1406.2661, 2014.
[8] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 1125–1134.
[9] C. Jin, P. Netrapalli, and M. Jordan, "What is local optimality in nonconvex-nonconcave minimax optimization?" in International Conference on Machine Learning. PMLR, 2020, pp. 4880–4889.
[10] M. Kang and J. Park, "ContraGAN: Contrastive learning for conditional image generation," Tech. Rep., 2020. [Online]. Available: http://arxiv.org/abs/2006.12681
[11] M. Kang and J. Park, "ContraGAN: Contrastive Learning for Conditional Image Generation," 2020.
[12] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive growing of gans for improved quality, stability, and variation," arXiv preprint arXiv:1710.10196, 2017.
[13] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al.
, "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4681–4690.
[14] K. S. Lee and C. Town, "Mimicry: Towards the reproducibility of gan research," 2020.
[15] M. Lucic, K. Kurach, M. Michalski, O. Bousquet, and S. Gelly, "Are GANs created equal? A large-scale study," Tech. Rep., 2018.
[16] L. Mescheder, S. Nowozin, and A. Geiger, "The numerics of GANs," Tech. Rep., 2017. [Online]. Available: https://github.com/LMescheder/
[17] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, "Spectral normalization for generative adversarial networks," 2018.
[18] H. Navidan, P. F. Moshiri, M. Nabati, R. Shahbazian, S. A. Ghorashi, V. Shah-Mansouri, and D. Windridge, "Generative adversarial networks (gans) in networking: A comprehensive survey & evaluation," Computer Networks, vol. 194, p. 108149, 2021.
[19] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[20] A. Sabne, "Xla: Compiling machine learning for peak performance," 2020.
[21] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training gans," arXiv preprint arXiv:1606.03498, 2016.
[22] A. Yazdanbakhsh, M. Brzozowski, B. Khaleghi, S. Ghodrati, K. Samadi, N. S. Kim, and H. Esmaeilzadeh, "Flexigan: An end-to-end solution for fpga acceleration of generative adversarial networks," in 2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2018, pp. 65–72.
[23] A. Yazdanbakhsh, K. Samadi, N. S. Kim, and H. Esmaeilzadeh, "Ganax: A unified mimd-simd acceleration for generative adversarial networks," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2018, pp. 650–661.
[24] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, "Generative image inpainting with contextual attention," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 5505–5514.
[25] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, "Self-attention generative adversarial networks," 2019.
[26] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2223–2232.
|
0QA2qomtW3- | ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

Towards Efficient Multi-Agent Learning Systems

Kailash Gogineni, Peng Wei, Tian Lan and Guru Venkataramani
The George Washington University, Washington, DC, USA
E-mail: {kailashg26, pwei, tlan, guruv}@gwu.edu

Abstract—Multi-Agent Reinforcement Learning (MARL) is an increasingly popular domain for modeling and controlling multiple large-scale autonomous systems. Existing multi-agent learning implementations typically involve intensive computations during training, in terms of both time and power, arising from large observation-action spaces and a huge number of training steps. Therefore, a key challenge is understanding and characterizing the computationally intensive functions in several popular classes of MARL algorithms during their training phases. Our preliminary experiments reveal new insights into the key modules of MARL algorithms that limit their adoption in real-world systems. We explore a neighbor sampling strategy to improve cache locality and observe performance improvements ranging from 26.66% (3 agents) to 27.39% (12 agents) for the computationally intensive mini-batch sampling phase. Additionally, we demonstrate that improving cache locality leads to an end-to-end training time reduction of 10.2% (for 12 agents) compared to existing multi-agent algorithms, without significant degradation in the mean reward.

Index Terms—Multi-Agent Systems, Performance Analysis, Reinforcement Learning, Performance Optimization

I. INTRODUCTION

Reinforcement Learning (RL) has recently made great progress in many applications, including Atari games [1], aviation systems [2], and robotics [3]. Specifically, RL frameworks fit in the context of addressing problems that involve sequential decision-making, where the agent needs to take actions in an environment to maximize the cumulative rewards. In RL, the quality of state-action pairs is evaluated using a reward function, and the transition to a new state depends on the current state and action [4]. The function that determines the action from the state is known as a policy. The function representing the reward estimates is known as the value function.

Multi-agent systems [4] have shown excellent performance in various multi-player games [5], where there is significant sharing of observations between the agents during training and the joint actions among these agents can affect the environment dynamically. In MARL, several agents simultaneously explore a common environment and perform competitive (e.g., Predator-Prey) and cooperative (e.g., Cooperative navigation) tasks [6]. All observations are shared in the cooperative setting, and training is performed centrally. In contrast, each agent aims to outperform its enemies in a competitive setting. As a result, MARL training involves several computationally challenging and memory-intensive tasks that deal with dynamically changing environments.

In this paper, we perform a workload characterization study to understand the performance-limiting functions in well-known model-free MARL frameworks [6], [7] implemented using actor-critic methods, whose state spaces are usually very large.

Fig. 1: Overview of our multi-agent decentralized-actor, centralized-critic approach (competitive environment). [Diagram: per-agent policy networks perform action selection in a Predator-Prey environment; experiences are stored in an experience replay buffer; mini-batch sampling feeds per-agent critic and actor updates, with target-Q computation over concatenated observations and actions.]
We analyze the different MARL training phases, where the actor and critic networks are responsible for the policy and value functions. The critic tries to learn a value function given the policy from the actor, while the actor estimates the policy gradient based on the approximate value function that the critic provides. As shown in Figure 1, each agent in the environment has its own actor network, which outputs the agent's action given its observation (Action selection). During the mini-batch sampling phase, each agent i collects the historical transition data of all other agents stored within the Experience Replay Buffer. The sampling approach enables the algorithm to reuse the transition data for updating the current policy. Each agent has a centralized critic, which outputs the Q-value using the joint observation-action space of all other agents. During the Update all trainers phase, both the actor and critic networks are updated after the target Q calculation and sampling phases.

The main contributions of our paper are the following:
• We systematically perform a hardware-software performance analysis of the training phases of multi-agent systems. We present key insights into the performance bottlenecks confronting several key MARL algorithms from a systems perspective.
• We explore a neighbor sampling strategy to improve the locality of data access within the mini-batch sampling phase. Our preliminary experiments show performance improvements ranging from 26.66% (3 agents) to 27.39% (12 agents) in the sampling-phase training run-time. Additionally, we achieve a 10.2% (12 agents) end-to-end training time reduction compared to state-of-the-art multi-agent algorithms.

Fig. 2: Training time breakdown on an Ampere-architecture RTX 3090 for the MARL workloads, (a) MADDPG [6] and (b) MASAC [7], in multi-agent settings (3-48 agents), split into Action Selection, Update all trainers, and Other segments. The simulated multi-agent particle environment is Predator-Prey.

Fig. 3: Computation time growth rate (N×) of the MARL modules (Action Selection, Update all trainers, Total time) when scaling from 3 to 6, 6 to 12, 12 to 24, and 24 to 48 agents, averaged across the two MARL frameworks (MADDPG [6] & MASAC [7]).

II. MOTIVATION

In multi-agent systems, the training phase is performance- and memory-intensive, as the agents must collaborate and coordinate to maximize a shared return [8]. Many real-world applications, such as robot fleet coordination [9] and traffic light control [10], are modeled as multi-agent problems, but they become intractable as the number of agents grows, due to the intensive computations required to estimate other agents' policies at each state and the huge number of neural network parameters. This limits their adoption in real-world systems and restricts applications to scenarios with only a few agents [11], [12]. Figure 2 shows the run-time breakdown of the training phase¹. Update all trainers contributes ≈35% to ≈85% of the training time as the number of MARL agents grows from 3 to 48.
This is mainly due to two reasons. (1) In MARL, each agent has its own actor and critic networks. Each agent must randomly collect a batch of transitions from all other agents to update its critic and actor networks. (2) The dynamic memory requirements of the observation and action spaces also grow quadratically, because each agent has to coordinate with all other agents by sharing their observations and actions. The Action selection phase scales linearly with the number of agents (Figure 3). This is because, in Action selection, agents consider individual policies to obtain local actions. Other segments include experience collection, reward collection, and policy initialization, and they add negligible overhead.

¹We omit the agent interactions phase since it primarily depends on environment complexity.

III. BACKGROUND

Typically, a MARL setting with $N$ agents is defined by a set of states $S = S_1 \times \ldots \times S_N$ and a set of actions $A = A_1 \times \ldots \times A_N$. Each agent selects its action using a policy $\pi_{\theta_i}: O_i \times A_i \rightarrow [0, 1]$. The state transition function $T: S \times A_1 \times A_2 \times \ldots \times A_N \rightarrow S$ produces the next state $S'$, given the current state and the actions of all agents. The reward $R_i: S \times A_i \rightarrow \mathbb{R}$ for each agent is a function of the global state and the actions of all other agents, and each agent aims to maximize its own expected return $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$, where $\gamma$ denotes the discount factor and $T$ is the time horizon. For this, we use actor-critic methods such as MADDPG [6] and MASAC [7].

MADDPG [6] is a centralized training, decentralized execution (CTDE) algorithm mainly designed for mixed environments. Each agent learns an individual policy that maps its observation to its action so as to maximize the expected return, which is approximated by the critic. MADDPG trains the critic of agent $i$ by minimizing the loss with respect to the target Q-value $y_i$:

$L(\theta_i) = \mathbb{E}_{D}\big[(Q_i(S, A_1, \ldots, A_N) - y_i)^2\big]$, with $y_i = r_i + \gamma \bar{Q}_i(S', A'_1, \ldots, A'_N)\big|_{a'_j = \bar{\pi}(o'_j)}$,

where $S$ and $A_1, \ldots, A_N$ represent the joint observations and actions, respectively. $D$ is the experience replay buffer that stores the observation, action, reward, and next-observation samples of all agents obtained during the training episodes. The critic networks are augmented with the states and actions of all agents to reduce the variance of the policy gradients and improve performance. The MARL framework has four networks: actor, critic, target actor, and target critic. $\bar{Q}_i$ and $\bar{\pi}(o'_j)$ are the target networks for the stable learning of the critic ($Q_i$) and actor networks. The target actor estimates the next action from the policy using the state output by the actor network. The target critic aggregates the output from the target actor to compute the target Q-values, which are used to update the critic network and assess the quality of the actions taken by agents. The target networks exist to achieve training stability. Note that the updating sequence of the networks in the back-propagation phase is critics, then actors, then the target networks.

Similar to MADDPG, a centralized critic is introduced into the Soft Actor-Critic (SAC [7]) algorithm. MASAC uses maximum-entropy RL, in which agents are encouraged to maximize exploration within the policy. MASAC assigns roughly equal probability to nearly-optimal actions that have similar state-action values, avoiding repeatedly selecting the same action. This learning trick increases stability, policy exploration, and sample efficiency [7], [13].
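To make the critic update above concrete, the sketch below renders it in TensorFlow. It is a minimal illustration, not the paper's implementation: the network objects, tensor shapes, and the batch layout are assumptions for the example.

import tensorflow as tf

def maddpg_critic_update(critic_i, target_critic_i, target_actors, optimizer,
                         batch, gamma=0.95):
    # batch: assumed layout with per-agent lists of joint observations/actions,
    # agent i's rewards, and next joint observations.
    obs, acts = batch["obs"], batch["acts"]
    rew_i, obs_next = batch["rew_i"], batch["obs_next"]
    # Target actors propose next actions a'_j = pi_bar(o'_j) for every agent j.
    acts_next = [pi(o) for pi, o in zip(target_actors, obs_next)]
    # y_i = r_i + gamma * Q_bar_i(S', a'_1, ..., a'_N)
    y = rew_i + gamma * target_critic_i(tf.concat(obs_next + acts_next, axis=-1))
    with tf.GradientTape() as tape:
        q = critic_i(tf.concat(obs + acts, axis=-1))
        loss = tf.reduce_mean(tf.square(q - y))  # L(theta_i)
    grads = tape.gradient(loss, critic_i.trainable_variables)
    optimizer.apply_gradients(zip(grads, critic_i.trainable_variables))
    return loss

Note how every agent's observations and actions enter the concatenated critic input; this joint input is exactly why the critic's input dimension, and with it the cost of Update all trainers, grows with the number of agents.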
IV. EVALUATION SETUP

Benchmark. Table I describes the behavior of the selected Multi-agent Particle Environments (MPE [6]). We profile and characterize two state-of-the-art MARL algorithms, MADDPG and MASAC. A two-layer ReLU MLP with 64 units per layer parameterizes the actor and critic networks, and the mini-batch size for sampling transitions is 1024. In our experiments, we use the Adam optimizer [14] with a learning rate of 0.01, a maximum episode length of 25 (the maximum number of steps to reach the terminal state), and τ = 0.01 for updating the target networks. γ is the discount factor, set to 0.95. The size of the replay buffer is 1 million, and the entropy coefficient for MASAC is 0.05. The network parameters are updated after every 100 samples added to the replay buffer.

TABLE I: Multi-agent particle environments.

Environment | Details
Cooperative navigation | N agents move in a cooperative manner to reach L landmarks; the reward encourages the agents to get closer to the landmarks.
Predator-Prey | N predators work cooperatively to block the way of M fast-paced prey agents. The prey agents are environment-controlled and try to avoid collisions with the predators.

Profiling Platform. The MARL algorithms are implemented with state-of-the-art CPU-GPU compatible TensorFlow-GPU (v2.11.0). The server runs the Ubuntu Linux 20.04.5 LTS operating system with CUDA 9.0, cuDNN 7.6.5, PCI Express v4.0, and the NCCL v2.8.4 communication library. The machine supports Python 3.7.15, TensorFlow-Slim (v1.1.0), and OpenAI Gym (v0.10.5). All workloads are profiled on a single Nvidia GeForce RTX 3090 (Ampere architecture) with Perf [15] and NVProf to collect hardware performance counters for performance analysis. Finally, we trained for 60K episodes using the default hyper-parameters recommended by the algorithms.

V. EVALUATION

In this section, we first present an overview of our MARL profiling results. Then, we study the computationally dominant functions within Update all trainers (Mini-batch sampling, Target Q calculation, and Q loss & P loss) and present our results in the competitive setting (Predator-Prey) to understand the key factors limiting MARL in large-scale systems. Figure 4 shows the breakdown between the modules Mini-batch sampling, Target Q calculation, Q loss, and P loss, which contribute 63%, 24%, 6.5%, and 6% of the overall computation time, averaged across the different workloads, for 48 agents.

A. Mini-batch sampling

Our experimental results in Figure 4 show that mini-batch sampling is the most time-consuming phase within the Update All Trainers module. This behavior is also consistent with the scaling of other critical hardware performance metrics: dTLB load misses (3.9× growth rate from 3 to 6 agents) and cache misses (3.9× growth rate from 3 to 6 agents). The mini-batch sampling phase is dominated by the collection of random samples from all other agents' replay buffers, used to update each agent's actor and critic network parameters. Note that the agents' replay buffers are kept separate from each other to capture their individual past transitions. For each time-step, agent i draws a random index set {L_1, L_2, ..., L_K} (K is the mini-batch size), then selects L_1 to perform a memory lookup in the experience replay buffer, retrieves the corresponding transition, and stores it in the individual agent buffer. This operation grows as a function of the number of agents, N, since it is repeated for all N agents. The sampling stage exhibits random memory access patterns and cannot exploit cache reuse, due to the randomness of the indices drawn for each agent between iterations. In cooperative navigation (simple spread [6]), we observe similar bottlenecks, since all the agents are trained together to reach the landmarks while avoiding collisions with each other.
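The access pattern described above is easy to picture. The sketch below is an illustrative NumPy rendering of the baseline per-agent sampling loop, not the profiled code; the buffer layout is an assumption.

import numpy as np

def sample_minibatch(replay_buffers, batch_size=1024):
    # One separate replay buffer per agent; each buffer is a list of transition tuples.
    batches = []
    for buf in replay_buffers:  # repeated for all N agents
        idx = np.random.randint(0, len(buf), size=batch_size)  # fresh random indices
        # Scattered gathers: consecutive idx values land on distant buffer entries,
        # so almost every access misses in the cache and dTLB.
        batches.append([buf[j] for j in idx])
    return batches

Because idx is re-drawn per agent and per iteration, consecutive lookups share no spatial locality; this is the behavior the neighbor sampling strategy in Section VI targets.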
B. Target Q calculation

The Target Q calculation phase is the second most time-consuming phase within Update All Trainers (Figure 4). In this function, each agent computes the next action, the target Q next values, and the target Q values as a function of the joint observation-action space of all other agents. To calculate the next action, agent i uses its policy network to determine the next action a' from the next state S'. In this phase, each agent's policy network performs multiplications with the input-weight matrix plus additions, which impacts performance. The obtained a' and S' data are aggregated and concatenated into a single vector in order to compute the target Q next values across the cooperating agents. The input dimension of the Q-function increases quadratically with the number of agents [16]. The target critic values for each agent i are computed using the target Q next values from the target actor network. We note that each agent has to read the other agents' policy values; as such, for N agents, there are N×(N−1) memory lookup operations corresponding to the next action a'.

Fig. 4: Training time breakdown on an Nvidia Ampere-architecture RTX 3090 within Update all trainers for the two MARL workloads, (a) MADDPG and (b) MASAC, in multi-agent settings (3-48 agents) under the Predator-Prey environment, split into Mini-batch sampling, Target Q calculation, Q loss, and P loss.

C. Back-propagation: Q loss & P loss

The back-propagation stage is dominated by the execution of two networks: (1) the critic network computes the mean-squared error loss between the target critic and critic networks, and (2) the actor network is updated by minimizing the Q values (computed by the critic network). As the number of agents increases, the number of trainable parameters increases, and N policy and N critic networks are built for all N agents, which incurs extra time to update the weights for each agent. For each update, we sample a random mini-batch of transitions (1024 in our studies) from the replay buffer of each agent i across all agents and then perform gradient descent on the critic and actor networks.

VI. NEIGHBOR SAMPLING STRATEGY

From our analysis so far, it can be concluded that the mini-batch sampling phase dominates Update all trainers as the number of agents scales. Moreover, fetching transition data from far-away memory locations significantly affects the overall training time. Among all the hardware metrics, cache misses suffer from the worst scaling factor (at least 3.9× from 3 to 6 agents). Therefore, we explore loop-level optimizations that can improve locality and overall MARL performance. To address this issue, we propose a loop-level optimization for accessing the transition data in the mini-batch sampling phase. The idea is to eliminate the computation issues arising from fetching data at far-away memory locations dictated by random indices.
We investigate the neighbor sampling optimization in MADDPG, where we collectively capture the neighboring transitions of an index i to enable faster data access on a given hardware platform. Intuitively, at each index i, we group the neighboring indices into a single micro-batch and extract the data in a locality-aware memory access order to sample the transitions efficiently.

Algorithm 1 Neighbor Sampling Strategy
Input: Mini-batch indices MB_idx; replay buffer D with size d; micro-batch size n
Output: Mini-batch transitions
1: Initialize obs_t, actions_t, rewards_t, obs_next_t, terminal_state_t at time t ← {∅}
2: for i in MB_idx do
3:   α ← [j | max(0, i−n) ≤ j < min(d, i+n+1), j ≠ i]  ▷ α includes all indices in the range (i−n) to (i+n), excluding the current index i, while not going below 0 or beyond the length of the replay buffer D
4:   for k in α do
5:     if k ∈ D then
6:       obs, act, next_obs, rew, done ← unpack(D[k])  ▷ Append the unpacked transition data to the corresponding lists obs_t, actions_t, rewards_t, obs_next_t, terminal_state_t
7:     end if
8:   end for
9:   if len(obs_t) ≥ size_of(MB_idx) then
10:    break
11:  end if
12: end for
13: return obs_t, actions_t, rewards_t, obs_next_t, terminal_state_t  ▷ Return the corresponding lists converted into NumPy arrays

Neighbor Sampling Strategy. Algorithm 1 shows how the mini-batch sampling phase selects the neighboring transitions for a random index i (a compact Python rendering appears at the end of this section). We initialize the replay buffer D and the micro-batch size n. The algorithm iterates over the mini-batch indices to collect transition data for every index. We modify the loop to accumulate a micro-batch of transitions spanning a range of n neighbors surrounding the current index i; i.e., for every index i, we check that i is within the limits of the replay buffer D. If so, we capture the buffer indices from i−n to i+n based on the micro-batch size n and return the list of neighbors α (line 3). We perform an array access for all the indices in α, and the output vectors are unpacked and stored as individual vectors in the experience replay tuple consisting of obs, act, next_obs, rew, done (line 5). These individual vectors are appended to their corresponding parent lists obs_t, actions_t, rewards_t, obs_next_t, terminal_state_t (line 6). Finally, all the parent lists containing the transition data at time-step t are converted to vectors. The loop terminates when the mini-batch size is reached (i.e., equals the size of MB_idx) (line 9).

Fig. 5: (a) Percentage reduction in training time for the mini-batch sampling phase for 3, 6, and 12 agents (MADDPG): 26.66%, 26.68%, and 27.39%. (b) Percentage reduction in the total training time as the number of agents is scaled by 2× for MADDPG: 5.6%, 7.8%, and 10.2%. The environment test-bed is Predator-Prey and the micro-batch size is 3.

Overall, our neighbor sampling optimization improves performance by leveraging spatial locality, achieving a training-time reduction ranging from 26.66% (3 agents) to 27.39% (12 agents) in the computationally intensive mini-batch sampling phase (Figure 5). In addition, we achieve an end-to-end training time reduction of 10.2% for 12 agents. While studying this optimization, we verified that there is no significant degradation in the mean episode reward.
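The following is a compact Python rendering of Algorithm 1, as referenced in the walkthrough above. It is illustrative rather than the authors' code; the transition layout (5-tuples in a list-backed buffer) is an assumption.

import numpy as np

def neighbor_sample(mb_idx, buffer, n=3):
    """Gather transitions from the n neighbors on each side of every sampled index."""
    obs_t, actions_t, rewards_t, obs_next_t, done_t = [], [], [], [], []
    d = len(buffer)
    for i in mb_idx:
        # Contiguous index window around i: sequential reads instead of random gathers.
        alpha = [j for j in range(max(0, i - n), min(d, i + n + 1)) if j != i]
        for k in alpha:
            obs, act, obs_next, rew, done = buffer[k]
            obs_t.append(obs); actions_t.append(act); rewards_t.append(rew)
            obs_next_t.append(obs_next); done_t.append(done)
        if len(obs_t) >= len(mb_idx):
            break
    return (np.asarray(obs_t), np.asarray(actions_t), np.asarray(rewards_t),
            np.asarray(obs_next_t), np.asarray(done_t))

Because each window of accesses touches adjacent buffer slots, a micro-batch falls on a handful of cache lines rather than ~2n scattered ones, which is the source of the improved cache locality reported above.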
VII. DISCUSSION AND RELATED WORK

Hardware-software acceleration techniques for RL have been studied in recent years [17]–[20]. For example, to accelerate RL training from the software standpoint, prior work has shown that half-precision (FP16) quantization can yield significant performance benefits and improve hardware efficiency while achieving adequate convergence [21]. Other relevant approaches include QuaRL [22], where quantization is applied to speed up the RL training and inference phases. QuaRL experimentally demonstrated that quantizing the policies to ≤8 bits led to substantial speedups in training time compared to full-precision training. All of these prior works differ from ours in that they apply quantization to single-agent RL algorithms or neural networks. In contrast, we explore the neighbor sampling optimization to improve the efficiency of the mini-batch sampling phase.

Fig. 6: (a) Average of the mean episode rewards of all agents trained for 60,000 episodes with MADDPG (21.04, 103.96, and 870.39 for 3, 6, and 12 agents). (b) The same averages after the neighbor sampling optimization (20.05, 105.94, and 872.49). The environment test-bed is Predator-Prey and the micro-batch size is 3.

Prior studies, like FA3C, have focused on hardware acceleration in multiple-parallel-worker scenarios, where each agent is controlled independently within its own environment using single-agent RL algorithms [18]. In contrast, we seek to systematically understand the performance-limiting functions in multi-agent systems, where the agents collaborate in a single shared environment. Agents in such MARL settings usually have high visibility of one another (leading to large state and action spaces). Apart from methods that accelerate multiple-parallel-worker scenarios, other approaches use a transition data-reuse optimization to improve cache locality and training time [23]. The authors experimentally demonstrated that applying the optimal prioritization scheme proposed by MAC-PO [24] to multi-agent learning problems and repeatedly reusing the transition data with higher weights improves training efficiency.

In MARL settings where each agent needs to interact with its neighboring agents, especially in complex environments with many observations and huge action spaces, computational bottlenecks may be alleviated using architectural primitives implementing selective attention [13], [25], [26]. As the number of agents increases, hardware techniques such as near-memory computing could help perform mini-batch sampling efficiently. For the input to the critic networks, multi-level data compression techniques [27]–[29] may be applied to a targeted group of agents based on their importance in the environment. Also, the cache misses during the mini-batch sampling phase indicate competition for the LLC, which may be addressed through smart cache allocation strategies. Other modules, such as the next action calculation, environment interactions, and action selection phases, may also benefit from custom acceleration of key modules.

VIII. CONCLUSION AND FUTURE WORK

In this work, we present an end-to-end characterization of several popular Multi-Agent Reinforcement Learning algorithms and, in particular, explore a locality-aware neighbor indexing optimization. We find that Update all trainers dominates the training process of MARL algorithms and scales super-linearly with the number of agents. Our experimental analysis presents key insights into the modules that are the driving factors behind the computational bottlenecks.
We also propose a loop-level optimization for accessing transition data in the mini-batch sampling phase. The proposal achieves performance improvements from 26.66% (3 agents) to 27.39% (12 agents) within the mini-batch sampling phase. In future work, we will investigate various efficient sampling strategies and design a hardware-friendly architecture to efficiently fetch transitions in large-scale MARL.

ACKNOWLEDGMENT

This research is based on work supported by the National Science Foundation under grant CCF-2114415. We would also like to thank the reviewers for their valuable feedback.

REFERENCES

[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, "Playing atari with deep reinforcement learning," arXiv preprint arXiv:1312.5602, 2013.
[2] P. Razzaghi, A. Tabrizian, W. Guo, S. Chen, A. Taye, E. Thompson, A. Bregeon, A. Baheri, and P. Wei, "A survey on reinforcement learning in aviation applications," arXiv preprint arXiv:2211.02147, 2022.
[3] D. Wang, R. Walters, X. Zhu, and R. Platt, "Equivariant q-learning in spatial action spaces," in Conference on Robot Learning. PMLR, 2022, pp. 1713–1723.
[4] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT Press, 2018.
[5] K. Zhang, Z. Yang, and T. Başar, "Multi-agent reinforcement learning: A selective overview of theories and algorithms," Handbook of Reinforcement Learning and Control, pp. 321–384, 2021.
[6] R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," Advances in Neural Information Processing Systems, vol. 30, 2017.
[7] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International Conference on Machine Learning. PMLR, 2018, pp. 1861–1870.
[8] K. Gogineni, P. Wei, T. Lan, and G. Venkataramani, "Scalability Bottlenecks in Multi-Agent Reinforcement Learning Systems," arXiv preprint arXiv:2302.05007, 2023.
[9] G. Swamy, S. Reddy, S. Levine, and A. D. Dragan, "Scaled autonomy: Enabling human operators to control robot fleets," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 5942–5948.
[10] A. L. Bazzan, "Opportunities for multiagent systems and multiagent reinforcement learning in traffic control," Autonomous Agents and Multi-Agent Systems, vol. 18, pp. 342–375, 2009.
[11] M. Zhou, Y. Chen, Y. Wen, Y. Yang, Y. Su, W. Zhang, D. Zhang, and J. Wang, "Factorized q-learning for large-scale multi-agent systems," in Proceedings of the First International Conference on Distributed Artificial Intelligence, 2019, pp. 1–7.
[12] Y. Liu, W. Wang, Y. Hu, J. Hao, X. Chen, and Y. Gao, "Multi-agent game abstraction via graph attention neural network," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 05, 2020, pp. 7211–7218.
[13] S. Iqbal and F. Sha, "Actor-attention-critic for multi-agent reinforcement learning," in International Conference on Machine Learning. PMLR, 2019, pp. 2961–2970.
[14] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[15] V. Ramos, "Performance counters api for python," https://pypi.org/project/performance-features/, May 2019.
[16] H. U. Sheikh and L. Bölöni, "Multi-agent reinforcement learning for problems with combined individual and team reward," in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1–8.
[17] M.
Babaeizadeh, I. Frosio, S. Tyree, J. Clemons, and J. Kautz, "GA3C: GPU-based A3C for deep reinforcement learning," CoRR abs/1611.06256, 2016.
[18] H. Cho, P. Oh, J. Park, W. Jung, and J. Lee, "Fa3c: Fpga-accelerated deep reinforcement learning," in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 499–513.
[19] Y. Li, I.-J. Liu, Y. Yuan, D. Chen, A. Schwing, and J. Huang, "Accelerating distributed reinforcement learning with in-switch computing," in Proceedings of the 46th International Symposium on Computer Architecture, 2019, pp. 279–291.
[20] A. Stooke and P. Abbeel, "Accelerated methods for deep reinforcement learning," arXiv preprint arXiv:1803.02811, 2018.
[21] J. Björck, X. Chen, C. De Sa, C. P. Gomes, and K. Weinberger, "Low-precision reinforcement learning: running soft actor-critic in half precision," in International Conference on Machine Learning. PMLR, 2021, pp. 980–991.
[22] S. Krishnan, M. Lam, S. Chitlangia, Z. Wan, G. Barth-Maron, A. Faust, and V. J. Reddi, "QuaRL: Quantization for fast and environmentally sustainable reinforcement learning," 2022.
[23] K. Gogineni, Y. Mei, P. Wei, T. Lan, and G. Venkataramani, "AccMER: Accelerating Multi-Agent Experience Replay with Cache Locality-aware Prioritization," 2023.
[24] Y. Mei, H. Zhou, T. Lan, G. Venkataramani, and P. Wei, "MAC-PO: Multi-agent experience replay via collective priority optimization," arXiv preprint arXiv:2302.10418, 2023.
[25] A. Mahajan, M. Samvelyan, L. Mao, V. Makoviychuk, A. Garg, J. Kossaifi, S. Whiteson, Y. Zhu, and A. Anandkumar, "Tesseract: Tensorised actors for multi-agent reinforcement learning," in International Conference on Machine Learning. PMLR, 2021, pp. 7301–7312.
[26] T. J. Ham, S. J. Jung, S. Kim, Y. H. Oh, Y. Park, Y. Song, J.-H. Park, S. Lee, K. Park, J. W. Lee et al., "A^3: Accelerating attention mechanisms in neural networks with approximation," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020, pp. 328–341.
[27] A. Jain, A. Phanishayee, J. Mars, L. Tang, and G. Pekhimenko, "Gist: Efficient data encoding for deep neural network training," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2018, pp. 776–789.
[28] S. Q. Zhang, B. McDanel, and H. Kung, "Fast: Dnn training under variable precision block floating point with stochastic rounding," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2022, pp. 846–860.
[29] S. Venkataramani, V. Srinivasan, W. Wang, S. Sen, J. Zhang, A. Agrawal, M. Kar, S. Jain, A. Mannari, H. Tran et al., "Rapid: Ai accelerator for ultra-low precision training and inference," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021, pp. 153–166.
|
4zdPNY3SDQk | ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

Online Learning for Right-Sizing Serverless Function Invocations

Prasoon Sinha∗, Kostis Kaffes†, Neeraja J. Yadwadkar∗‡
∗The University of Texas at Austin. prasoon.sinha@utexas.edu, neeraja@austin.utexas.edu
†Google SRG. kkaffes@gmail.com
‡VMware Research.

Abstract—Serverless computing relieves developers from the burden of allocating and managing resources for their cloud applications, providing ease of use to users and the opportunity to optimize resource utilization to providers. However, the lack of visibility into user functions limits providers' ability to right-size the functions. Thus, providers resort to simplifying assumptions, ignoring input variability and coupling different resource types (CPU, memory, network), resulting in widely varying function performance and resource efficiency. To provide users with predictable performance and costs for their function executions, we need to understand how these factors contribute to function performance and resource usage.

In this paper, we first conduct a deep study of commonly deployed serverless functions on an open-source serverless computing framework. Our analysis provides key insights to guide the design of a resource allocation framework for serverless systems, including the need to provision resources per invocation, account for function semantics, and decouple resources. We then present Lachesis, a resource allocation framework that builds on the insights we found and leverages online learning to right-size a function invocation. Our experiments show that Lachesis can increase speedup by 2.6x while decreasing idle cores by 82% compared to static allocation decisions made by users.

I. INTRODUCTION

A key benefit of serverless computing for users is that they get to focus on their application logic and leave the details of resource provisioning and management to the cloud providers. However, this results in an opaque interface between users and providers that adversely impacts both. For users with performance-critical applications, such as the timely detection of videos with indecent content uploaded to YouTube, or cost-minded applications, such as personal photo organization, unknown resource management policies that they cannot control are a problem [12], [19]. Meanwhile, providers lack visibility into user functions, limiting their ability to make cost-performance trade-offs on behalf of the users.

Existing serverless systems either completely hide the resource allocation policies they use [16], or provide a single knob, the memory size of the container, that the user can set [5], [10]. This parameter is intended to give users control over resource management and to give providers visibility into the resource requirements of user functions. However, even with this additional input, serverless systems are incapable of providing performance- and cost-aware function execution to users. We argue that, to fix this issue, we first need to understand which factors impact function performance and how. We then need to study how current resource allocation frameworks take these factors into account.

Fig. 1: Characterizing functions with respect to the resources allocated, utilized, and performance observed. (a) Slowdown w.r.t. the best runtime across memory sizes for 100 invocations of a video transcoding function. (b) Maximum memory utilized vs. allocated across 100 runs of the video transcoding function (from Fig. 1a).
Finally, we need to see the combined impact of existing policies and these factors on function performance and resource efficiency. We review the following policies and assumptions made by existing resource allocation frameworks and motivate the need for our characterization work.

1. Static and input-agnostic allocation by providers: Providers statically allocate resources to functions using the user-specified memory size. However, this approach ignores the fact that different inputs submitted to the same function might have different resource needs (as demonstrated by the spread in duration in Figure 1a). This precludes optimizations such as using smaller containers for smaller inputs, which might bring significant cost benefits. For instance, if the static function-level allocator sized a container with 3GB of memory and most invocations used only 1GB, the costs incurred are 2x higher due to allocated-but-unused resources. Hence, we need to understand the impact of inputs and function semantics on function performance and resource utilization.

2. Coupled allocation of different resource types by providers: The specified memory size for a function dictates the number of CPU cores, thus tightly coupling the two types of resources. There are two main limitations to this approach: (a) Although users now need to tune only the memory knob, setting this knob correctly can be difficult for workloads that are not memory-intensive but are limited by other resources. For instance, video transcoding and compression are CPU-intensive workloads. Users might have to profile their functions carefully, adding significant cost. (b) The tight coupling of resources might lead to suboptimal resource allocation decisions for certain kinds of workloads. For example, CPU-intensive workloads might end up being allocated large amounts of memory that are never used (Figure 1b).

TABLE I: Summary of the 12 serverless functions studied.

Function | Input Type | # Runs | # Sizes | Size Range
matmult | square matrix | 540 | 9 | 500 - 80000
linpack | square matrix | 660 | 11 | 500 - 8000
image-process | image | 840 | 14 | 12K - 4.6M
video-process | video | 645 | 5 | 2.2M - 6.1M
encrypt | string | 420 | 7 | 500 - 50000
mobilenet | image | 840 | 14 | 12K - 4.6M
sentiment | batch of strings | 716 | 12 | 50 - 3000
speech2text | audio | 471 | 8 | 48K - 12M
qr | url | 660 | 11 | 25 - 480
lr-train | training set | 160 | 4 | 10M - 100M
compress | file | 434 | 7 | 64M - 2G
resnet-50 (inf) | image | 574 | 9 | 184K - 4.6M

3. Over-provisioning by users: The importance of the memory size parameter [6] and its opaque coupling to other resources force users to profile their functions to find the right setting. But, as the performance and resource usage of a function can depend significantly on its inputs, users must profile with diverse inputs to ensure adequate resource availability in all cases. The cost of doing so is prohibitive. Prior work on reducing this profiling cost either assumes knowledge of the workload, which is unavailable [4], [18], or ignores the input itself, which can have a large impact on many functions [20]. Thus, users often overprovision and choose the largest memory size available (10GB for AWS Lambda, for instance), raising their costs significantly and leading to underutilized resources for the provider [2]. Hence, we need to understand how inputs affect a function's resource demands.

In this paper, we extensively study the impact of function inputs and resource coupling on several serverless functions covering a wide range of application types.
Building on the insights we found, we introduce Lachesis, an online-learning-based resource allocation framework that (1) allocates resources to each function invocation based on characteristics of the input and function semantics, and (2) decouples different resource types. Lachesis employs an online learning agent that uses cost-sensitive multi-class classification to predict the minimum number of cores required to satisfy a given invocation's service level objective (SLO). It removes the need for users to specify memory limits, and in doing so, Lachesis achieves better resource utilization while simplifying the serverless user interface.

II. EXISTING RESOURCE ALLOCATION MECHANISMS

Several cloud providers, such as AWS Lambda [5] and Google Cloud Functions [10], and open-source communities [17] expose a common interface to their serverless platforms: users specify a memory limit for their functions at creation time. The platforms then allocate a proportional amount of CPU based on the memory limit. Thus, all invocations of a function have the same container size, regardless of their actual resource needs. Apache OpenWhisk's [17] CPU allocation is a soft limit, as invocations can burst if there are available CPUs in the server. Microsoft Azure [16] claims to automatically scale the resources allocated to functions, but its resource allocation policies are unknown, making it difficult to reason about function performance. Cypress [7] creates containers with a high CPU count and memory capacity per function to consolidate multiple concurrent function invocations within one container and avoid wasting resources. Bilal et al. [8] propose to decouple memory and CPU to create a trade-off between performance and cost. ReSC [11] divides functions into resource components (i.e., compute or memory) and allocates resources per component.

Fig. 2: Execution time as a function of data size for three serverless functions. The CPU and memory limit is fixed across sizes.

Fig. 3: video-process's (a) CPU and (b) memory utilization as a function of video size. The CPU limit is fixed at 80 cores.

III. WHAT AFFECTS FUNCTION PERFORMANCE?

We study the impact of input properties (i.e., size and type), resource availability, and the coupling of resource types on the performance and resource utilization of serverless functions.

Experimental Setup: Our study observes functions on OpenWhisk [17]. We make two changes to OpenWhisk: (1) we force all CPU limits to be hard limits, and (2) we decouple CPU and memory to explore configurations other than the fixed pairings provided by OpenWhisk. We deploy OpenWhisk on two bare-metal nodes in TACC's Chameleon cluster [14]. Each node contains 2 AMD EPYC 7763 CPUs operating at 2.45 GHz [1] and 251GB of memory. For performance predictability, we disable hyperthreading, as done in [13], resulting in 128 online cores per machine. We install Ubuntu LTS 18.04. One machine hosts the OpenWhisk Controller and CouchDB, while the other hosts the Invoker that runs the functions.

Workloads: We study 12 functions (see Table I) from the literature and from benchmark suites [7], [9], [15], covering scientific applications, data processing, and machine learning (ML) inference serving and training. We collect the execution time and memory/CPU utilization for several combinations of functions, input sizes, and CPU limits. We run each combination 8-10 times, for a total of ∼8K runs.

A. Impact of Function Inputs

We study two questions: (1) What impact does input size have on function performance?
(2) Do input properties other than size affect function performance and resource utilization?

Fig. 4: Execution time as a function of CPU limit for three of our serverless functions. The input size is fixed at its max value.

Fig. 5: CPU utilization compared to allocation for three of our serverless functions. The input size is fixed at its max value.

Observations: Figures 4a and 4c show that lr-train and resnet-50 can benefit from more cores (execution time decreases). matmult, compress, and linpack also exhibit these trends. However, lr-train shows that the gains of increasing CPU saturate: execution time does not improve beyond 8 cores. In fact, Figure 5a shows that utilization never surpasses 5 cores. lr-train uses scikit-learn's LogisticRegressionCV() with n_jobs=-1 to implement training. This setting asks for as many cores as possible. But since lr-train does not specify the number of cross-validation folds, the 5 default folds are in the loop for training, and thus at most 5 cores are fully utilized (see the sketch at the end of this subsection).

Fig. 6: (a) Current serverless platforms vs. (b) Lachesis. [Diagram: in current platforms, users submit (f, mem) to a Registrar and invoke (f, i) through a Dispatcher, and the Controller sizes worker-node containers from the static (cpu, mem) pair; in Lachesis, users submit f and invoke (f, i, slo), an Agent predicts (cpu, mem) per invocation, and per-worker Daemons report (util, perf) back to a Metadata store.]

Meanwhile, Figure 4b shows that image-process does not benefit from more cores, even though its performance is input-dependent, as explained in § III-A. Figure 5b shows that, regardless of the CPU allocation, utilization always hovers around 1 core. In fact, several of our functions are single-threaded: mobilenet, sentiment, encrypt, and speech2text.

Insights: Serverless platforms see a mix of single- and multi-threaded functions with potentially bounded parallelism. Adding resources may not always help. Hence, resource allocators should tailor their policies to suit the type of function.
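The lr-train behavior above can be reproduced with a few lines of scikit-learn; the synthetic dataset below is an illustrative stand-in for the function's actual training set.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=50_000, n_features=50, random_state=0)
# n_jobs=-1 requests every available core, but the parallel loop is over
# cross-validation folds; with the default cv=5, at most ~5 workers stay busy,
# which matches the utilization ceiling observed in Fig. 5a.
clf = LogisticRegressionCV(cv=5, n_jobs=-1).fit(X, y)

This kind of bounded, semantics-dependent parallelism is exactly what an input- and function-aware allocator can exploit: allocating beyond 5 cores here only creates idle cores.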
C. Impact of Coupled Resource Types

Existing allocation policies scale CPU in proportion to the user-specified memory size [5]. However, this assumes functions are both CPU- and memory-intensive. Here, we evaluate the accuracy of this assumption.

Observations: Figure 3 shows that video-process uses up to 50 cores, but its memory utilization is at most 41% (0.8GB). Thus, video-process (along with matmult, linpack, and lr-train) is compute-intensive. Conversely, we found sentiment to be memory-bound (100% memory utilization while using at most 1 core). Thus, different functions may utilize resource types in different proportions. Cloud providers can experience severe underutilization due to resource coupling. For example, providing enough memory to sentiment would lead to 50% underutilization of the allocated vCPUs. Meanwhile, allocating 50 vCPUs to video-process would require an 88GB memory allocation, resulting in ∼99% memory underutilization.

Insights: It is imperative that allocators decouple resources to improve utilization while meeting resource demands.

IV. LACHESIS DESIGN AND IMPLEMENTATION

We now present Lachesis, a system that makes fine-grained and decoupled resource allocations per invocation using an online learning agent. Figure 6a shows a simplified architecture of existing serverless systems. Figure 6b shows the changes we make to the existing workflow of serverless frameworks: Lachesis simplifies the user interface by removing the need for users to specify a static memory limit during function submission. Instead, users can simply provide an SLO per invocation. Given a function, an input, and an SLO, Lachesis aims to right-size the invocation by dynamically allocating the minimum amount of resources needed to meet the SLO.

Algorithm 1 Lachesis' logic using online learning
Input: fxn, in, slo
1: Determine cpu_lim: default, or ModelPredict(in, slo)
2: Launch fxn with the given in and the determined cpu_lim
3: Observe fxn's max_cpu and exec_time during the run
4: Use max_cpu and exec_time to ComputeCosts()
5: Update the online model: ModelUpdate(fxn, in, slo, costs)

Algorithm 1 summarizes Lachesis' logic. We focus on CPU allocation in this paper and leave memory allocation as future work. For an invocation, the online learner predicts the minimum number of cores to allocate. Lachesis uses a default value if the learner has not yet seen enough invocations of the given function. It then launches the invocation with the determined CPU limit and collects utilization and duration metrics as feedback. Finally, it uses the observed data to compute costs and updates the online learner after every invocation. Next, we describe the formulation of our online learning agent.

Prediction Target: As our goal is to meet user-specified SLOs with efficient use of resources, a natural prediction target is the minimum number of cores a function invocation needs to hit a target execution time.

Model Inputs: Our model's inputs are the serverless function, the user inputs, and an SLO. We built a function-and-input featurizer that automatically extracts features from functions and inputs. We extract function features that can potentially impact performance, such as the number of function calls, the libraries used, and loop sizes. Unlike for functions, we extract different features for different input types. For example, for images we extract the image's file size and resolution, whereas for a matrix we extract its size and density. We combine all this
We combine all thisdata to construct a vector for model updating and prediction.Feedback : On each worker machine, we deploy a daemon thatcaptures the maximum CPU utilization over the invocation’sruntime. This data is used by our cost function to update ourmodel’s weights online.Learning Algorithm : We approach predicting core count asa supervised learning problem, which can be solved withregression or classification. We opt to not use regressorsbecause of the difficulty in formulating a cost function to dif-ferentiate between underpredictions and overpredictions uponan SLO violation. Instead, we use cost-sensitive multi-classclassification to make predictions. Each class (core count) hasits own linear regressor that predicts the class’s cost for aninvocation. We select the class with the lowest cost as theallocation. Now, we can differentiate costs for different classeswithout worrying about the relationship between them.Cost Function : Our cost function is rather intuitive. First,we determine the class to assign the lowest cost of one to.There are three cases. (1) If an invocation’s SLO is met, themax cpu (i.e., the maximum number of cores used by theinvocation) class is given the lowest cost. Hence, if allocatedresources are not efficiently utilized, our agent can learn tomake smaller allocations for similar future invocations. (2) Ifan invocation’s SLO is met and all assigned cores are used, wemay assign a class lower than max cputhe lowest cost. Thisclass is determined based on the slack between the invocation’sexecution time and SLO. In doing so, we inform our onlinelearner that fewer cores may also satisfy this invocation’s SLO.(3) Upon an SLO violation, we assign a class greater thanmax cpu (at most +10) the lowest cost in an attempt tomeet the SLO in the next invocation. Similar to case (2), theslack determines this class. After determining the lowest costclass, the costs of the remaining classes grow linearly, withunderpredictions being penalized further by a hyperparameter.Implementation : We implement Algorithm 1 as a shim layerthat can run on top of any serverless platform. This layer runson the same node as our dispatcher. We use Apache Open-Whisk [17] as our base serverless platform and implement ouronline learning agent using V owpal Wabbit [3], a library withan efficient online implementation of the cost-sensitive multi-class classification algorithm. On each Invoker, we launch ametric aggregation daemon that collects utilization and runtimemetrics per invocation and persists the data in a Metadatastore for the shim layer to use when updating its models.Why Online Learning : The fundamental limitation of existingpublic serverless platforms [5], [10] is their inability to right-size containers dynamically based on inputs. Meanwhile, forCypress to achieve high utilization, arrival patterns need to befrequent enough to pack invocations in one container withinthe window of an SLO [7]. Hence, Cypress is susceptible tosevere resource underutilization with sparse resource arrivalpatterns. Finally, as shown in § III, it is infeasible to use heuris-tics to predict optimal resource allocation because of variationin function behaviors depending on function semantics andinput types/properties. This prompts our use of online learning,enabling Lachesis to dynamically right-size containers andadapt to changes in function and input distribution over time.V. E VALUATIONWe aim to show Lachesis’ efficacy in allocating resourcesper invocation. 
V. EVALUATION

We aim to show Lachesis' efficacy in allocating resources per invocation. Specifically, we evaluate the impact of per-invocation allocations on the number of SLO violations, resource utilization, and user cost.

A. Methodology

Baselines: We compare Lachesis to three baselines on OpenWhisk (ow) that users might choose when providing resource needs to existing serverless platforms. Users may ask for the maximum, median, or minimum amount of resources for all their invocations. These correspond to our ow-large (64 cores), ow-medium (32 cores), and ow-small (1 core) baselines.

Workloads: We evaluate Lachesis with three serverless functions from Table I: image-process, matmult, and resnet-50. While image-process is single-threaded, both matmult and resnet-50 are multi-threaded, showing the robustness of our system to both types of functions. For each function, we run over 100 invocations, with over 60 different inputs for image-process and 20 each for resnet-50 and matmult. The trace of invocations is the same on Lachesis and our three baselines.

Fig. 7: Difference in (a) SLO violation percentage and (b) idle cores between Lachesis and our baselines for three functions.

Evaluation Metrics: Lachesis aims to meet an invocation's SLO using the minimum number of cores it can. Hence, we are interested in two metrics: a function's SLO violation ratio and CPU utilization. (1) Each function input has its own SLO (max execution time). We determined this value by profiling each input with different allocations and extracting the best execution time we could achieve. We then increased this value by 10% and considered this the input's SLO. A function's SLO violation ratio is the ratio of SLO violations to the number of invocations. (2) We report CPU utilization as the number of idle allocated cores. This is because 50% underutilization when using 1 of 2 allocated cores is not as severe as when using only 16 of 32.

B. Results

We compare the SLO violations (Figure 7a) and CPU utilization (Figure 7b) between Lachesis and the three baselines. Our baselines display an inherent tradeoff between meeting invocation SLOs and achieving optimal CPU utilization. While ow-large meets all SLOs, resource utilization is poor, as most inputs do not require an allocation of 64 cores. Meanwhile, ow-small is unable to meet any of the SLOs (100% violation) for our multi-threaded functions (matmult, resnet-50), but achieves perfect CPU utilization because every function uses at least 1 core. The ow-medium baseline allocates 32 cores to all invocations. While 32 cores are enough for many invocations, there are still plenty of inputs that require more than 32 cores to meet the SLO. Lachesis dynamically learns the minimum core count required to meet the SLO, thereby reducing the number of idle allocated cores while decreasing the number of SLO violations compared to ow-medium. This translates into a significant impact on user cost, as for resnet-50 alone Lachesis reduces cost by 63% over 100 invocations.

Figure 8 shows Lachesis' number of idle cores and SLO violations over the course of 100 invocations of resnet-50 with various inputs. It takes 28 invocations for Lachesis to stabilize and learn the minimum number of cores required for different inputs. For the remaining invocations, the number of idle cores is less than 8, except for one spike at invocation 47. Interestingly, throughout the course of the 100 invocations, there continue to be periodic SLO violations. We noticed that these violations are all for the same input, which had an unrealistic SLO.
For each invocation of this input, Lachesis would allocate more cores in an attempt to meet the SLO; however, even with the maximum of 64 cores, the SLO was never met.

Fig. 8: A timeline view of Lachesis' number of unused cores (blue) and SLO violations (green) over 100 invocations of resnet-50.

VI. CONCLUSION

For ease-of-use and resource efficiency of serverless platforms, our analysis motivates that resource allocation should be fine-grained per invocation and per resource type, to account for various input properties. We present Lachesis, which uses an online learner to predict the number of cores required to meet an invocation's SLO, and show its efficacy in improving performance, resource utilization, and user cost.

Future Work: Lachesis paves the path for the following next steps: (1) Currently, Lachesis creates one online agent per function due to the variable number of features extracted from different input types (e.g., video, audio). We plan to standardize features to enable a single agent to allocate resources for all functions. (2) Lachesis decouples resource types, but currently focuses only on making CPU allocations. We will augment Lachesis by allocating memory per invocation as well. (3) While per-invocation allocations help, as we demonstrated in this paper, customized allocations per invocation also increase the number of containers used per function, thereby increasing the number of cold starts. Cold starts often worsen function performance. We will design a scheduler that closely interacts with our resource allocator to strike the right trade-off between improved utilization due to fine-grained allocations and the resulting cold starts.

REFERENCES
[1] "Introducing compute optimized VMs powered by AMD EPYC processors," https://cloud.google.com/blog/products/compute/introducing-compute-optimized-vms-on-amd-epyc-milan.
[2] "The state of Kubernetes report: Overprovisioning in real-life containerized applications," https://cast.ai/the-state-of-kubernetes-overprovisioning/.
[3] "Vowpal Wabbit," https://vowpalwabbit.org/index.html.
[4] O. Alipourfard, H. H. Liu, J. Chen, S. Venkataraman, M. Yu, and M. Zhang, "CherryPick: Adaptively unearthing the best cloud configurations for big data analytics," in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17). Boston, MA: USENIX Association, Mar. 2017, pp. 469–482. [Online]. Available: https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/alipourfard
[5] "AWS Lambda," https://aws.amazon.com/lambda/.
[6] "Serverless applications lens: AWS Well-Architected Framework," https://d1.awsstatic.com/whitepapers/architecture/AWS-Serverless-Applications-Lens.pdf.
[7] V. M. Bhasi, J. R. Gunasekaran, A. Sharma, M. T. Kandemir, and C. Das, "Cypress: Input size-sensitive container provisioning and request scheduling for serverless platforms," in Proceedings of the 13th Symposium on Cloud Computing, ser. SoCC '22. New York, NY, USA: Association for Computing Machinery, 2022, pp. 257–272. [Online]. Available: https://doi.org/10.1145/3542929.3563464
[8] M. Bilal, M. Canini, R. Fonseca, and R. Rodrigues, "With great freedom comes great opportunity: Rethinking resource allocation for serverless functions," 2021. [Online]. Available: https://arxiv.org/abs/2105.14845
[9] M. Copik, G. Kwasniewski, M. Besta, M. Podstawski, and T. Hoefler, "SeBS: A serverless benchmark suite for function-as-a-service computing," in Proceedings of the 22nd International Middleware Conference, ser. Middleware '21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 64–78. [Online]. Available: https://doi.org/10.1145/3464298.3476133
[10] "Google Cloud Functions," https://cloud.google.com/functions/.
[11] Z. Guo, Z. Blanco, M. Shahrad, Z. Wei, B. Dong, J. Li, I. Pota, H. Xu, and Y. Zhang, "Decomposing and executing serverless applications as resource graphs," 2022. [Online]. Available: https://arxiv.org/abs/2206.13444
[12] E. Jonas, J. Schleier-Smith, V. Sreekanti, C.-C. Tsai, A. Khandelwal, Q. Pu, V. Shankar, J. Menezes Carreira, K. Krauth, N. Yadwadkar, J. Gonzalez, R. A. Popa, I. Stoica, and D. A. Patterson, "Cloud programming simplified: A Berkeley view on serverless computing," EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2019-3, Feb. 2019. [Online]. Available: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-3.html
[13] K. Kaffes, N. J. Yadwadkar, and C. Kozyrakis, "Practical scheduling for real-world serverless computing," November 2021.
[14] K. Keahey, J. Anderson, Z. Zhen, P. Riteau, P. Ruth, D. Stanzione, M. Cevik, J. Colleran, H. S. Gunawi, C. Hammock, J. Mambretti, A. Barnes, F. Halbach, A. Rocha, and J. Stubbs, "Lessons learned from the Chameleon testbed," in Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC '20). USENIX Association, July 2020.
[15] J. Kim and K. Lee, "FunctionBench: A suite of workloads for serverless cloud function service," in 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), 2019, pp. 502–504.
[16] "Microsoft Azure Functions," https://azure.microsoft.com/en-us/services/functions/.
[17] "Apache OpenWhisk," https://openwhisk.apache.org/.
[18] S. Venkataraman, Z. Yang, M. Franklin, B. Recht, and I. Stoica, "Ernest: Efficient performance prediction for large-scale advanced analytics," in 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16). Santa Clara, CA: USENIX Association, 2016, pp. 363–378. [Online]. Available: https://www.usenix.org/conference/nsdi16/technical-sessions/presentation/venkataraman
[19] L. Wang, M. Li, Y. Zhang, T. Ristenpart, and M. Swift, "Peeking behind the curtains of serverless platforms," in 2018 USENIX Annual Technical Conference (USENIX ATC 18). Boston, MA: USENIX Association, Jul. 2018, pp. 133–146. [Online]. Available: https://www.usenix.org/conference/atc18/presentation/wang-liang
[20] N. J. Yadwadkar, B. Hariharan, J. E. Gonzalez, B. Smith, and R. H. Katz, "Selecting the best VM across multiple public clouds: A data-driven performance modeling approach," in Proceedings of the 2017 Symposium on Cloud Computing, ser. SoCC '17. New York, NY, USA: ACM, 2017, pp. 452–465. [Online]. Available: http://doi.acm.org/10.1145/3127479.3131614
ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

Sample-Efficient Mapspace Optimization for DNN Accelerators with Bayesian Learning

Grace Dinh∗, Iniyaal Kannan∗, Hengrui Luo†, Charles Hong∗, Younghyun Cho∗, James Demmel∗, Xiaoye Sherry Li†, Yang Liu†
∗UC Berkeley. dinh,iniyaalkannan,charleshong,younghyun,demmel@berkeley.edu
†Lawrence Berkeley National Lab. hrluo,xsli,liuyangzhuan@lbl.gov

Abstract—Achieving high performance for machine learning domain-specific accelerators requires the careful choice of a mapping from the algorithm to the accelerator. Most algorithms for finding mappings either optimize over a coarse performance model or experimentally evaluate the performance of a large number of different mappings on the space. However, the number of samples required by these empirical models can be prohibitive in settings (e.g. when using cycle-accurate simulators) where evaluations are expensive.

This paper evaluates the use of Bayesian optimization based approaches for finding mappings for hardware accelerators in settings where high sample efficiency is required. Our approaches converge to mappings comparable to those of Timeloop's mapper while requiring an order of magnitude fewer iterations. Furthermore, our method produces surrogate models that can be used for transfer learning to new hardware configurations, further reducing the sample complexity by roughly a factor of 2.

I. INTRODUCTION

Domain-specific hardware accelerators have become increasingly important in enabling efficient, performant execution of linear algebra and machine learning applications. However, attaining good performance on accelerators requires a careful choice of a mapping describing how the algorithm is to be executed on the target accelerator. These mappings, which encompass choices such as tiling dimensions, loop ordering, and spatio-temporal mappings (i.e. deciding which axes to map to accelerator parallelism), can affect performance by up to four orders of magnitude [17].

However, the space of possible mappings (mapspace) is challenging to search, as the number of choices that comprise a mapping leads to a combinatorial explosion in the number of possible mappings. Furthermore, this space is highly nonconvex and changes significantly with the target architecture; as a result, it is desirable for mapping methods to be efficiently generalizable across a wide variety of hardware parameters and architectures, especially in settings such as hardware-software codesign where efficient mappings must be computed for a wide variety of both algorithmic and hardware targets.

In order to handle the complexity of searching over a high-dimensional mapspace, many approaches rely on performance models that are either mathematically simple enough to be optimized over [10], [37] or cheap enough to be queried thousands or tens of thousands of times [25] in brute-force approaches. However, as we show in Section II, these models often diverge from actual performance significantly, limiting their effectiveness.

Alternatively, feedback-driven approaches, which iteratively search over the mapspace using black-box search (e.g. genetic algorithms or reinforcement learning) or gradient descent, can be guided not only by performance models but also by measured or simulated performance. However, many previous methods are extremely sample-inefficient, sometimes requiring millions of samples to train models [8], and have difficulty generalizing to new hardware targets or problem dimensions [13].
This poses unique challenges for the case of hardware design-space exploration (DSE), where mappings must be computed for a variety of hardware targets.

This paper explores the use of Bayesian optimization to find performant mappings in a sample-efficient manner, while constructing surrogate functions for performance that can be efficiently queried and optimized. We then use these surrogate functions to perform transfer learning across different hardware configurations, showing that, in contrast to previous gradient-based approaches [8], [13] and reinforcement learning approaches [12], our approach can generalize to hardware configurations not in its training set. Our approach also provides automatic problem-specific sensitivity analysis for mapspace parameters during optimization.

II. BACKGROUND

Algorithms for finding performant mappings generally fall into one of three categories:

• Heuristics perform one-shot analytic optimizations over a performance model (either defined explicitly to be used as an optimization target, or embedded implicitly in the heuristic). Such methods include polyhedral models [1], [7], [18] and constrained-optimization based approaches [10], [36]. While heuristics are efficient and generalize easily across hardware parameters and problem sizes, they are often limited to optimizing a subset of the mapspace (e.g. tilings and reorderings only, as in [24]), and rely on an analytic model which can only coarsely approximate performance.

• Random search methods [5], [25], [35] use brute force to sample and evaluate a large number of points in the mapspace. However, the size of the mapspace (well over 10^20 points [26]) necessitates a large number of samples to achieve good performance.

• Feedback-driven methods use statistical or ML methods to iteratively explore the mapspace. These approaches include black-box optimization techniques such as genetic algorithms [12], [14], reinforcement learning [34], and Bayesian optimization [29], which aim to require fewer samples than brute-force methods. Alternatively, gradient-based methods [8] build differentiable surrogate functions that estimate performance based on input parameters. However, training gradient-based methods requires millions of samples to build surrogate models, and the resulting surrogates cannot easily be used for hardware architectures not yet seen, even after fine-tuning, and must be fully retrained from scratch at nontrivial expense [13].

The aforementioned methods tend to be heavily reliant on performance models to attain good results. This reliance is explicit in the case of heuristic-based approaches that optimize objectives that model performance. On the other hand, the reliance on efficient models is implicit for random search and many feedback-driven methods: their reliance on thousands or even millions of performance samples renders them impractical for use when the cost of performance evaluation is high, e.g. when using cycle-accurate RTL emulators such as FireSim [15], which can take several minutes to run a single neural net layer on a standard AWS FPGA. This is further exacerbated in the hardware DSE setting, where performance evaluation must be performed not only for each mapping but also for each potential hardware backend. As a result, these methods are often run using fast analytic models such as Timeloop [25] and Maestro [19] instead.

However, such analytic models can diverge from actual performance significantly [33].
To see the significance of this difference, we generated 2000 randomly chosen mappings for a variety of convolution and matrix multiplication problems. We then evaluated the cost of these mappings, first by executing the code corresponding to these mappings on the GEMMINI [6] DNN accelerator running on the cycle-accurate FireSim [15] RTL emulation platform, then by using an inexpensive analytic performance model, Timeloop [25]. Hardware parameters (memory bandwidth/sizes, systolic array dimensions, etc.) were identical for both targets. Figure 1 shows a log-log scatter plot of the ratios of the cycle counts generated by Timeloop and FireSim, which can differ by up to two orders of magnitude.

Fig. 1. Timeloop vs. FireSim cycle counts for randomly sampled mappings (log-log scatter; reference lines at Timeloop = 10×, 1×, 0.1×, and 0.01× FireSim).

As a result, we wish to develop feedback-driven methods for finding performant mappings with high sample efficiency. Furthermore, we would like our approach to generalize inexpensively to new hardware configurations. Our approach is to use Bayesian optimization, which has been shown to be effective for optimizing complex functions with a limited number of evaluations due to its faster convergence and ability to handle multiple parameters. One of its key strengths is the construction of surrogate models using Gaussian processes, which are powerful tools for modeling complex interactions and potentially noisy functions. Bayesian approaches have been used to perform black-box optimization in domains where sample efficiency is paramount, including algorithm optimizations on supercomputers [4], [21] and optimizing hardware parameters for accelerators [23], [27], [34]. We apply similar techniques to optimize over software mapspaces for accelerators, improving on previous attempts to do so [30] by using an efficient encoding scheme to embed mapping parameters into a mathematical space that can be more easily searched.

This paper considers mapspaces consisting of tile sizes and loop orderings (dataflows) for multilevel memory hierarchies. Our techniques directly generalize to mapspaces including spatio-temporal mappings as well; we leave benchmarking those to future work.

A. Mapspace Encoders

Bayesian optimization assumes an objective function (in this case, a performance metric such as latency, cycle count, or energy) that takes as input a set of numerical, usually continuous, variables. However, decisions that comprise a point in the mapspace, such as loop orderings, are discrete. These discrete variables fall into one of two categories.

Discrete numerical variables, such as tile sizes, are mostly integral. However, depending on the hardware target, their discreteness may take the form of a requirement to be a multiple (e.g. of the size of a vector unit) or factor (e.g. of the problem size, to allow for perfectly nested loops without tail cases) of some integer. We represent such variables as continuous variables in the optimization program and round them in order to find the actual mapping parameters; such rounding approaches have experimentally been shown to match or exceed discrete surrogate-based approaches [16].

Categorical variables, such as loop orderings, are members of a finite, unordered set. These variables can be dealt with in one of two ways:
• by directly applying surrogate-based optimization approaches to them. Handling categorical variables in optimization, especially in Bayesian optimization, poses unique challenges due to their discrete and unordered nature, especially in domains comprised of both continuous and categorical variables. Several approaches for integrating continuous and categorical optimization methods have been studied, including one-hot encoding [31], bandit models [28], and hybrid Monte Carlo tree search [21]. However, the combinatorial complexity of the mapping problem complicates such approaches; for instance, a batched convolution with seven nested loops has 7! = 5040 possible loop orderings per memory level, and general categorical optimization methods are unable to take into account problem-specific information that could guide search over this space (for instance, that performance is likely to be changed less by swapping the order of two loops than by reversing the entire loop nest).

• by creating a mapspace-specific encoding from continuous variables. For example, for loop orderings, we optimize over scores for each axis and order the axes from lowest to highest score. A similar approach, as in [20], can be used for spatio-temporal mappings.

Dealing with hardware constraints. The set of valid mappings is bound by a set of constraints, which we will address in this section by developing a mapping from an unconstrained feature space f = (0,1]^d (for some integer d), which can be easily optimized over, to the set of valid mappings. We will denote the axes of this feature space as f_i.

Many constraints are simple constant bounds on the numeric variables (for instance, ensuring that tile sizes must be smaller than the sizes of the data tensors) and can be dealt with by scaling the variables appropriately. For instance, instead of optimizing a loop tile size t under the constraint that it is bounded above by the size s of the input problem, we can instead optimize a value f ∈ (0,1] and set t = sf.

However, other constraints may result in more complicated inequalities. For example, consider a 2D convolution with b batches, c input channels, k output channels, windows of size r × s, and outputs of size w × h. If we wish to tile these axes in such a way that the tiled inputs and weights can fit into a scratchpad of size M, the tile sizes t_b, t_c, ... must satisfy the following constraint:

    t_c t_k t_r t_s + t_b t_c (t_w + t_r)(t_h + t_s) ≤ M    (1)

Rejection sampling is often used to handle such constraints, but has two drawbacks. First, setting an objective value to assign to invalid mappings is a nontrivial hyperparameter optimization problem; an overly high value can cause unwanted behavior in a learned surrogate function (leading to unpredictable behavior when, for instance, doing transfer learning), while an overly low value may not be enough to discourage the optimizer from considering invalid maps. Furthermore, the rejection probability can be high (increasingly so as the dimensionality increases), significantly driving up the number of iterations required. In fact, prior work [30] requires sampling 22K points in order to produce 150 valid mappings, which drastically increases the cost of this approach.

As a result, our goal is to develop a mapping from every point of f to a valid point in the mapspace. Since all nontrivial constraints in the mapspace take the form of capacity constraints similar to that of (1) over the tile sizes [10], we instead optimize the aspect ratio of the tiles, and then scale all the tile sizes by the same factor to maximize memory utilization.
We believe this approach also improves the ability of the learned model to generalize across problem sizes, as communication-optimal tiles for many problems such as matrix multiplication [11] retain the same aspect ratio (square tiles) as long as problem sizes are sufficiently large.

More concretely, consider the memory constraint given in (1). Instead of directly optimizing over the tile sizes t_b, t_c, ..., we optimize the variables f_b, f_c, ... ∈ (0,1], which we scale by a common multiplier α to obtain

    t_{b,c,...} ≈ α f_{b,c,...}    (2)

In order to determine the value of α, notice that substituting (2) into (1) gives the following inequality:

    α^4 [ f_c f_k f_r f_s + f_b f_c (f_w + f_r)(f_h + f_s) ] ≤ M    (3)

As there is no reason not to maximize memory utilization, we replace the inequality with equality, which therefore provides the value of α:

    α = ( M / ( f_c f_k f_r f_s + f_b f_c (f_w + f_r)(f_h + f_s) ) )^{1/4}

We can then round the resulting values of α f_{b,c,...} down to the nearest valid value (to satisfy discreteness and maximum tile size constraints) of t_{b,c,...}, ensuring that each point f⃗ ∈ (0,1]^d corresponds to a valid mapping.
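The following is a minimal Python sketch of this encoder for the convolution constraint (1). The α computation and round-down rule follow the derivation above; the score-based loop-ordering helper is an illustrative reading of the encoding described in Subsection II-A, not the paper's exact implementation.

import numpy as np

AXES = ["b", "c", "k", "r", "s", "w", "h"]  # conv axes from constraint (1)

def decode(f_tiles, f_scores, sizes, M):
    """Map features in (0,1] to valid tile sizes and a loop order.

    f_tiles / f_scores are dicts over AXES; sizes holds the problem
    dimensions and M the scratchpad capacity. Illustrative sketch only.
    """
    f = f_tiles
    # Equation (3): choose alpha so the scaled tiles exactly fill the scratchpad.
    cap = (f["c"] * f["k"] * f["r"] * f["s"]
           + f["b"] * f["c"] * (f["w"] + f["r"]) * (f["h"] + f["s"]))
    alpha = (M / cap) ** 0.25
    # Round alpha * f down to the nearest valid value: integral, at least 1,
    # and no larger than the problem dimension.
    tiles = {a: max(1, min(int(alpha * f[a]), sizes[a])) for a in AXES}
    # Score-based encoding of loop order: sort axes from lowest to highest score.
    order = sorted(AXES, key=lambda a: f_scores[a])
    return tiles, order

# Example: decode a random feature point for a small convolution.
rng = np.random.default_rng(0)
sizes = {"b": 4, "c": 64, "k": 64, "r": 3, "s": 3, "w": 56, "h": 56}
f_tiles = {a: rng.uniform(1e-3, 1.0) for a in AXES}
f_scores = {a: rng.uniform(0.0, 1.0) for a in AXES}
print(decode(f_tiles, f_scores, sizes, M=16384))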
III. EVALUATION

For our experiments, we optimize for energy cost on a hardware model based on GEMMINI [6] with a four-level memory hierarchy: a register, an accumulator for the outputs, a scratchpad for the inputs and weights, and DRAM. We test our mappings on Timeloop [25], which takes as input an algorithm and a hardware configuration and provides (1) an analytic performance model for energy and latency and (2) a pruned random-search based mapper. While our approach is designed to target hardware models with far higher per-sample cost than Timeloop's model, we use Timeloop in order to allow for the use of its random-search mappers (which would be infeasibly expensive if run with a cycle-accurate simulator) as a comparison target. We leave benchmarking on an (expensive) cycle-accurate simulator and comparing performance to model-based (brute-force and heuristic) mappers to future work.

To perform Bayesian optimization, we use GPTune [3], an autotuning suite designed for optimizing applications by utilizing Bayesian approaches. GPTune incorporates multi-task learning and transfer learning algorithms to share knowledge of obtained performance samples among multiple tasks, improving tuning results. It enables quick prediction of optimal tuning parameters for new tasks using data from existing tasks. Additionally, GPTune supports multi-objective tuning, hybrid models [21] for mixed categorical and continuous variables, and non-smooth objective tuning [22].

In order to reduce statistical variance, all experiments were averaged over three independent runs.

Fig. 2. Energy (lower is better) attained by Timeloop's brute-force mapper and GPTune for matrix multiplication (left) and 2D convolution (right). GPTune was run for 100 iterations; the best value found is indicated with a dotted line extending to the right.

A. Convergence

Figure 2 shows the energy consumption of the best mapping found so far at each iteration, comparing Timeloop and GPTune (we run 100 iterations for GPTune).

For GPTune, we show results for both approaches to optimizing over categorical variables (in this case, loop orderings) described in Subsection II-A. We note that directly embedding loop orderings into the problem produces superior results to the score-based approach for matrix multiplication but inferior results for convolutions, likely because of the higher dimensionality of convolutions compared to matmuls: a 3-nested-loop matmul leads to roughly (3!)^4 = 1296 choices of loop orderings over the four levels of the memory hierarchy, while the 7-nested-loop convolution results in roughly (7!)^4 ≈ 6×10^14 choices. This suggests that categorical encodings work well for relatively low-dimensional problems, whereas score-based encodings are better for higher-dimensional problems.

For matrix multiplication, GPTune converges in roughly 20 runs to 2.98 pJ/compute, a value 16% better than the 3.56 pJ/compute that Timeloop achieves after 4000 runs (note that Timeloop plateaus after an average of 420 iterations).

For 2D convolutions, GPTune converges in (on average) 50 iterations to a minimum of 5.26 pJ/compute, a value that it took an average of 627 iterations for Timeloop to beat. Furthermore, after 4000 iterations, Timeloop's best value was 4.32 pJ/compute, roughly 17% better than GPTune's.

B. Transfer Learning

In many settings, such as hardware DSE, the ability to leverage data collected on one or more hardware configurations to guide search on a hitherto unseen hardware configuration can prove useful. However, support for transfer learning across hardware configurations has proven limited so far. Random search and many black-box optimization algorithms, such as genetic algorithms (e.g. GAMMA [12]), do not support transfer learning and must be run from scratch for every hardware target and algorithm. Attempts to apply the differentiable surrogate models found by Mind Mappings [8] to hardware architectures not in the training set resulted in performance one to two orders of magnitude worse than running Timeloop's random mapper and GAMMA from scratch.

Fig. 3. Transfer learning to a new (not in training set) hardware configuration for matrix multiplication, compared to GPTune with no prior knowledge and Timeloop.

Previous Bayesian-optimization-based approaches to mapspace search [30] have not considered transfer learning.

The Gaussian process surrogate models produced by GPTune possess the capability to facilitate transfer learning. We first train a surrogate model taking into consideration both task parameters (i.e., tensor dimensions) and hardware parameters (i.e., memory hierarchy specifications), utilizing GPTune's multitask learning algorithm for four distinct memory hierarchy configurations. Subsequently, we refine this model for 20 iterations, employing the target hardware configuration that was absent from the initial training set.

Figure 3 shows the transfer learning, which converges to a mapping providing 3.81 pJ/compute (on par with an uninitialized GPTune) in 10 iterations (roughly half that of an uninitialized GPTune). This figure requires Timeloop an average of 1600 iterations to beat.

C. Sensitivity Analysis

The surrogate models can be used for sensitivity analysis as well, by applying Sobol analysis [32] to determine the part of the variance of the output that can be attributed to each of the inputs. We leverage GPTune's sensitivity analysis interface, which internally invokes SALib [9] for computing Sobol indices from the trained surrogate model.
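As a hedged illustration of this Sobol workflow (independent of GPTune's internal interface), the sketch below runs SALib on a toy stand-in for the learned surrogate; toy_surrogate and its parameter names are ours, not the paper's.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["tile_k", "tile_j", "order_score"],
    "bounds": [[0.0, 1.0]] * 3,
}

def toy_surrogate(x):
    # Illustrative: energy dominated by tiling parameters, weakly by ordering.
    return 10.0 * x[0] ** 2 + 5.0 * x[1] + 0.1 * np.sin(6.28 * x[2])

X = saltelli.sample(problem, 1024)            # Sobol sample of the input space
Y = np.apply_along_axis(toy_surrogate, 1, X)  # evaluate the surrogate
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["ST"])))  # total-order indices per input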
For the 64×512×128 matrix multiplication example shown in Figure 2, the most important axes were the tilings of k at the register and accumulator levels, and the tiling of j at the register level; the surrogate model was several orders of magnitude more sensitive to the tiling parameters than the loop ordering ones, which lines up with previous work [13], [35] showing that tilings are the most important mapping parameter.

For high-dimensional problems such as convolutions, we believe this surrogate model may be used to perform automated dimension reduction, perhaps even during the optimization process itself; we leave this to future work.

IV. CONCLUSION, DISCUSSION, AND FUTURE WORK

This paper demonstrates the feasibility of using Bayesian optimization to perform mapspace search using very few (under 100) samples, which can be reduced even further by using transfer learning from data collected for other hardware configurations. Key to our approach is the construction of an encoding scheme that ensures that every point in the search space given to the Bayesian algorithm corresponds to a valid mapping. The clearest application of this approach is to settings where the cost of measuring performance data is expensive, such as running code on cycle-accurate simulators, especially in the context of hardware design-space exploration. The construction of the compiler infrastructure required to test the performance of our approaches on these simulators is currently ongoing.

It may also be interesting to extend the Bayesian multitask learning approaches for transfer learning across hardware configurations, as seen in this paper, to transfer learning across different hardware simulation fidelities. Using a large number of samples from a cheap but coarse model to guide search over a much more expensive space may allow for exploration of a large portion of the mapspace without wholly relying on the accuracy of performance models.

In a similar vein, we wish to experiment with the use of analytic one-shot models, such as CoSA [10] and theoretically optimal tiling methods based on Brascamp-Lieb inequalities [2], [24]. While these methods are reliant on analytic models for performance and may not be optimal in practice on real hardware, they can provide cheap initial data points that may help to significantly accelerate search.

We are also investigating more sophisticated approaches for high-dimensional Bayesian optimization that combine multiple techniques tailored to various application domains.

REFERENCES
[1] A. Acharya, U. Bondhugula, and A. Cohen, "An approach for finding permutations quickly: Fusion and dimension matching," arXiv preprint arXiv:1803.10726, 2018.
[2] A. Chen, J. Demmel, G. Dinh, M. Haberle, and O. Holtz, "Communication bounds for convolutional neural networks," in Proceedings of the Platform for Advanced Scientific Computing Conference, ser. PASC '22. New York, NY, USA: Association for Computing Machinery, 2022. [Online]. Available: https://doi.org/10.1145/3539781.3539784
[3] Y. Cho, J. W. Demmel, G. Dinh, X. S. Li, Y. Liu, H. Luo, O. Marques, and W. M. Sid-Lakhdar, "GPTune user guide," 2022. [Online]. Available: https://github.com/gptune/GPTune/tree/master/Doc
[4] Y. Cho, J. W. Demmel, J. King, X. S. Li, Y. Liu, and H. Luo, "Harnessing the crowd for autotuning high-performance computing applications," in The 37th IEEE International Parallel and Distributed Processing Symposium (IPDPS23). IEEE, 2023, pp. 1–12.
[5] S. Dave, Y. Kim, S. Avancha, K. Lee, and A. Shrivastava, "DMazeRunner: Executing perfectly nested loops on dataflow accelerators," ACM Transactions on Embedded Computing Systems, vol. 18, no. 5s, pp. 1–27, Oct. 2019. [Online]. Available: https://doi.org/10.1145/3358198
[6] H. Genc, S. Kim, A. Amid, A. Haj-Ali, V. Iyer, P. Prakash, J. Zhao, D. Grubb, H. Liew, H. Mao, A. Ou, C. Schmidt, S. Steffl, J. Wright, I. Stoica, J. Ragan-Kelley, K. Asanovic, B. Nikolic, and Y. S. Shao, "Gemmini: Enabling systematic deep-learning architecture evaluation via full-stack integration," in Proceedings of the 58th Annual Design Automation Conference (DAC), 2021.
[7] T. Grosser, H. Zheng, R. Aloor, A. Simbürger, A. Größlinger, and L.-N. Pouchet, "Polly - polyhedral optimization in LLVM," in Proceedings of the First International Workshop on Polyhedral Compilation Techniques (IMPACT), 2011.
[8] K. Hegde, P.-A. Tsai, S. Huang, V. Chandra, A. Parashar, and C. W. Fletcher, "Mind mappings: Enabling efficient algorithm-accelerator mapping space search," in Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, Apr. 2021. [Online]. Available: https://doi.org/10.1145/3445814.3446762
[9] J. Herman and W. Usher, "SALib: An open-source Python library for sensitivity analysis," Journal of Open Source Software, vol. 2, no. 9, p. 97, 2017. [Online]. Available: https://doi.org/10.21105/joss.00097
[10] Q. Huang, M. Kang, G. Dinh, T. Norell, A. Kalaiah, J. Demmel, J. Wawrzynek, and Y. S. Shao, "CoSA: Scheduling by constrained optimization for spatial accelerators," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021, pp. 554–566.
[11] D. Irony, S. Toledo, and A. Tiskin, "Communication lower bounds for distributed-memory matrix multiplication," Journal of Parallel and Distributed Computing, vol. 64, no. 9, pp. 1017–1026, 2004. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0743731504000437
[12] S.-C. Kao and T. Krishna, "GAMMA: Automating the HW mapping of DNN models on accelerators via genetic algorithm," in 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), 2020, pp. 1–9.
[13] S.-C. Kao, A. Parashar, P.-A. Tsai, and T. Krishna, "Demystifying map space exploration for NPUs," in 2022 IEEE International Symposium on Workload Characterization (IISWC). IEEE, Nov. 2022. [Online]. Available: https://doi.org/10.1109/iiswc55918.2022.00031
[14] S.-C. Kao, M. Pellauer, A. Parashar, and T. Krishna, "DiGamma: Domain-aware genetic algorithm for HW-mapping co-optimization for DNN accelerators," in Proceedings of the 2022 Conference and Exhibition on Design, Automation and Test in Europe, ser. DATE '22. Leuven, BEL: European Design and Automation Association, 2022, pp. 232–237.
[15] S. Karandikar, H. Mao, D. Kim, D. Biancolin, A. Amid, D. Lee, N. Pemberton, E. Amaro, C. Schmidt, A. Chopra, Q. Huang, K. Kovacs, B. Nikolic, R. Katz, J. Bachrach, and K. Asanovic, "FireSim: FPGA-accelerated cycle-exact scale-out system simulation in the public cloud," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), 2018, pp. 29–42.
[16] R. Karlsson, L. Bliek, S. Verwer, and M. de Weerdt, "Continuous surrogate-based optimization algorithms are well-suited for expensive discrete problems," in Communications in Computer and Information Science. Springer International Publishing, 2021, pp. 48–63. [Online]. Available: https://doi.org/10.1007/978-3-030-76640-5_4
[17] S. Kim, C. Hooper, T. Wattanawong, M. Kang, R. Yan, H. Genc, G. Dinh, Q. Huang, K. Keutzer, M. W. Mahoney, Y. S. Shao, and A. Gholami, "Full stack optimization of transformer inference: A survey," Feb. 2023. [Online]. Available: https://arxiv.org/pdf/2302.14017.pdf
[18] M. Kong, R. Veras, K. Stock, F. Franchetti, L.-N. Pouchet, and P. Sadayappan, "When polyhedral transformations meet SIMD code generation," in Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2013.
[19] H. Kwon, P. Chatarasi, M. Pellauer, A. Parashar, V. Sarkar, and T. Krishna, "Understanding reuse, performance, and hardware cost of DNN dataflow: A data-centric approach," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO '52. New York, NY, USA: Association for Computing Machinery, 2019, pp. 754–768. [Online]. Available: https://doi.org/10.1145/3352460.3358252
[20] Y. Lin, M. Yang, and S. Han, "NAAS: Neural accelerator architecture search," in 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE Press, 2021, pp. 1051–1056. [Online]. Available: https://doi.org/10.1109/DAC18074.2021.9586250
[21] H. Luo, Y. Cho, J. W. Demmel, X. S. Li, and Y. Liu, "Hybrid models for mixed variables in Bayesian optimization," arXiv:2206.01409, pp. 1–56, 2022.
[22] H. Luo, J. W. Demmel, Y. Cho, X. S. Li, and Y. Liu, "Non-smooth Bayesian optimization in tuning problems," arXiv:2109.07563, pp. 1–61, 2021.
[23] A. Mehrabi, A. Manocha, B. C. Lee, and D. J. Sorin, "Bayesian optimization for efficient accelerator synthesis," ACM Transactions on Architecture and Code Optimization, vol. 18, no. 1, pp. 1–25, Dec. 2020. [Online]. Available: https://doi.org/10.1145/3427377
[24] A. Olivry, G. Iooss, N. Tollenaere, A. Rountev, P. Sadayappan, and F. Rastello, "IOOpt: Automatic derivation of I/O complexity bounds for affine programs," in Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation. ACM, Jun. 2021. [Online]. Available: https://doi.org/10.1145/3453483.3454103
[25] A. Parashar, P. Raina, Y. S. Shao, Y.-H. Chen, V. A. Ying, A. Mukkara, R. Venkatesan, B. Khailany, S. W. Keckler, and J. Emer, "Timeloop: A systematic approach to DNN accelerator evaluation," in 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 2019, pp. 304–315.
[26] S. Pati, S. Aga, N. Jayasena, and M. D. Sinclair, "Demystifying BERT: Implications for accelerator design," in International Symposium on Workload Characterization (IISWC), 2021.
[27] B. Reagen, J. M. Hernandez-Lobato, R. Adolf, M. Gelbart, P. Whatmough, G.-Y. Wei, and D. Brooks, "A case for efficient accelerator design space exploration via Bayesian optimization," in 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), 2017, pp. 1–6.
[28] B. Ru, A. Alvi, V. Nguyen, M. A. Osborne, and S. Roberts, "Bayesian optimisation over multiple continuous and categorical inputs," in International Conference on Machine Learning. PMLR, 2020, pp. 8276–8285.
[29] C. Sakhuja, Z. Shi, and C. Lin, "Leveraging domain information for the efficient automated design of deep learning accelerators," in 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023, pp. 287–301.
[30] Z. Shi, C. Sakhuja, M. Hashemi, K. Swersky, and C. Lin, "Using Bayesian optimization for hardware/software co-design of neural accelerators," in Workshop on ML for Systems at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
[31] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, ser. NIPS'12. Red Hook, NY, USA: Curran Associates Inc., 2012, pp. 2951–2959.
[32] I. M. Sobol, "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates," Mathematics and Computers in Simulation, pp. 271–280, 2001.
[33] S. L. Xi, H. Jacobson, P. Bose, G.-Y. Wei, and D. Brooks, "Quantifying sources of error in McPAT and potential impacts on architectural studies," in 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 2015, pp. 577–589.
[34] Q. Xiao, S. Zheng, B. Wu, P. Xu, X. Qian, and Y. Liang, "HASCO: Towards agile hardware and software co-design for tensor computation," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, Jun. 2021. [Online]. Available: https://doi.org/10.1109/isca52012.2021.00086
[35] X. Yang, M. Gao, Q. Liu, J. Setter, J. Pu, A. Nayak, S. Bell, K. Cao, H. Ha, P. Raina et al., "Interstellar: Using Halide's scheduling language to analyze DNN accelerators," in Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, 2020, pp. 369–383.
[36] Y. Zhang, N. Zhang, T. Zhao, M. Vilim, M. Shahbaz, and K. Olukotun, "SARA: Scaling a reconfigurable dataflow accelerator," in Proceedings of the 48th Annual International Symposium on Computer Architecture, ser. ISCA '21. IEEE Press, 2021, pp. 1041–1054. [Online]. Available: https://doi.org/10.1109/ISCA52012.2021.00085
[37] S. Zheng, R. Chen, A. Wei, Y. Jin, Q. Han, L. Lu, B. Wu, X. Li, S. Yan, and Y. Liang, "AMOS: Enabling automatic mapping for tensor computations on spatial accelerators with hardware abstraction," in Proceedings of the International Symposium on Computer Architecture (ISCA), 2022.
Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

TAP: Efficient Derivation of Tensor Parallel Plans for Large Neural Networks

Ziji Shi∗†, Le Jiang†, Jie Zhang†, Xianyan Jia†, Yong Li†, Chencan Wu†, Jialin Li∗, Wei Lin†
∗National University of Singapore  †Alibaba Group

Abstract—Model parallelism is essential to train large language models efficiently. However, determining the optimal model parallel schedule for a given neural network can be slow and inefficient due to the vast choice space. To address this challenge, we propose a tensor model parallelism framework called TAP, which automatically searches for the best data and tensor parallel schedules.

Our approach is based on the observation that a neural network can be represented as a directed acyclic graph, within which only exists a limited set of frequent subgraphs. With that, we design a graph pruning algorithm that efficiently folds the search space. As a result, TAP runs at sub-linear complexity with respect to model size, which makes it a practical solution for large-scale networks.

Experimental results demonstrate that TAP outperforms the state-of-the-art automatic parallelism frameworks by 20−160× in searching time. Moreover, the performance of TAP's discovered schedules is competitive with expert-engineered ones. In summary, TAP provides a powerful and efficient tool for model parallelism that can help alleviate the burden of manual tuning.

I. INTRODUCTION

Recent years have witnessed a burgeoning of large deep neural networks (DNNs) that deliver unprecedented accuracy across a wide range of AI tasks. The rate of DNN model size increase, however, has far surpassed the growth in accelerator memory capacity. To address this challenge, model parallelism has been proposed, where model weights are sharded onto multiple devices during distributed DNN training.

There are two main paradigms in model parallelism: pipeline parallelism and tensor parallelism. Pipeline parallelism divides the model into layers. Only activations are communicated during the forward pass, while gradient tensors are exchanged in the backward phase. Pipeline parallelism has recently drawn much attention, with many proposed algorithms aiming to find the optimal pipeline schedule that minimizes the pipeline idle time (i.e., "bubble size"). However, pipeline parallelism suffers from two significant drawbacks: 1) each layer must fit into a single accelerator's memory, and 2) interleaving different layers can be challenging for models with imbalanced architectures. As an alternative, tensor parallelism partitions the model weights and distributes them to multiple devices, thus lifting the restriction on the size of individual layers. In this work, we focus on tensor parallelism.

Manual specification of tensor parallelism is a daunting task, given that the quality of a partitioning scheme depends on the neural network architecture and the hardware system. To address this challenge, automatic parallelism approaches have been proposed which leverage user hints or guided searches over the entire partitioning candidate space. We argue that a brute-force search of the space is unnecessary in the majority of cases. Our research makes two key observations: first, most neural networks include shared subgraphs that can significantly reduce the search space; second, communication is the primary bottleneck during tensor parallelism training, and contiguous partitions in a block cannot overlap.
Therefore, the search process can be accelerated by only searching over unique neural network sub-modules and evaluating candidate strategies based on their communication cost.

Based on those observations, we present TAP, a deep learning framework that automatically derives tensor-parallel plans for arbitrary neural networks without requiring expert annotations. TAP first constructs a skimmed DAG by removing auxiliary nodes, then it finds all of the shared subgraphs and searches for the optimal sharding schedule for each of them. In the end, TAP reconstructs the DAG by applying the found solution to the original graph. TAP drastically reduces the search space for tensor parallel plans, achieving 20×−160× speedup compared with the state-of-the-art auto-parallel framework. Evaluations demonstrate that our approach can also generate solutions comparable to the tensor parallel schedules designed by an expert [17].

Our paper makes the following contributions:

• A set of intermediate representations (IRs) of the computational graph that abstract away from low-level implementation details;
• A graph pruning algorithm that exploits the shared sub-structure to facilitate efficient searching;
• A communication-based cost model that accurately captures the communication requirements for tensor-parallel training.

II. BACKGROUND

A. Model Parallelism

Model parallelism distributes model weights onto different devices and synchronizes the full model through collective communication [6]. Model parallelism can be further divided into two categories, pipeline parallelism and tensor parallelism.

1) Tensor Parallelism: Tensor parallelism splits a model layer and distributes it across multiple devices, thus dispersing the computational overhead of the layer [17], [23], [26]. Each device stores only a portion of the input tensors in its local memory. Therefore, the final result needs to be aggregated from partial results through collective communication. Tensor parallelism can alleviate the challenge of training heterogeneous models with pipeline parallelism and can achieve better performance.

B. Automatic Parallelism

Automatic parallelism is a recent line of research on automatically distributing a local model from a single device to multiple devices using data and model parallel strategies. Existing approaches for automatic parallelism rely on user hints or brute-force searches across the entire space.

1) User hint: User-hint-based automatic parallelism scales single-device programs to multi-device systems by incorporating user annotations. For instance, GSPMD [26] infers the operator partitioning scheme based on user annotations, while Whale [12] allows for the inclusion of user hints for semi-auto parallelization of large models and introduces a hardware-aware load balance algorithm. However, user-hint-based automatic parallelism approaches require users to possess a deep understanding of both the system and the model, and hard-coded user hints may not be transferable when either the model or system changes.

2) Search algorithm: Recent work has proposed fully automatic approaches based on search algorithms to optimize distributed DNN training. For example, Tofu [25] uses a recursive search algorithm based on dynamic programming and DNN-specific heuristics to minimize communication for the entire dataflow graph. FlexFlow [13] employs randomized search to find the best parallel strategy in the SOAP (Sample, Operator, Attribute, and Parameter) space. Alpa [28] optimizes large DL models through two-level optimizations: inter-operator and intra-operator.
It enables inter-operator parallelism using dynamic programming and intra-operator parallelism with integer linear programming. Unity [24] represents parallelization and algebraic transformations as substitutions on a unified graph representation, uses a novel hierarchical search algorithm to identify an optimized sequence of graph substitutions, and scales to large numbers of GPUs and complex DNNs.

3) Challenge of exploding search space: Search-based approaches face the challenge of exploding search space as model size scales, resulting in significant time costs. For example, each tensor (assuming 2D) can be partitioned in three ways: not sharded, sharded along the first dimension (row-wise), or sharded along the second dimension (column-wise). Given a neural network G(E, V) with V weight tensors, there exist 3^V possible sharding plans. Therefore, finding an optimal sharding plan is an NP-hard problem.

III. APPROACH

In this section, we formulate the problem of searching for an optimal tensor parallel schedule, followed by our observation of the common presence of shared sub-structures in large neural networks, leading to the motivation of our design.

A. Problem Formulation

A neural network can be represented as a directed acyclic graph G(E, V) comprised of L layers. The set of vertices V represents the operators, and the set of edges E represents the data flow from producer to consumer operators. Operators can optionally carry a weight tensor. During the forward pass, an edge represents an activation tensor, while in the backward phase, it represents a gradient tensor. A layer L_i ∈ L is either a layer or a cluster of operators with a similar composition.

Let the physical training system be S(m, n), where m is the number of worker nodes and n is the number of accelerators per worker node. A parallel plan p is a new graph mathematically equivalent to G. The cost function, Cost(p, S), measures training latency for a given plan and training system. The goal is to find an optimal parallel plan p∗ where:

    minimize_p  Cost(p, S)
    subject to  p(X) = G(X)  ∀X

How can an automated system find such a plan? Fig. 1 illustrates the typical workflow of an auto-parallel system. The system first reduces the search space for model splitting using pruning techniques. Next, a search algorithm is employed to generate one or more candidate plans for evaluation. Finally, a cost model evaluates all candidate plans and selects the one with the lowest cost based on predefined evaluation criteria.

Fig. 1. General recipe of automatic model parallelism frameworks (search space → smaller space → search algorithm → candidate plans → cost model → best plan).

The end-to-end duration to produce an optimal schedule is a critical metric for an auto-parallel system. We identify three primary factors that contribute to the overall completion time: the size of the search space, the time complexity of the searching algorithm, and the speed of the evaluation method.

B. Challenges and Observations

As mentioned earlier, a major challenge faced by auto-parallel systems is the search space explosion problem. This exponential increase in candidate space has led to impractical search times for modern large models [28] (§ V-B). This creates a dilemma: while auto-parallel systems aim to accelerate large model training, if the derivation step itself is too slow, it may offset the benefit of using an auto-parallel system.

How can this large candidate search space be reduced effectively? To answer this question, we studied common scaling techniques for popular DNN models and summarized our findings in Table I.
We observe that these techniques can be grouped into two categories: scaling on the width, achieved by increasing the dimension of layers (e.g., adding more classes, attention heads, or convolutional channels), or scaling on the depth by increasing the number of layers. Notably, both techniques start with a base subgraph, a group of layers or operators, and expand from it. For instance, large pre-trained language models like BERT [7] and T5 [19] comprise tens of transformer layers, while multi-class object classification networks like ResNet-50 [11] are built from convolutional layers.

TABLE I
Shared subgraphs exist in many neural network models. "Conv" means convolutional layer; "MoE" means mixture-of-expert layer.

Scaling Technique | Task              | Model                   | # Params | Shared Subgraph (SS) | # of SS
By width          | Vision            | ResNet50 [11]           | 23M      | Conv                 | 50×
                  | Vision + Language | CLIP-Base [18]          | 63M      | Transformer          | 12×
                  | Language Model    | WideNet [27]            | 63M      | MoE layer            | 32×
                  | Vision            | ViT-Huge [8]            | 632M     | Transformer          | 32×
                  | Vision            | V-MoE [22]              | 15B      | MoE layer            | 24×
By depth          | Speech            | wav2vec 2.0 [3]         | 317M     | Conv, Transformer    | 7×, 24×
                  | Language Model    | BERT [7]                | 340M     | Transformer          | 24×
                  | Language Model    | T5-Large [19]           | 770M     | Transformer          | 24×
                  | Language Model    | GPT-3 [4]               | 175B     | Transformer          | 96×
                  | Language Model    | Switch Transformer [10] | 1571B    | MoE layer            | 15×

Furthermore, upon analyzing expert-designed parallel schedules ([17], [20], [21]), we notice that parallel schedules are predominantly similar for layers of the same type. This is due to the fact that similar layers have comparable computational and memory consumption. This finding motivates us to investigate reusing parallel schedules discovered for identical layers, which can reduce the search effort.

IV. DESIGN AND IMPLEMENTATION

A. Overview

Fig. 2 illustrates the workflow of TAP. Given a neural network represented as a graph, TAP first converts the graph into an intermediate representation (§ IV-B) called GraphNode and removes auxiliary nodes. TAP then performs graph pruning (§ IV-C) to restrict the search space from the complete graph to the subgraphs. After pruning, TAP explores the possible sharding opportunities using pre-defined sharding patterns (§ IV-D) and validates the candidate plans (§ IV-E). If a valid plan is found, it is evaluated using the cost model (§ IV-F). TAP takes the overall best plan, performs additional communication-level optimizations, and rewrites the model into a parallel version (§ IV-G). To use TAP, users only need to specify the device mesh, as shown in the example below.

    # Example with TAP on 2 workers, each with 8 GPUs
    import tensor_auto_parallel as tap
    mesh = [2, 8]
    tap.auto_parallel(tap.split(mesh))
    model_def()

B. Intermediate Representation

TAP defines a family of high-level intermediate representations (IRs) to facilitate the derivation of parallel schedules. Compared to MLIR HLO [14], TAP IRs operate on a coarser granularity while preserving the necessary information for sharding.

Upon obtaining the original neural network graph, TAP first trims the graph by deleting the auxiliary operators (Step 1 in Fig. 2). This removes the initialization and checkpoint-related operators, which will be recovered when converting back to a neural network graph later. As a result, the remaining graph will consist of only computing and communication operators.
TAP IRs consist of the following.

a) GraphNode: A GraphNode represents a group of computing or communication operators. It can be a layer or a logical group of operators, which is the basic unit for deriving the sharding schedule. The TAP graph is made of GraphNodes while preserving the directed edges from the original DAG. Using the GraphNode IR, we reduce the number of nodes in the T5-large model from 60k to 1015 weight variables.

b) Sharding pattern: A GraphNode can have multiple ways of sharding. For instance, a 2D matrix weight can be split on either dimension or replicated. TAP defines each sharding pattern using the SRC abstraction. TAP also establishes the cost of each sharding pattern based on communication cost.

c) Sharding plan: A sharding plan is a set of subgraphs (blocks of GraphNodes) with sharding patterns connecting them.

C. Pruning using Shared Subgraphs

It is common for DNN models to contain shared subgraphs. If we can identify the shared subgraphs, we can prune the search space by searching only within the subgraph. We propose a graph pruning algorithm to compress the search space into a shared structure (Step 2).

In deep learning frameworks like TensorFlow [2], each variable is referred to by the operator that produces it. As such, variables under the same layer share the same name scope because they receive input from the same operator. Therefore, it is possible to cluster operators that fall under the same name scope.

Fig. 2. Overview of the TAP system (Steps: 1 convert, 2 prune, 3 search, 4 query, 5 rewrite; from an input neural network to a parallelized neural network).

Algorithm 1 Graph Pruning
1: procedure PRUNEGRAPH(modelDef, minDuplicate)
2:   nodeTree ← ∅
3:   maxDepth ← modelDef.depth
4:   for all depth ∈ maxDepth ... 1 do
5:     nodeTree[depth] ← longestCommonPrefix(modelDef.nodes.name)
6:     opCount = findSimilarBlk(nodeTree[depth])
7:     if opCount ≥ minDuplicate then
8:       subgraphs.append(nodeTree[depth])
9:     else
10:      break
11:    end if
12:  end for
13:  return subgraphs
14: end procedure

Algorithm 1 starts by constructing a nodeTree, which identifies and groups the GraphNodes on each level by using the longest common prefix algorithm on the GraphNode names (lines 2-5). After that, it finds the blocks of GraphNodes with a similar composition of operators and compares the number of operators with the minimum duplicate threshold (line 7). As the depth decreases, we will see a larger subgraph with less homogeneous composition. Notice that multiple shared subgraphs may exist, since a neural network may have multiple leaf nodes.

D. Sharding Plan Generator

A sharding pattern, defining the way a GraphNode can be sharded, also serves as the directed edge between nodes. According to the SRC abstractions, the communication pattern is determined once the split/replica decision is made. Under the hood, the sharding patterns connect to each other like a chain.

Algorithm 2 Derivation of Optimal Plan
1: procedure DERIVEPLAN(modelDef, shardingPatterns)
2:   subgraphs ← PruneGraph(modelDef)
3:   candidatePlans ← enumerateAllPlans(subgraphs)
4:   validPlans ← {}
5:   for all p ∈ candidatePlans do
6:     validated ← PatternRouting(p)
7:     if validated then
8:       validPlans.insert(p)
9:     end if
10:  end for
11:  bestPlan ← min(QueryCost(validPlans))
12:  return bestPlan
13: end procedure

After pruning, TAP proceeds to derive the optimal plan (Steps 3 and 4) using Algorithm 2. In the first phase, TAP enumerates all possible sharding plans given the subgraphs. TAP only needs to work on hundreds of plans thanks to pruning. However, not every plan is valid, because we only have weakly connected subgraphs. Therefore, the candidate plans need to be validated by checking the connectivity (lines 5-10). Upon checking, TAP evaluates the performance of each plan using a cost model and selects the best plan.
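A compact Python sketch of Algorithm 2's enumerate-validate-evaluate loop follows; the helpers (the pattern set, the connectivity check, and the cost query) are illustrative stubs, not TAP's API.

import itertools

# Sketch of Algorithm 2: enumerate candidate plans over the pruned
# subgraph, keep the ones that pass pattern routing, return the cheapest.
PATTERNS = ["replicate", "split-row", "split-col"]

def derive_plan(subgraph_nodes, validate, query_cost):
    candidates = [dict(zip(subgraph_nodes, choice))
                  for choice in itertools.product(PATTERNS, repeat=len(subgraph_nodes))]
    valid = [p for p in candidates if validate(p)]        # Algorithm 2, lines 5-10
    return min(valid, key=query_cost) if valid else None  # line 11

# Toy usage: 3 nodes in the shared subgraph -> 3^3 = 27 candidates.
nodes = ["attn.qkv", "attn.out", "ffn.wi"]
validate = lambda p: p["attn.qkv"] != "replicate"         # stub connectivity check
query_cost = lambda p: sum(1.0 if v == "replicate" else 0.5 for v in p.values())
print(derive_plan(nodes, validate, query_cost))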
Upon checking, TAP evaluates the performance of each plan using a cost model and selects the best plan.

E. Pattern Routing

In the pattern routing step (Algorithm 3), TAP tries to assemble the weakly connected GraphNodes into a valid sharding plan by checking the connectivity. This is to ensure the success of graph rewriting (Step 5). TAP does so using breadth-first search (BFS) starting from the root node, and the goal is to make sure there exists at least one connected path from the root to the leaf, chained using the sharding patterns.

Algorithm 3 Plan Validation
1:  procedure PATTERNROUTING(currPlan)
2:    TopoSort(currPlan)
3:    nodesQ <- currPlan.root
4:    while nodesQ is not empty do
5:      currNode <- nodesQ.dequeue()
6:      for all childNode in currNode.next() do
7:        sp <- lookUpShrdPatn(currNode, childNode)
8:        if sp is not empty then
9:          if childNode == currPlan.leaf then
10:           return TRUE
11:         else
12:           nodesQ.enqueue(childNode)
13:         end if
14:       end if
15:     end for
16:   end while
17:   return FALSE
18: end procedure

One challenge is that a pair of contracting sharding patterns may have different input and output tensors, and a consumer operator's input is not ready until its producer is ready. In other words, dependencies exist between GraphNodes, but this information was kept in the original edges and could be lost when we perform pruning.

To solve this, we perform a topological search of the GraphNodes based on the readiness of the input tensors. We leverage the fact that neural networks can be represented using a directed acyclic graph, and reconstruct the edges based on the order of the producer-consumer relationship. This way, TAP avoids checking the order for every pair of GraphNodes.

F. Cost Model

To build a cost model, we first profile different tensor parallel plans to understand the bottleneck. Fig. 3 summarizes the result. Data were collected from two nodes interconnected by 32 Gbps Ethernet, each equipped with 8 GPUs. We observe that inter-node communication is the main bottleneck for tensor parallelism, and the best plan is not necessarily the one that splits every weight tensor, in line with [6].

Fig. 3. Time breakdown (computation vs. communication, in ms per iteration) for tensor parallel plans on the T5-large model on 8 and 16 GPUs (8w/16w). DP means data parallel, MHA means sharding the multi-head attention, FFN means sharding the feed-forward layer, and Megatron refers to the tensor sharding plan described in [17].

As the number of devices increases from 8x to 16x, the difference between communication time and computation time becomes further pronounced. This is because the bottleneck has shifted from high-speed intra-node communication (PCI-e) to slower inter-node communication (Ethernet).

Furthermore, the best tensor parallel plan for 16 GPUs (16w-FFN) only shards the weight in the feed-forward layer. We conjecture that with more tensors split instead of replicated, there are fewer FLOPs per device and the computation time is lower. However, this comes at the cost of more communication. In the case of training in a data center where nodes are interconnected by Ethernet, the speed bottleneck may shift from computation to communication instead. Therefore, communication cost is the main consideration when we design the cost model.

TAP addresses these issues using an analytical cost model based on the tensor's communication method, shape, and data format.
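The paper does not spell out the exact formula, so the following Python sketch only illustrates the general shape of such an analytical communication cost model; the bandwidth numbers and the ring all-reduce volume formula are standard textbook assumptions, not TAP's published constants.

    def comm_cost_ms(num_bytes, num_devices, per_node=8,
                     intra_bw=16e9, inter_bw=4e9):
        """Estimate all-reduce latency (ms) for one sharding pattern.

        Ring all-reduce moves ~2*(p-1)/p of the tensor per device; the
        effective bandwidth is the slower of the links the ring crosses.
        All constants here are illustrative assumptions.
        """
        volume = 2 * (num_devices - 1) / num_devices * num_bytes
        bw = intra_bw if num_devices <= per_node else inter_bw
        return volume / bw * 1e3

    def plan_cost(patterns):
        # Sum the per-pattern costs along the critical path (sketch: a chain).
        return sum(comm_cost_ms(p["bytes"], p["devices"]) for p in patterns)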
Each sharding pattern is associated with a cost, and the total cost is calculated by summing all pattern costs along the critical path.

G. Graph Rewriting

After evaluating the cost of each sharding plan, TAP assembles the parallel plan. It does so by first restoring the original order of operators. Then, TAP identifies optimization opportunities that can be realized through gradient packing. In the end, TAP passes the resulting parallelized neural network plan to the deep learning framework runtime.

H. Limitation and Future Work

To further optimize memory consumption, TAP could leverage other orthogonal techniques such as Auto Mixed Precision (AMP) [1], recomputation [5], and pipeline parallelism. Since both AMP and TAP optimize on the graph representation of the neural network, they can be made into different passes. Also, gradient checkpointing can be used to offload selected GraphNodes onto the main memory. TAP may also be combined with pipeline parallelism through automatic [9], [12], [15], [16] or manual placements.

V. PRELIMINARY EVALUATION

A. Setup

We first evaluate the pruning algorithm and the use of Just-In-Time compilation for TAP. Then, for comparison with another auto-parallel framework, we use Alpa version 0.7 running with JAX 0.3.5. Next, we use Megatron running on PyTorch to compare against expert-engineered tensor parallel plans. Finally, we present the training convergence when running gigantic neural networks.

The evaluation was performed on Company A's public cloud node with 756GB main memory, 2x Intel 8163 CPUs with 24 cores each, and 8x Nvidia V100 SXM2 32GB GPUs. Additionally, TAP builds on top of TensorFlow 1.12.

B. End-to-End Evaluation

In this section, we compare TAP with the auto-parallel framework Alpa on search time and on the performance of the discovered plan.

1) Search time: As explained in § ??, TAP has a sub-linear time complexity, which is desirable when the models' size scales up. In the experiments with Alpa, we present the end-to-end search time with respect to model scaling, defined as the duration from the start of the experiment until the moment that the training process begins. Due to time constraints, we shortlisted a search space of 16 plans for T5 and 5 plans for ResNet, while we did not restrict the search space for TAP.

Fig. 4. End-to-end search time (minutes, linear and log scale) when scaling the number of parameters (100M-1.4B) for the dense transformer model.

To scale the model along the depth, we increase the number of transformer layers for T5, an encoder-decoder transformer architecture for language modeling. Increasing the depth of dense transformer models is a common practice to improve performance. Fig. 4 shows that, with rising parameter counts, TAP can still find a plausible schedule in under 15 mins, which is 21x-67x faster than Alpa.

To scale the model size along the width for the ResNet50 model, we choose to increase the size of the classification layer. The original ResNet50 model has 1024 classes in the classification layer. As we increase the dimensions of the classification layer, the total number of parameters also scales up. As shown in Fig. 5, TAP is two orders of magnitude faster than Alpa in finding the optimal solution.
Our system outperforms it by 103x-162x.

Fig. 5. End-to-end search time (minutes, linear and log scale) when scaling the number of classes (1024-400k) for the large-scale classification model.

We further analyze the time breakdown during the search. For example, for the 24-layer T5-large (770M parameters), Alpa spent 5 mins profiling the operators and 5 mins constructing the pipeline stages out of the operators. Instead, TAP reduces the architecture to one transformer block and searches for shardable parameters within that block only, drastically reducing the search space. As a result, Alpa takes 197 minutes to search over 16 candidate plans, while TAP requires only 6 minutes to examine 729 candidate plans.

2) Training speed: We also evaluate the performance of the best plans produced by Alpa and TAP. We observe that Alpa favors pipeline parallel schedules, while the optimal schedule found by TAP is similar to the Megatron-style tensor parallel schedule. Since plans using pipeline parallelism require less communication, the plans from Alpa have a higher throughput.

Fig. 6. Training time per iteration for T5 (batch size=16). The blue band represents the standard deviation.

We also observe that as the width of the model increases, the performance of TAP plans is better and more consistent. Fig. 7 shows the time to finish one iteration of training for parallel plans of ResNet50. We first observe that TAP consistently outperforms Alpa. Further, the variance (blue band) in plans discovered by Alpa shows that it struggles to find consistently good plans.

Fig. 7. Training time per iteration for ResNet50 (batch size=1024).

VI. CONCLUSION

We present TAP, an automatic parallelism framework that efficiently discovers tensor parallel plans for large models. Leveraging the observation that shared subgraphs widely exist in neural networks, we design a pruning algorithm that efficiently reduces the search space with a sub-linear end-to-end complexity. The best plans found by TAP are comparable with the state-of-the-art expert-engineered plans while taking only minutes to discover.

REFERENCES
[1] "Automatic mixed precision for deep learning," https://developer.nvidia.com/automatic-mixed-precision.
[2] M. Abadi et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv:1603.04467, 2016.
[3] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," NeurIPS, 2020.
[4] T. Brown et al., "Language models are few-shot learners," NeurIPS, 2020.
[5] T. Chen, B. Xu, C. Zhang, and C. Guestrin, "Training deep nets with sublinear memory cost," arXiv:1604.06174, 2016.
[6] J. Dean et al., "Large scale distributed deep networks," 2012.
[7] J. Devlin, M. W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2019.
[8] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv:2010.11929, 2020.
[9] S. Fan et al., "DAPPLE: A pipelined data parallel approach for training large models," PPoPP, 2021.
[10] W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," 2021.
[11] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CVPR, 2016.
[12] X. Jia et al., "Whale: Efficient giant model training over heterogeneous GPUs," USENIX ATC, 2022.
[13] Z. Jia, M. Zaharia, and A. Aiken, "Beyond data and model parallelism for deep neural networks," arXiv:1807.05358, 2018.
[14] C. Lattner et al., "MLIR: Scaling compiler infrastructure for domain specific computation," CGO, 2021.
[15] Z. Li et al., "TeraPipe: Token-level pipeline parallelism for training large-scale language models," arXiv:2102.07988, 2021.
[16] D. Narayanan et al., "PipeDream: Generalized pipeline parallelism for DNN training," SOSP, 2019.
[17] D. Narayanan et al., "Efficient large-scale language model training on GPU clusters using Megatron-LM," SC, 2021.
[18] A. Radford et al., "Learning transferable visual models from natural language supervision," ICML, 2021.
[19] C. Raffel et al., "Exploring the limits of transfer learning with a unified text-to-text transformer," JMLR, 2020.
[20] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, "ZeRO: Memory optimizations toward training trillion parameter models," SC, 2020.
[21] J. Ren et al., "ZeRO-Offload: Democratizing billion-scale model training," USENIX ATC, 2021.
[22] C. Riquelme et al., "Scaling vision with sparse mixture of experts," NeurIPS, 2021.
[23] M. Shoeybi et al., "Megatron-LM: Training multi-billion parameter language models using model parallelism," arXiv:1909.08053, 2019.
[24] C. Unger et al., "Unity: Accelerating DNN training through joint optimization of algebraic transformations and parallelization," OSDI, 2022.
[25] M. Wang, C. C. Huang, and J. Li, "Supporting very large models using automatic dataflow graph partitioning," EuroSys, 2019.
[26] Y. Xu et al., "GSPMD: General and scalable parallelization for ML computation graphs," arXiv:2105.04663, 2021.
[27] F. Xue et al., "Go wider instead of deeper," AAAI, 2022.
[28] L. Zheng et al., "Alpa: Automating inter- and intra-operator parallelism for distributed deep learning," arXiv:2201.12023, 2022. |
nfmfqzQ4Mwl | ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

Accuracy Boosters: Epoch-Driven Mixed-Mantissa Block Floating Point for DNN Training

Simla Burcu Harma*, Ayan Chakraborty*, Babak Falsafi*, Martin Jaggi*, Yunho Oh+
*EcoCloud, EPFL. simla.harma@epfl.ch, ayan.chakraborty@epfl.ch, babak.falsafi@epfl.ch, martin.jaggi@epfl.ch
+ComSys, Korea University. yunho oh@korea.ac.kr

Abstract: The unprecedented growth in DNN model complexity, size, and amount of training data has led to a commensurate increase in demand for computing and a search for minimal encoding. Recent research advocates Hybrid Block Floating Point (HBFP) to minimize silicon provisioning in accelerators by converting the majority of arithmetic operations in training to 8-bit fixed point. In this paper, we perform a full-scale exploration of the HBFP design space using mathematical tools to study the interplay among various parameters and identify opportunities for even smaller encodings across layers and epochs. Based on our findings, we propose Accuracy Boosters, an epoch-driven mixed-mantissa HBFP technique that uses 6-bit mantissas only in the last epoch and first/last layers, and 4-bit mantissas for 99.7% of all other arithmetic operations in training. Using analytic models, we show Accuracy Boosters enable increasing arithmetic density for an HBFP training accelerator by up to 21.3x compared to FP32 and up to 4.4x compared to another SOTA format, BFloat16, while preserving or outperforming FP32 accuracy.

I. INTRODUCTION

Over the past decade, improvements in Deep Neural Network (DNN) algorithms have led to unprecedented growth in model complexity and dataset size and, consequently, in the computational resources required to train DNN models. One of the largest DNN models (GPT-3) [2] has 175 billion parameters and requires 3.14x10^23 FLOPs to train. With the slowdown in Moore's law, researchers and vendors have begun to search for alternative ways to improve the arithmetic density of the underlying hardware platforms. Narrower bit-width (lower precision) number formats [24], [25], [31], [32], [35] have emerged as a promising approach to increase arithmetic density, as well as to reduce the required operand storage and communication bandwidth, while maintaining high training accuracy.

Recently there have been several proposals for block floating point [7], [20], [38], a numerical encoding in which a block of mantissas, relying only on fixed-point arithmetic, shares a single exponent. Block floating point asymptotically approaches the arithmetic density of fixed point with larger block sizes and naturally lends itself to mixed-precision hardware, where a block with the same number of exponent bits can have a fixed-point datapath that is bit-sliced for various multiples of mantissa bit encodings (e.g., the same way today's CPU cores implement SIMD). While block floating point has been promising for inference (e.g., Microsoft Floating Point [6]), most proposals to train with block floating point have either failed to reach its full potential by requiring small blocks and/or just fall short of reaching FP32 accuracy.

One specific proposal, Hybrid Block Floating Point (HBFP) [10], uses a mixed-precision format where the dominant fraction of training, the dot products (e.g., convolutions, matrix multiplications), happens in block floating point, and FP32 is used for other, less frequent operations requiring larger numerical ranges (e.g., activations, regularizations).
HBFP simultaneously offers the high accuracy of floating point and the superior hardware density of fixed point, delivering up to 8.5x higher throughput than FP16 with 2x more compact models [11]. Prior work on HBFP only presented a preliminary design space analysis for power-of-two mantissa bit widths (e.g., 2, 4, 8 bits).

In this paper, we make the observation that the parameter space for HBFP is quite rich, presenting several opportunities for further improving efficiency and density in hardware platforms. First, custom accelerators can support non-power-of-two numerical formats, and minimizing the number of bits improves operand storage and communication linearly and arithmetic logic quadratically. Second, there is an interplay between the block size and the number of mantissa bits, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. Finally, HBFP allows for mixed-mantissa block floating point encodings. Prior work studies training with various HBFP formats in isolation; however, the design space of mixed-mantissa HBFP is yet to be explored.

We fully explore the parameter space of HBFP and show the boundaries of block floating point by studying the interplay between the block size and the number of mantissa bits. To the best of our knowledge, this is the first paper conducting a full design space exploration for training DNNs with block floating point. We show that HBFP6 (HBFP with 6 bits of mantissa) is the smallest HBFP format achieving competitive accuracies with no sensitivity to block size. Our main contribution is the design of Accuracy Boosters, a DNN training mechanism performing a large fraction of epochs in low precision, i.e., HBFP4. Our method improves epoch-wise mixed-precision training by introducing high precision, i.e., HBFP6, to the training process only at the last epoch. Accuracy Boosters enable increasing arithmetic density by up to 21.3x compared to FP32, and up to 4.4x compared to another SOTA format, BFloat16, while preserving or outperforming FP32 accuracy.

II. HBFP PARAMETER SPACE

HBFP is a mixed-precision DNN training technique that uses block floating point for all dot product operations and FP32 for the rest of the operations, enabling accurate training with dense fixed-point arithmetic. We observe that HBFP is also suitable for inference on the popular CNN and Transformer models without any accuracy loss, in line with prior work on inference with block floating point [6], showing that HBFP is a versatile technique for both training and inference. Prior work on HBFP shows that the area and energy expenditure of HBFP8 is around an order of magnitude lower than FP32 [11]. Exploring the parameter space of HBFP and pushing its boundaries can increase this ratio dramatically.

HBFP has a rich parameter space, including the number of mantissa bits, the block size, and the number of exponent bits. The hardware area and energy expenditure of HBFP accelerators are determined by the number of mantissa bits and the block size, because the overhead of the exponent bits is negligible due to blocking (see footnote 1). One of the key advantages of HBFP is that we can conservatively find a lower bound for the number of exponent bits that covers all of the design space exploration for block size and the number of mantissa bits. Therefore, we work with 10-bit exponents as in prior work [10] and explore the HBFP design space by varying the mantissa bit width and the block size.
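To make the block floating point conversion concrete, the following is a minimal NumPy sketch of quantizing a block of FP32 values to a shared-exponent format with a given number of mantissa bits. It is our own illustration of the scheme described below (largest-exponent sharing, no per-element normalization), not the authors' implementation.

    import numpy as np

    def bfp_quantize(block, mantissa_bits):
        """Quantize a 1-D block of FP32 values to block floating point.

        All values share the exponent of the largest-magnitude element and
        keep `mantissa_bits` fixed-point bits (sign included here for
        simplicity). A sketch of the scheme, not the HBFP implementation.
        """
        shared_exp = np.floor(np.log2(np.max(np.abs(block)) + 1e-38)) + 1
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))  # one mantissa LSB
        q = np.clip(np.round(block / scale),
                    -(2 ** (mantissa_bits - 1)), 2 ** (mantissa_bits - 1) - 1)
        return q * scale  # dequantized view of the BFP block

    block = np.random.randn(64).astype(np.float32)
    print(np.abs(block - bfp_quantize(block, 4)).max())  # larger error than 6 bits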
Once we fix the number of exponent bits, we can vary the other parameters, which enables a reconfigurable microarchitecture and gives rise to mixed-mantissa HBFP. Smaller mantissa bit widths and larger block sizes are key to improving block-floating-point hardware efficiency due to the increasing fraction of fixed-point operations [6]. There is an interplay between the number of mantissa bits and the block size, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. This interplay is the result of how the block floating point conversion works. Block floating point shares a single exponent across a block of values, using the exponent of the largest element. Since the block floating point format does not apply normalization (a value is calculated as 2^exponent x 0.mantissa instead of 2^exponent x 1.mantissa), the precision within a block is highly dependent on the largest element in that block, which decides the exponent value. The interval between two consecutive representable numbers is calculated as in Equation 1:

    interval = 2^(largest exponent) / 2^(# of mantissa bits)    (1)

As the number of elements sharing the same exponent increases, the likelihood of disparity in the magnitude of elements also increases, leading to a precision loss for the small elements in the block. As the number of mantissa bits decreases, the model's sensitivity to the block size increases with the corresponding increase in the interval, leading to a higher quantization error. More mantissa bits make the distribution more resilient to the quantization error and to larger block sizes, as each element can be represented more accurately.

HBFP's power footprint is not only a function of the HBFP parameters but also of outside factors. Mixed-precision training has emerged as a popular technique to increase the fraction of leaner arithmetic formats within the training process, motivating us to explore the design space of mixed-mantissa HBFP, because HBFP provides the opportunity to fix the exponent width and vary the number of mantissa bits across layers and epochs.

Footnote 1: Even for a block size of 4, HBFP4 with a 5-bit exponent is only 1.1x more area-efficient than HBFP4 with a 10-bit exponent.

For CNN models, prior work indicates that the first convolution layer and the last fully connected layer have a larger impact on the final model accuracy, and keeping these layers in high precision allows for reducing precision in the rest of the layers [3], [24], [34], [39]. The first layer takes the input images, filters them with several convolution kernels, and returns feature maps. Thus, it is critical for the final model to keep the input information fully accurate and to preserve the data in the initial feature map. Similarly, for Transformers, the first layer is the input embedding layer, where input tokens are mapped to dense word vectors. The last layer of DNN models returns a probability distribution over the possible outcomes for the underlying DNN task. The important roles of the first and last layers in DNN models make it crucial to retain information better for these layers.

In addition to layers, each training epoch has a different effect on the final model's accuracy [13], [14]. [28] and [36] show that DNNs first learn low-frequency components, where frequency is defined over the coordinates of the input space. [36] also empirically show that for CNN models, the high-frequency components have higher complexities and are learned in the last epochs.
In light of these findings, we hypothesize that high-frequency functional components are more sensitive to quantization errors. Thus, higher precision is required for the last stage of DNN training, where the optimization occurs after an appropriate amount of generalization in the network. After reaching a certain loss value in low-precision training, switching the tensors to high precision enables the sensitive fine-tuning performed in the final epochs and helps increase the accuracy even more.

III. MINIMIZING HBFP

Our goal is to minimize HBFP to increase the hardware efficiency of training without losing accuracy. For a block size of 576, even though HBFP4 incurs a 2.4x improvement in area/power relative to HBFP8, it lacks the precision to reach FP32 accuracy. As prior work on HBFP [10], [11] only investigated power-of-two mantissa bits and focused mostly on the design space of HBFP8, the interplay between the number of mantissa bits and the block size was left unexplored. While power-of-two-bit numbers align naturally with the memory structure and encode matrices in a tightly-packed way, non-power-of-two-bit mantissas can improve the arithmetic density even further, as studied by [6], and can be easily integrated into custom accelerators. We investigate the whole design space of HBFP by varying both parameters and claim that reducing the block size will enable reducing the number of mantissa bits, and thus improve hardware efficiency. In this section, we show how to minimize HBFP step by step, explain the limitations of HBFP, and propose a new mixed-precision scheme to minimize HBFP further.

To study the relationship between model accuracy and HBFP parameters, we measure the similarity between block-floating-point and FP32 distributions of various tensors using the Wasserstein distance, mathematically defined as in Equation 2:

    W(P, Q) = inf_{gamma in Pi(P,Q)} E_{(x,y) ~ gamma} [ ||x - y|| ]    (2)

where Pi(P, Q) is the set of all joint distributions gamma(x, y) whose marginal distributions are equal to P and Q. gamma(x, y) can be interpreted as the amount of mass that must be transported from x to y to transform P into Q [1]. Unlike KL-divergence, which is commonly used to compare quantized tensors to their full-precision counterparts [6], [26], the Wasserstein distance is symmetric, and thus is mathematically a metric. Moreover, because DNNs often deal with distributions for which KL-divergence is not defined (or infinite), we would need to add a noise term to the model distribution to be able to use KL-divergence, which disturbs the results.

We observe that the tensor distribution is preserved when the elements are converted to block floating point format with 6 bits of mantissa or wider, for reasonably large block sizes (see footnote 2). Figure 1 shows Wasserstein distances between FP32 and HBFP6/HBFP4 with various block sizes for the weight tensors of four different layers of ResNet20 trained on CIFAR10. For all the tensors, HBFP6 has a much smaller distance to FP32, and the distances are fairly close to each other for a given tensor across all block sizes. However, the Wasserstein distance of HBFP4 is more than 3.5x higher than HBFP6 across all block sizes, and the distances dramatically increase with the block size. Indeed, the R-squared values (the strength of the relationship between two data sets) between the model accuracy and the various Wasserstein distances are around 0.99, validating the strength of our metric.

Fig. 1: Wasserstein distance between FP32 and HBFP (HBFP6 and HBFP4) with block sizes 16, 64, 256, and 576, for various layers (conv1, layer1.2.conv2, layer2.0.conv1, and fc weights).

Footnote 2: Block sizes of up to 256 already achieve more than 95% of the maximum hardware benefit for HBFP6.
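A minimal sketch of this measurement, assuming the bfp_quantize helper sketched earlier and using SciPy's one-dimensional Wasserstein distance (the paper does not state which estimator the authors used):

    import numpy as np
    from scipy.stats import wasserstein_distance

    def bfp_wasserstein(weight, mantissa_bits, block_size):
        """Wasserstein distance between an FP32 tensor and its BFP version."""
        flat = weight.reshape(-1).astype(np.float32)
        flat = np.pad(flat, (0, (-len(flat)) % block_size))  # whole blocks
        blocks = flat.reshape(-1, block_size)
        quantized = np.vstack([bfp_quantize(b, mantissa_bits) for b in blocks])
        return wasserstein_distance(flat, quantized.reshape(-1))

    w = np.random.randn(16, 16, 3, 3)   # stand-in for a conv weight tensor
    for bits in (6, 4):
        print(bits, [round(bfp_wasserstein(w, bits, bs), 4)
                     for bs in (16, 64, 256)])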
Even though reducing the block size incurs smaller Wasserstein distances and helps increase the accuracy, HBFP4 still fails to reach FP32 accuracy because it does not have enough precision to minimize the loss and has a high generalization error. [22] introduce a methodology to visualize loss landscapes in order to better understand the effect of loss landscapes on generalization. Figure 2 shows log-scale loss landscapes for various configurations, sliced along the x-axis (y = 0) for simplicity. The center of the plot corresponds to the current state of the minimizer, and the axis parameterizes a random direction with filter-wise normalization. HBFP4 converges to a much worse minimum compared to HBFP6 and FP32, indicating poor accuracy. Although the minimum of HBFP4 is flat, this does not indicate better generalization, because the minimum itself is much worse.

Fig. 2: Loss landscapes (log loss) of ResNet20 on CIFAR10 for various configurations (HBFP6, FP32, HBFP4, HBFP4+Layers, Accuracy Boosters), sliced along the x-axis.

Following the insights from prior work, we study the effect of the first and last layers of CNNs on model accuracy. The dotted and solid red lines in Figure 1 show the first and last layers, respectively, and it is clear that these layers are the most affected by lowering the precision, especially for HBFP4. Thus, we keep the first and last layers of CNNs in HBFP6 during HBFP4 training to increase its accuracy. However, the extra precision HBFP6 provides for the first and last layers still does not achieve enough optimization to reach FP32 accuracy. In Figure 2, the red dashed curve shows this configuration; the curve gets sharper and lower compared to HBFP4-only training. However, the generalization and optimization power of the model is still unbalanced, leading to convergence to another bad local minimum.

We introduce Accuracy Boosters, an epoch-driven mixed-mantissa HBFP that uses HBFP6 only in the last epoch and converts 99.7% of all arithmetic operations in training to HBFP4. We hypothesize that using HBFP6 for the last epoch is sufficient to boost the accuracy, while the rest of the epochs are trained using HBFP4. We leverage the insight that the last epochs have more effect on the final model's accuracy [13], [14], [28], [36]. We claim that training with 4-bit mantissas helps the model generalize and reach a certain loss value. Afterward, switching to 6-bit mantissas helps the model optimize and fine-tune in the final epochs and increases accuracy to the FP32 level. The loss landscape for Accuracy Boosters (the red solid curve in Figure 2) supports our hypothesis. We see that the curve gets very close (note that the plot is in log scale, so -2 is closer to -4 than to 0) to the HBFP6 and FP32 curves and finally achieves FP32 accuracy.

IV. EXPERIMENTAL RESULTS

We experiment on state-of-the-art models and datasets for various DNN tasks to test our hypotheses. We train ResNet20/74 [15] and DenseNet40 [16] on the CIFAR10 and CIFAR100 [19] datasets for image classification. We also train a Transformer-Base [33] on the WMT16 English-German dataset for machine translation. Models trained on CIFAR10 are trained for 160 epochs, whereas for CIFAR100 the total number of epochs for all models is 300. The Transformer is trained for 70 epochs. We use FP32 as the baseline for both model accuracies and hardware comparisons.
For the image classification experiments, we report the Top-1 validation accuracies; for machine translation, we report the BLEU scores. Moreover, to show the impact of our method, we tune the hyperparameters of the FP32 models and then train the same models from scratch with the same hyperparameters in HBFP, showing that our method can be used out of the box without further hyperparameter tuning.

We use an analytic model to estimate the area of the most basic operation in DNN training, a dot product followed by an activation unit, for different encodings. Fixing the operation enables us to compare the arithmetic density ((operations/s)/area) solely on the amount of area. Thus, we define the gain in arithmetic density to be the same as the gain in area. For FP32 dot product units of size N, we estimate the hardware cost as the sum of the cost of N-1 FP32 adders, N FP32 multipliers, one FP32 accumulator (adder), and one floating-point activation unit. For HBFP dot product units, we estimate the hardware cost as the sum of the cost of N-1 fixed-point adders, N fixed-point multipliers, one FP32 accumulator (adder), one floating-point activation unit, and one adder for signed exponents. We also add the costs of conversions between FP32 and fixed-point numbers by modeling the converter blocks.

A. Minimizing HBFP

Table I shows the Top-1 validation accuracies for ResNet20, ResNet74, and DenseNet40 on the CIFAR10 and CIFAR100 datasets trained with various HBFP configurations. We observe that HBFP6 is the smallest HBFP configuration that gives accuracies within 2% of FP32 accuracy for block sizes up to 256. Larger blocks contain a larger variety of values in terms of magnitude (affected, e.g., by outliers), so they result in larger approximation errors than smaller blocks and lower accuracy in training.

We also report HBFP4 accuracies to show the limitations of HBFP. Even for small models like ResNet20, with a block size of 16, the accuracy drops by more than 9%. As the accuracy drop for ResNet74 and DenseNet40 on CIFAR100 is considerably high even with HBFP5 (not shown here for compactness), we did not train these models with HBFP4. We observe that for HBFP4, the sensitivity to the block size increases for all the models because the distortions in the tensor distributions increase (see Section II).

TABLE I: Top-1 validation accuracies of various CNN models for various HBFP configurations

Number                    CIFAR10                CIFAR100
Format   Block/Area   ResNet20   ResNet74   ResNet74   DenseNet40
FP32     -            91.72      93.57      74.55      72.42
HBFP8    576/10.0     91.52      93.36      74.32      73.73
HBFP6    16/11.2      91.12      93.38      73.51      72.08
         25/12.3      91.09      92.54      73.20      71.77
         36/13.1      91.29      92.61      72.87      71.83
         49/13.6      91.33      92.93      72.40      71.87
         64/13.9      91.12      92.93      72.40      71.81
         256/14.8     91.38      92.79      72.53      71.50
         576/15.0     90.65      92.19      72.51      71.02
HBFP4    16/15.5      82.59      76.85      -          63.70
         25/17.8      81.82      78.62      -          64.25
         36/19.3      80.84      76.64      -          63.34
         49/20.4      79.32      71.19      -          65.55
         64/21.3      80.18      74.35      -          62.37
         256/23.4     76.96      60.65      -          60.02
         576/23.9     75.33      66.70      -          59.77
Total number of FLOPs required to train the model: 41M (ResNet20), 174M (ResNet74, CIFAR10), 326M (ResNet74, CIFAR100), 542M (DenseNet40)

B. Accuracy Boosters

Considering the HBFP hardware model, a block size of 64 is within 90% of the maximum area/power gain while achieving accuracies with less than 1% degradation for HBFP6. Thus, we choose a block size of 64 as the sweet spot and test Accuracy Boosters using this block size. We perform the last epoch of the training in HBFP6 and the rest in HBFP4 for all the experimental settings.
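The training recipe itself reduces to a small precision schedule. Below is a minimal Python sketch of that schedule; the function and layer names are our own illustration, not the authors' tooling.

    def booster_mantissa(epoch, total_epochs, layer_name,
                         first_layer="conv1", last_layer="fc",
                         low_bits=4, high_bits=6, boost_epochs=1):
        """Return the HBFP mantissa width for one layer at one epoch.

        Sketch of the Accuracy Boosters schedule: HBFP6 for the first/last
        layers (of CNNs) and for the final `boost_epochs` epochs, HBFP4
        everywhere else.
        """
        if layer_name in (first_layer, last_layer):
            return high_bits
        if epoch >= total_epochs - boost_epochs:
            return high_bits            # boost the last epoch(s)
        return low_bits

    # e.g., for a 160-epoch CIFAR10 run:
    for epoch in (0, 158, 159):
        print(epoch, booster_mantissa(epoch, 160, "layer2.0.conv1"),
              booster_mantissa(epoch, 160, "fc"))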
We also trained keeping the last 10 epochs in HBFP6 to observe the improvement in accuracy for the CNN models. We keep all CNN models' first and last layers in HBFP6. The first and last layers of the CNN models account for a negligible amount of computation; thus, keeping them in slightly higher precision during HBFP training does not result in a significant increase in hardware area or energy consumption. We can see that for most of the CNN models, Accuracy Boosters outperform FP32. When we keep the last 10 epochs in HBFP6, we observe that the accuracies slightly increase (see Table II).

TABLE II: Top-1 validation accuracies of various CNN models for Accuracy Boosters

Epochs using   CIFAR10                CIFAR100
HBFP6          ResNet20   ResNet74   ResNet74   DenseNet40
Only last      91.24      92.62      73.74      73.61
Last 10        91.36      93.02      74.28      74.10
FP32           91.72      93.57      74.55      72.42

Table III shows the results of applying Accuracy Boosters to the Transformer. We observe that for the Transformer, HBFP6 performs similarly to FP32. While standalone HBFP4 does not incur a significant accuracy loss, Accuracy Boosters still help further bridge the gap to FP32 and even outperform it.

TABLE III: BLEU scores for Transformer-Base trained on the IWSLT'14 De-En task for various encodings

              FP32    HBFP6   HBFP4   Booster
BLEU Score    34.77   34.47   32.64   36.08

We observe that mixed-mantissa training using Accuracy Boosters can be carried out on arithmetic units designed for HBFP4. The small fraction of total training operations that use HBFP6 can be bit-sliced to fit on the 4-bit arithmetic units, similar to techniques proposed in prior work [38], while maintaining the same throughput. Thus, we claim that the arithmetic density of a hardware accelerator using Accuracy Boosters will be approximately equal to the arithmetic density of HBFP4.

In conclusion, Accuracy Boosters offer up to 21.3x higher arithmetic density compared to FP32 by using only 4 bits for 99.7% of total training computations while achieving comparable or better accuracy. Our analytic model estimates that another state-of-the-art reduced-precision format, BFloat16, only offers 4.9x higher arithmetic density compared to FP32. Hence, the much superior arithmetic density of HBFP4 enables Accuracy Boosters to offer a further 4.4x higher arithmetic density compared to BFloat16. Apart from arithmetic density, 4-bit mantissas promise significant memory savings, but the exact value depends on the layout and scheme and is outside the scope of this work.

V. RELATED WORK

In recent years, there has been a significant amount of research on inference and training [4], [5], [8], [17], [21], [23], [29], [39] with narrow numerical representations. Google Brain's BFloat16 [35], NVIDIA's mixed-precision training with FP16 [25], and another mixed-precision scheme using FP8 [31] are the most commonly used. Recent research advocates the use of block floating point for DNN training [11] and inference [6]. Flexpoint [20] and Dynamic Fixed-Point [7] propose block-floating-point formats for training with a 16-bit mantissa and a shared exponent. Prior work proposed a novel format for training DNNs with BFP, called Hybrid Block Floating Point (HBFP) [10]. In this paper, we argue that reducing the mantissa bit width in HBFP significantly improves silicon efficiency when designing hardware for DNN training.

Many have proposed techniques to compensate for the data loss introduced by narrower numerical representations [12], [24], [31], [32]. Mixed-precision training has emerged as a popular technique to recover the information loss caused by quantization.
Several techniques vary the precision layer-wise by using higher-precision arithmetic for layers with greater significance [18], [30], [37]. Specifically, [3], [24], [34], [39] use FP32 for the first and last layers. [13] employ fixed-point arithmetic with different bit widths epoch-wise over the course of training. Combining the layer-wise and epoch-wise approaches, [14], [27], [38] vary the precision adaptively per epoch and per layer at the same time using control mechanisms. While all the aforementioned studies employ leaner arithmetic for a fraction of the training process, they fail to make leaner arithmetic the common case of the training process.

Recent work [9] suggests that during mixed-precision FP16 training, the optimizer states can be reduced to 8 bits by using a block-wise quantization method. This observation is in line with our work, which applies quantization by extracting the largest exponent per block. Similarly, FAST [38] uses a block-floating-point-based, layer-wise mixed-precision approach with 2- and 4-bit mantissas. Unlike our work, FAST requires fine-tuning several additional hyperparameters for its training algorithm, making it difficult to apply to other DNN models. Another block-floating-point-based work, FlexBlock [27], uses 4- and 8-bit mantissas with various block sizes and also needs higher-precision block-floating-point formats for weight gradient calculations, which suffer more from quantization errors.

VI. CONCLUSION

Several low-precision training techniques and specialized numerical formats have been introduced over the past decade to increase the arithmetic density of DNN accelerators. One such format, Hybrid Block Floating Point (HBFP), which allows the majority of a DNN's arithmetic operations (i.e., dot products) to be performed using fixed-point arithmetic, has been shown to achieve FP32 accuracy with 8-bit mantissas. However, a smaller number of mantissa bits allows for exceptional improvements in arithmetic density. In this paper, we perform a full-scale exploration of the HBFP design space for emerging models and datasets. We show that HBFP6 is the smallest HBFP format achieving FP32 accuracy for all block sizes. We propose the Accuracy Boosters technique to bring HBFP4 into training, using HBFP6 in the last epoch, leveraging the insight that each epoch has a different effect on training. We show that the last stage of training requires more precision than the rest. We use an analytic model to show that our method achieves up to 21.3x higher arithmetic density over FP32 and 4.4x higher density over BFloat16, while maintaining or outperforming FP32 accuracy.

ACKNOWLEDGEMENTS

The authors thank the anonymous reviewers and the members of PARSA at EPFL for their precious comments and feedback. We would also like to thank Nicholas Sperry for his contributions to the loss landscape experiments. This work has been partially funded by a Microsoft PhD Fellowship, and the following grant: "Unified Accelerators for Post-Moore Machine Learning" from the Swiss National Science Foundation (SNSF).

REFERENCES
[1] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv:1701.07875, 2017.
[2] T. B. Brown et al., "Language models are few-shot learners," NeurIPS, 2020.
[3] J. Choi, Z. Wang, S. Venkataramani, P. I. Chuang, V. Srinivasan, and K. Gopalakrishnan, "PACT: Parameterized clipping activation for quantized neural networks," arXiv:1805.06085, 2018.
[4] M. Courbariaux, Y. Bengio, and J. David, "BinaryConnect: Training deep neural networks with binary weights during propagations," NeurIPS, 2015.
[5] M. Courbariaux, Y. Bengio, and J. David, "Low precision arithmetic for deep learning," ICLR Workshop, 2015.
[6] B. Darvish Rouhani et al., "Pushing the limits of narrow precision inferencing at cloud scale with Microsoft Floating Point," NeurIPS, 2020.
[7] D. Das et al., "Mixed precision training of convolutional neural networks using integer operations," ICLR, 2018.
[8] T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer, "LLM.int8(): 8-bit matrix multiplication for transformers at scale," arXiv:2208.07339, 2022.
[9] T. Dettmers, M. Lewis, S. Shleifer, and L. Zettlemoyer, "8-bit optimizers via block-wise quantization," ICLR, 2022.
[10] M. Drumond, T. Lin, M. Jaggi, and B. Falsafi, "Training DNNs with Hybrid Block Floating Point," arXiv:1804.01526, 2018.
[11] M. P. Drumond, "ColTraIn: Co-located DNN training and inference," 2020.
[12] S. Fox, S. Rasoulinezhad, J. Faraone, D. Boland, and P. H. W. Leong, "A block minifloat representation for training deep neural networks," ICLR, 2021.
[13] Y. Fu et al., "CPT: Efficient deep neural network training via cyclic precision," ICLR, 2021.
[14] Y. Fu et al., "FracTrain: Fractionally squeezing bit savings both temporally and spatially for efficient DNN training," NeurIPS, 2020.
[15] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CVPR, 2016.
[16] G. Huang, Z. Liu, and K. Q. Weinberger, "Densely connected convolutional networks," arXiv:1608.06993, 2016.
[17] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks," NeurIPS, 2016.
[18] S. Khoram and J. Li, "Adaptive quantization of neural networks," 2018.
[19] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," University of Toronto, Tech. Rep., 2009.
[20] U. Koster et al., "Flexpoint: An adaptive numerical format for efficient training of deep neural networks," NeurIPS, 2017.
[21] F. Li and B. Liu, "Ternary weight networks," arXiv:1605.04711, 2016.
[22] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, "Visualizing the loss landscape of neural nets," NeurIPS, 2018.
[23] D. D. Lin, S. S. Talathi, and V. S. Annapureddy, "Fixed point quantization of deep convolutional networks," ICML, 2016.
[24] N. Mellempudi, S. Srinivasan, D. Das, and B. Kaul, "Mixed precision training with 8-bit floating point," arXiv:1905.12334, 2019.
[25] P. Micikevicius et al., "Mixed precision training," arXiv:1710.03740, 2018.
[26] S. Migacz, "8-bit inference with TensorRT," 2017.
[27] S.-H. Noh, J. Koo, S. Lee, J. Park, and J. Kung, "FlexBlock: A flexible DNN training accelerator with multi-mode block floating point support," arXiv:2203.06673, 2022.
[28] N. Rahaman et al., "On the spectral bias of neural networks," ICML, 2019.
[29] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," ECCV, 2016.
[30] J. Shen et al., "Fractional skipping: Towards finer-grained dynamic CNN inference," AAAI, 2020.
[31] X. Sun et al., "Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks," NeurIPS, 2019.
[32] X. Sun et al., "Ultra-low precision 4-bit training of deep neural networks," NeurIPS, 2020.
[33] A. Vaswani et al., "Attention is all you need," NeurIPS, 2017.
[34] N. Wang, J. Choi, D. Brand, C.-Y. Chen, and K. Gopalakrishnan, "Training deep neural networks with 8-bit floating point numbers," NeurIPS, 2018.
[35] S. Wang and P. Kanwar, "BFloat16: The secret to high performance on Cloud TPUs," 2019.
[36] Z. J. Xu, Y. Zhang, T. Luo, Y. Xiao, and Z. Ma, "Frequency principle: Fourier analysis sheds light on deep neural networks," arXiv:1901.06523, 2019.
[37] L. Yang and Q. Jin, "FracBits: Mixed precision quantization via fractional bit-widths," AAAI, 2021.
[38] S. Q. Zhang, B. McDanel, and H. T. Kung, "FAST: DNN training under variable precision block floating point with stochastic rounding," arXiv:2110.15456, 2021.
[39] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, "DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients," arXiv:1606.06160, 2018. |
ymfPxccNUZ | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

Towards A Reconfigurable Systolic Array with Multi-Level Packing for Transformers

Tiandong Zhao∗, Siyuan Miao∗, Jialin Cao†, Shaoqiang Lu§, Jun Qiu‡, Xiao Shi‡, Kun Wang†, Lei He∗§
∗University of California, Los Angeles. zhaotiandong@ucla.edu
†Fudan University, China. wangk@fudan.edu.cn
‡Southeast University, China. xshi@seu.edu.cn
§Eastern Institute of Technology, China. he@eias.ac.cn

Abstract—Transformer-based models have achieved remarkable success in extensive natural language processing tasks. To handle the variable-length sentences in human language, prior works suffer from low hardware efficiency, due either to the shape mismatch between fixed-shape PEs (processing elements) and variable-shape workloads under data parallelism, or to large bubbles under pipeline parallelism. This ongoing work proposes a hybrid parallelism that mixes data parallelism for linear operators with pipeline parallelism for the attention. We develop a reconfigurable systolic array with multi-level packing to improve hardware efficiency. First, linear operators for different inputs can be packed along the array columns to improve spatial efficiency. Meanwhile, to boost temporal efficiency, we develop a head-level pipeline for attention with different stages packed on the array. We further skip the redundant computation in the masked attention by packing the computation of two heads along time. Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our FPGA design has achieved 1.16× higher normalized throughput and 1.94× better runtime MAC utilization over the state-of-the-art GPU performance for variable-length sequences from the MRPC, RTE and SQuADv2 datasets.

I. INTRODUCTION

Transformer-based models have achieved remarkable triumphs in a wide range of deep learning tasks for natural language processing, such as machine translation [21], text classification [4] and generation [18], [19]. This extensive success is attributed to the task-agnostic model architecture, with increasing numbers of encoder and decoder layers and vocabulary sizes for better quality on various tasks. Such a trend, along with the unlimited text length that these language models need to handle, results in huge amounts of computation and parameters. The full GPT-3 [2] holds 175 billion parameters and requires 3.14×10^23 floating-point operations (FLOPs) for training, which would cost over $4.6M using a Tesla V100 cloud instance for a single training run. The progressively higher computational demand calls for the need to exploit the efficiency of these models on devices.

The acceleration approaches in prior works fall into two paradigms. One approach exploits intra-operator data parallelism (Fig. 1a) on an operator-by-operator basis. [16] optimizes the operator partitioning for Transformer inference on TPUv4 [9]. [7], [10] boost GPU performance with tensor cores, while [3], [23] focus on memory optimizations on GPU. [24]–[26], [29] develop specialized processing elements (PEs) for different operators. Padding arises from variable-length inputs in a batch, due to the fact that popular deep learning frameworks [1], [14] can only handle rectangular shapes.

Fig. 1. Execution timeline for different parallel approaches. (a) Intra-operator data parallelism. (b) Sub-layer pipeline parallelism. (c) Hybrid parallelism. The time length of each block is only for illustration.
The padded zeros thus introduce excessive overhead in both computation and memory. [6], [27] reduce the padding redundancy by reordering inputs during pre-processing, while [28] eliminates padding for linear blocks and fused attention on GPU by offsetting the variable-length inputs in memory. These works suffer from low efficiency either spatially, from the mismatch between fixed-shape PEs and variable-shape workloads, or temporally, from non-overlapped memory access latency, especially in data-parallel fused attention.

Another approach resorts to inter-operator pipeline parallelism (Fig. 1b), where consecutive operators are assigned to different PEs. [5], [8], [12], [13] accelerate training of deep learning models with a micro-batch layer pipeline. [15] constructs a sequence-wise sub-layer pipeline with approximated attention and feed-forward network. Since the computation of attention and linear blocks is at least of linear complexity in the variable input length, pipeline parallelism at the sub-layer level inevitably results in severe pipeline bubbles for large length variance across input sequences.

We have two key observations on the GPU performance of Transformer-based models with intra-operator data parallelism. First, the attention suffers from both low temporal and spatial efficiency even with a fused kernel, indicating that the attention could potentially benefit more from inter-operator pipeline parallelism than from intra-operator data parallelism. Second, the highly optimized linear blocks only obtain 70% temporal efficiency and 25% spatial efficiency. Besides the shape mismatch, they are also limited by the capacity of shared memory and registers, because data parallelism leads to data replication, meaning less data reuse.

To address the above problems, we propose a hybrid parallelism (Fig. 1c): data parallelism for linear blocks and pipeline parallelism for attention. The latter is finer-grained than the sub-layer level, to reduce pipeline bubbles. However, challenges arise in the architecture support for this hybrid parallelism. On one hand, inter-operator pipeline parallelism needs to split the PEs across pipeline stages; the shape mismatch can also be alleviated with this split, along with workload decomposition. On the other hand, intra-operator data parallelism needs to unify the PEs and registers to maximize data reuse. To meet both requirements, we propose a runtime reconfigurable systolic array (RSA), where PEs across columns can work either together for a single operator or separately for multiple operators. Specifically, the RSA can be split for different input tokens in linear operators, or for different pipeline stages in the attention. We use column packing for this column-wise reconfigurable working pattern. Moreover, the masking in the decoder, which preserves the auto-regressive property to prevent leftward information flow, brings 50% redundancy in attention, especially for long sequences, but is neglected in prior works. We further propose mask packing to skip the redundant computation between two heads assigned to the same RSA columns.

Our contributions are summarized as follows:
• We develop a reconfigurable systolic array for hybrid parallelism, with data parallelism for linear blocks and pipeline parallelism for attention, to improve the hardware efficiency of Transformer-based models.
• We propose a two-level packing, column packing and mask packing, to boost efficiency spatially and temporally for variable-length inputs.
Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput.
• Applied to GPT, our design on a U200 FPGA shows 1.16× higher normalized throughput and 1.94× better runtime MAC utilization over the state-of-the-art GPU performance for variable-length input sequences from the MRPC, RTE and SQuADv2 datasets.

In the following sections, we first describe the details of column packing and mask packing in Section II and then propose the RSA architecture in Section III. We then explore the column packing decisions for hybrid parallelism in Section IV. Section V and Section VI present experiment results and conclusions.

II. METHOD

A. Column Packing

1) Pack Linear Blocks: We exploit the intra-operator data parallelism for each linear operator, namely an M×N×K matrix multiplication (MM), where M, N, K stand for input rows, output columns and hidden size. We have some observations on the MM shapes in a Transformer-based model. For a variable-length input, M is equal to the input sequence length L. Whether N and K are variable differs across MMs. In the first case, which is also the most common case in the linear blocks of Transformer-based models, N and K are fixed as a multiple of the head size d_h. The second case comprises the two MMs in the attention with variable N or K, whose shapes are L×L×d_h and L×d_h×L. The shape mismatch between fixed-shape PEs and variable-shape MMs leads to low efficiency. Rather than suffering from multi-dimensional shape mismatch between PE and MM, we map the fixed shapes to the RSA rows and the variable shapes to the RSA columns and the temporal dimension, so that the shape mismatch can be maximally alleviated by column packing. More specifically, for an MM with variable-length inputs, we pack N from different input sequences along the RSA columns to maximize spatial efficiency. We also take advantage of split-k, as described in [10], to partially unroll the K dimension to balance the parallel workloads along columns for temporal efficiency.

2) Pack Attention: We propose a coarse-grained head-level pipeline for the attention with six stages: K/Q load, MM KQ^T, V load, softmax, MM SV^T, and final save. The two MMs are packed along RSA columns during the pipeline, where the former has variable N and the latter variable K. Since two different variable dimensions are mapped to RSA columns, weight-stationary and output-stationary dataflows are respectively required. Moreover, the number of heads to run per stage warrants study. Packing more heads into an MM stage leads to better spatial efficiency locally within the stage, but potentially results in worse global efficiency, since the larger pipeline granularity brings more bubbles. The pipeline stage partition is discussed further in Section IV.

B. Mask Packing

Each token only needs the computation results from its preceding tokens in the input sequence, not from those after it. A Transformer decoder masks out the unnecessary ones to preserve the auto-regressive property. To eliminate the masking redundancy in softmax(mask(KQ^T))V^T, we propose mask packing, as in Fig. 2d. Rather than applying masking after the full computation of two KQ^Ts, we skip the redundant computation and only generate a packed result matrix S. We use S as the packed layout for the following softmax and SV^T for memory efficiency. The PEs therefore need to handle KQ^T and SV^T with fixed and variable reduction lengths, respectively, and the softmax module needs to handle vectors in the packed layout.

Fig. 2. (a) System diagram. (b) Circuit diagram of an RSA PE and its coupled shift registers. The input data path and buffer switch can be configured for packing. (c) RSA split into halves in different dataflows for column packing. (d) Mask packing: an example where we skip the redundant computation for KQ^T from two heads.
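To make the packed layout concrete, the following NumPy sketch illustrates the idea behind mask packing; it is our illustration of the principle, not necessarily the exact layout of Fig. 2d. The causal mask discards the strictly upper triangle of each KQ^T score matrix, so the kept triangles of two heads can share a single L×L buffer; since the two kept triangles together hold one diagonal too many, head 2's diagonal is kept on the side here.

```python
import numpy as np

def packed_causal_scores(k1, q1, k2, q2):
    """Pack the causally masked KQ^T scores of two heads into one
    L x L buffer S. Head 1 occupies the lower triangle (with its
    diagonal); head 2's strict lower triangle is stored transposed in
    the upper triangle, and its diagonal is returned separately."""
    L = q1.shape[0]
    s1 = k1 @ q1.T   # in hardware, the masked-out half is never computed
    s2 = k2 @ q2.T
    lower = np.tril(np.ones((L, L), dtype=bool))
    S = np.where(lower, s1, s2.T)   # packed result matrix S
    d2 = np.diag(s2).copy()         # head 2's diagonal, kept on the side
    return S, d2
```

Note that head 2's kept scores appear as columns of S rather than rows, which is one reason the softmax module must reduce over vectors in the packed layout, as described above.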
III. HARDWARE ARCHITECTURE DESIGN

To provide the underlying architecture for column packing and mask packing, we develop the RSA along with arbiter networks and a nonlinear vector module (NVM), as shown in Fig. 2b.

We develop a two-dimensional systolic array with PEs and coupled shift registers. Fig. 2b shows the circuit diagram of an RSA PE, which has three levels of reconfigurability via its control signals (gray). First, use_reg_row configures the RSA split along columns by selecting the input data path to the multiplier. If it is set to 1, the multiplier takes the value stored in the row register (orange) as input via the reconfigurable data path (blue), rather than the forwarded value from the left PE; the two neighboring PEs can thus work on separate workloads. Second, the coupled shift register is for input buffering, and its buffer switch can be configured for mask packing. Third, use_forward_psum configures the dataflow for the RSA. If it is set to 1, the PE uses the partial sum forwarded from the upper PE, enabling a weight-stationary dataflow; otherwise, the accumulation is performed locally as an output-stationary dataflow. The reconfigurable dataflow serves the two MMs in attention, with variable N and K respectively, so that they can be packed along RSA columns.

We use two arbiter networks for the interconnection between the RSA and on-chip buffers, to meet the different communication patterns under column packing. For an MM where the K dimension is partially unrolled across columns, we need to collectively reduce the partial sums from multiple columns. For the attention pipeline, the result of the first MM computed on one RSA partition is written to on-chip buffers and then fed to another RSA partition. These two patterns are realized by the arbiter networks. Moreover, to handle the packed layout for mask packing, our NVM takes advantage of the configurable reduction tree proposed in [17] for maximum and sum reductions of arbitrary length in softmax.

IV. SCHEDULING

A. Column Packing for a Single Operator

We first discuss the column packing decisions for an MM with variable-length inputs of shape L_i×N×K, where L_i is the length of the i-th input in a batch. Mapping N and part of K to the spatial column dimension and L_i to the temporal dimension, we enumerate all combinations of N values and K factors to find the pair with maximal spatial efficiency. For example, consider mapping an MM with N = 4 and K = 8 to an RSA with 16 columns and 4 rows. To maximize spatial efficiency, we unroll K = 8 along 2 columns in addition to the 4 rows. Still, only 8 columns (N = 4 × 2, where 2 is K's column unroll factor) are used. So we split the RSA into two partitions, each holding 8 columns and serving part of the L_i along time. We then split the L_i into two parts to balance the workload packed along the columns.
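As a concrete illustration of this mapping, the sketch below (our own simplification, not the authors' code) fixes the column unroll of K at ceil(K/R) instead of enumerating all K factors, and computes how many input sequences can be packed side by side:

```python
def column_packing(n, k, num_cols, num_rows):
    """Sketch of the Section IV-A mapping for an MM of shape
    L_i x N x K on an RSA with num_cols columns and num_rows rows.
    K is reduced across the rows first; the remainder is partially
    unrolled across columns (split-k), and copies of the resulting
    N * ku column group are packed side by side for different inputs."""
    ku = -(-k // num_rows)                  # ceil(K / R): K column unroll
    cols_per_input = n * ku                 # columns one input occupies
    partitions = num_cols // cols_per_input
    spatial_eff = partitions * cols_per_input / num_cols
    return ku, partitions, spatial_eff

# The paper's example: N = 4, K = 8 on a 16-column, 4-row RSA.
# K is unrolled over 2 columns, the RSA splits into 2 partitions of
# 8 columns each, and all 16 columns are used.
print(column_packing(4, 8, 16, 4))  # -> (2, 2, 1.0)
```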
B. Column Packing for the Attention Pipeline

For the attention pipeline at head level, we aim to split the head sequence H = {h_ij}, where i is the sequence index in a batch and j is the head index, into multiple stages while minimizing the overall latency. Within each stage, the intra-operator column packing of Section IV-A is applied. Mask packing is also applied to each MM. We formulate the pipeline stage partition as a dynamic programming problem, whose optimal sub-structure is listed in Eq. 1. Here, p is a bit vector, where 1 means the k-th head in H is packed with its last preceding head and 0 means no packing. The stage partition can be inferred from p with a simple union-and-find method. Column packing is constrained by the on-chip memory capacity: M_max is the maximally allowed on-chip memory pressure and M_k is the memory pressure of the k-th head. Iterating over the h_k in H, we find the maximal overall throughput T with each head either packed into the same stage as its last preceding head or not. If h_k is packed, we bookkeep p[k] = 1, and M_k is held out of the remaining budget when exploring the column packing decision for the next head. Otherwise, we check the packing of the next head at a new stage with a fresh M_max. The optimal stage partition maximally reduces the pipeline bubbles.

$$T(k, p, M) = \max
\begin{cases}
T(k-1,\; p[k]=1,\; M - M_k), & M > M_k \\
T(k-1,\; p[k]=0,\; M_{\max})
\end{cases} \qquad (1)$$
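A minimal Python sketch of this stage-partition search is shown below. It assumes a simplified cost model, with latencies additive within a stage and pipeline throughput set by the slowest stage, whereas the paper's model also applies column packing and mask packing inside each stage; the memory cap mirrors the M > M_k constraint of Eq. 1.

```python
def partition_heads(lat, mem, m_max):
    """Split heads into contiguous pipeline stages, minimizing the
    bottleneck stage latency under a per-stage memory cap m_max.
    lat[k], mem[k]: latency and on-chip memory pressure of head k.
    Returns (bottleneck, p) with p the pack bit vector of Eq. (1):
    p[k] = 1 packs head k with its preceding head, 0 opens a stage."""
    n = len(lat)
    INF = float("inf")
    best = [0.0] + [INF] * n   # best[j]: min bottleneck for heads [0, j)
    cut = [0] * (n + 1)        # backpointer: start of the last stage
    for j in range(1, n + 1):
        stage_lat = stage_mem = 0.0
        for i in range(j - 1, -1, -1):     # last stage = heads [i, j)
            stage_lat += lat[i]
            stage_mem += mem[i]
            if stage_mem > m_max:          # memory constraint of Eq. (1)
                break
            bottleneck = max(best[i], stage_lat)
            if bottleneck < best[j]:
                best[j], cut[j] = bottleneck, i
    p = [1] * n                # recover p by walking the backpointers
    j = n
    while j > 0:
        p[cut[j]] = 0
        j = cut[j]
    return best[n], p

print(partition_heads([4, 4, 2, 2, 8], [1, 1, 1, 1, 2], m_max=2))
```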
V. EXPERIMENT RESULTS

A. Evaluation Setting

We implement our accelerator on a Xilinx U200 FPGA, with an RSA of 4 rows and 1024 columns, an NVM of vector length 32, and four DDR4 channels. The latency in cycle counts for the evaluation below is collected through RTL-level simulation with the Xilinx Vivado Suite.

We benchmark our design with 6 different settings of input sequence length. The first three are constructed with fixed-length inputs of length 64, 512 and 2048 at batch size 8. The other three collect variable-length inputs from the MRPC, RTE and SQuADv2 [20] test sets, respectively, packed into batches of size 8. The three datasets have average/maximal sequence lengths of 14/40, 54/240 and 167/791. The former two datasets are from the GLUE [22] benchmark suite, with representative small and medium lengths, while the latter covers more long sequences. We run these datasets on a small GPT-3 for evaluation. The model includes 12 layers, 768 embedding size, 12 heads and 64 head size.

B. Performance with Step-wise Optimization

We apply a step-wise evaluation to show the effect of hybrid parallelism with column packing and mask packing. Intra-operator Data Parallel runs the model on the RSA with intra-operator data parallelism on an operator-by-operator basis. Layer Pipeline runs a two-stage layer pipeline parallelism, where the RSA is split into halves and each half runs a Transformer layer for different sequences. The other three settings apply hybrid parallelism incrementally with the different packing methods.

Fig. 3 shows the impact of the step-wise optimization on GPT. One can find that Layer Pipeline is limited by off-chip memory bandwidth on a single device and thus performs worse for longer sequences. The hybrid parallelism is effective in all cases, with 1.17× higher throughput on average than intra-operator data parallelism, while column packing and mask packing bring 1.21× and 1.26× performance boosts respectively. Column packing gains more for short sequences, such as fixed 64, MRPC and RTE, with better spatial efficiency. Mask packing benefits long sequences more, bringing an additional 30% for fixed 2048 but being marginal in the other cases. This is because the computation of attention grows quadratically with sequence length: attention takes 50% of the total computation for fixed 2048, while it takes less than 10% for fixed 64.

Fig. 3. GPT performance on RSA with step-wise optimization.

C. End-to-End Performance

Table I compares our performance with other works on GPU and FPGA running GPT. Batch size 8 is used for all cases. We evaluate GPU performance with [28], which is the state-of-the-art GPU work for variable-length inputs. [15] is optimized for variable-length inputs on FPGA but does not include detailed throughput for each dataset. [11] only reports performance for fixed-length inputs, which is also the case for other FPGA works. So we compare the performance for fixed-length-128 inputs with [11] and that for variable-length inputs with [28] on the three datasets. The throughput is normalized by MAC units and 16-bit precision. Our design outperforms the GPU and FPGA works by 1.16× and 2.11× in normalized throughput, respectively, across fixed-length and variable-length inputs. The advantage comes from the better efficiency due to column packing and mask packing on our RSA, which shows 1.94× and 1.18× better MAC efficiency over the GPU and FPGA works. This observation exhibits the advantage of our RSA architecture over others for Transformer-based models with variable-length inputs.

TABLE I. COMPARISON OF END-TO-END PERFORMANCE WITH GPU AND OTHER FPGA WORKS.

Input Sequence          | Fixed 128                           | MRPC                   | RTE                    | SQuADv2
Platform                | [28]      [11]         Ours         | [28]      Ours         | [28]      Ours         | [28]      Ours
Device                  | A100 GPU  ZCU102 FPGA  U200 FPGA    | A100 GPU  U200 FPGA    | A100 GPU  U200 FPGA    | A100 GPU  U200 FPGA
Precision               | FP16      INT8         INT16        | FP16      INT16        | FP16      INT16        | FP16      INT16
Frequency (MHz)         | 1095      214          200          | 1095      200          | 1095      200          | 1095      200
Tensor Cores / DSPs     | 432       3287         4160         | 432       4160         | 432       4160         | 432       4160
Runtime Utilization     | 0.60      0.79         0.93         | 0.21      0.66         | 0.34      0.70         | 0.54      0.75
  (FLOPS/MAC)
Normalized Throughput   | 0.16      0.09         0.19         | 0.11      0.13         | 0.10      0.14         | 0.14      0.15
  (GFLOPS/MAC)

VI. CONCLUSION

We propose a hybrid parallelism for Transformer-based models with variable-length inputs: data parallelism for linear operators and pipeline parallelism for attention. To make this possible, we develop a reconfigurable systolic array with multi-level packing. First, for a single linear operator, we pack the computation of different input sequences along the array columns for spatial efficiency. Second, to improve the temporal efficiency of the attention block, we develop a head-level pipeline with stages packed along the array columns. Moreover, we develop mask packing to skip the redundant computation that is masked out by Transformer decoder masking. Column packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our design on a Xilinx U200 FPGA outperforms the state-of-the-art GPU work for variable-length inputs by 1.16× in normalized throughput and 1.94× in runtime MAC utilization across the MRPC, RTE and SQuADv2 datasets.

REFERENCES

[1] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "TensorFlow: a system for large-scale machine learning," in OSDI, vol. 16, no. 2016. Savannah, GA, USA, 2016, pp. 265–283.
[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[3] T. Dao, D. Y. Fu, S. Ermon, A. Rudra, and C. Ré, "FlashAttention: Fast and memory-efficient exact attention with IO-awareness," arXiv preprint arXiv:2205.14135, 2022.
[4] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[5] S. Fan, Y. Rong, C. Meng, Z. Cao, S. Wang, Z. Zheng, C. Wu, G. Long, J. Yang, L. Xia et al., "DAPPLE: A pipelined data parallel approach for training large models," in Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2021, pp. 431–445.
[6] J. Fang, Y. Yu, C. Zhao, and J. Zhou, "TurboTransformers: An efficient GPU serving system for Transformer models," ser. PPoPP '21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 389–402. [Online]. Available: https://doi.org/10.1145/3437801.3441578
[7] S. Feng, B. Hou, H. Jin, W. Lin, J. Shao, R. Lai, Z. Ye, L. Zheng, C. H. Yu, Y. Yu et al., "TensorIR: An abstraction for automatic tensorized program optimization," in Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 2023, pp. 804–817.
[8] Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu et al., "GPipe: Efficient training of giant neural networks using pipeline parallelism," Advances in Neural Information Processing Systems, vol. 32, 2019.
[9] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., "In-datacenter performance analysis of a tensor processing unit," in Proceedings of the 44th Annual International Symposium on Computer Architecture, 2017, pp. 1–12.
[10] A. Kerr, D. Merrill, J. Demouth, and J. Tran, "CUTLASS: Fast linear algebra in CUDA C++," NVIDIA Developer Blog, 2017.
[11] Z. Liu, G. Li, and J. Cheng, "Hardware acceleration of fully quantized BERT for efficient natural language processing," in 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021, pp. 513–516.
[12] D. Narayanan, A. Harlap, A. Phanishayee, V. Seshadri, N. R. Devanur, G. R. Ganger, P. B. Gibbons, and M. Zaharia, "PipeDream: Generalized pipeline parallelism for DNN training," in Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019, pp. 1–15.
[13] D. Narayanan, A. Phanishayee, K. Shi, X. Chen, and M. Zaharia, "Memory-efficient pipeline-parallel DNN training," in International Conference on Machine Learning. PMLR, 2021, pp. 7937–7947.
[14] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
[15] H. Peng, S. Huang, S. Chen, B. Li, T. Geng, A. Li, W. Jiang, W. Wen, J. Bi, H. Liu, and C. Ding, "A length adaptive algorithm-hardware co-design of Transformer on FPGA through sparse attention and dynamic pipelining," in Proceedings of the 59th ACM/IEEE Design Automation Conference, ser. DAC '22. New York, NY, USA: Association for Computing Machinery, 2022, pp. 1135–1140. [Online]. Available: https://doi.org/10.1145/3489517.3530585
[16] R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean, "Efficiently scaling transformer inference," arXiv preprint arXiv:2211.05102, 2022.
[17] E. Qin, A. Samajdar, H. Kwon, V. Nadella, S. Srinivasan, D. Das, B. Kaul, and T. Krishna, "SIGMA: A sparse and irregular GEMM accelerator with flexible interconnects for DNN training," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), 2020, pp. 58–70.
[18] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018.
[19] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
[20] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, "SQuAD: 100,000+ questions for machine comprehension of text," arXiv preprint arXiv:1606.05250, 2016.
[21] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[22] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman, "GLUE: A multi-task benchmark and analysis platform for natural language understanding," arXiv preprint arXiv:1804.07461, 2018.
[23] X. Wang, Y. Xiong, X. Qian, Y. Wei, L. Li, and M. Wang, "LightSeq2: Accelerated training for Transformer-based models on GPUs," arXiv preprint arXiv:2110.05722, 2021.
[24] Y. Yu, C. Wu, T. Zhao, K. Wang, and L. He, "OPU: An FPGA-based overlay processor for convolutional neural networks," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 1, pp. 35–47, 2019.
[25] Y. Yu, T. Zhao, K. Wang, and L. He, "Light-OPU: An FPGA-based overlay processor for lightweight convolutional neural networks," in Proceedings of the 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2020, pp. 122–132.
[26] Y. Yu, T. Zhao, M. Wang, K. Wang, and L. He, "Uni-OPU: An FPGA-based uniform accelerator for convolutional and transposed convolutional networks," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 7, pp. 1545–1556, 2020.
[27] J. Zeng, M. Li, Z. Wu, J. Liu, Y. Liu, D. Yu, and Y. Ma, "Boosting distributed training performance of the unpadded BERT model," arXiv preprint arXiv:2208.08124, 2022.
[28] Y. Zhai, C. Jiang, L. Wang, X. Jia, S. Zhang, Z. Chen, X. Liu, and Y. Zhu, "ByteTransformer: A high-performance Transformer boosted for variable-length inputs," 2023.
[29] T. Zhao, Y. Yu, K. Wang, and L. He, "Heterogeneous dual-core overlay processor for light-weight CNNs," in 2021 IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2021, pp. 264–264.
xd5qPRXLl7 | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

Accelerating Attention Based Models via HW-SW Co-Design using Fine-Grained Sparsification

Abhimanyu Rajeshkumar Bambhaniya∗, Amir Yazdanbakhsh†, Suvinay Subramanian‡, Tushar Krishna∗
∗Georgia Institute of Technology. abambhaniya3@gatech.edu, tushar@ece.gatech.edu
†Google DeepMind. ayazdan@google.com
‡Google. suvinay@google.com

Abstract—This paper proposes FIne-Grained Sparsification (FIGS), a novel architecture for accelerating attention-based models using N:M structured sparsity. Existing hardware accelerators focus on optimizing compute to achieve ideal processing element (PE) utilization but ignore the implications of higher input bandwidth. FIGS overcomes this challenge by leveraging techniques like grouping and reusing input data to reduce the required input bandwidth, achieving high PE utilization while minimizing on-chip interconnect area. The paper also proposes FIGS-Train, a sparsity training recipe that improves the accuracy of N:M structured sparse attention models.

I. INTRODUCTION

Attention-based models have become increasingly popular in recent years due to their ability to focus on relevant information while processing input data. They are particularly effective in natural language processing (NLP) [3], [7], [10], image recognition [25], speech recognition, and code generation [4]. The attention mechanism allows these models to assign varying levels of importance to different parts of the input data, resulting in improved accuracy and efficiency.

Despite their effectiveness, attention-based models have become increasingly complex and computationally demanding, with some models containing billions of parameters [23]. The growing size of these models has led to longer training and inference times, limiting their applicability in practical settings. Various techniques, such as parallel processing, model compression, and low-precision arithmetic, have been proposed to overcome these challenges. However, these techniques have their limitations and often come at the cost of reduced accuracy [2], [5], [11]–[13], [18], [21], [22], [24], [28], [29].

Sparsity is an increasingly prevalent technique for accelerating attention-based models [6], [26]. Sparsity aims to reduce the number of parameters and computations required by the model by identifying and removing redundant or less important connections. N:M structured sparsity is a particularly interesting technique, as it removes a fixed number of weights from each block of weights in the model [29]. This approach is more hardware-friendly and easier to implement in hardware accelerators than other sparsification techniques, such as random or magnitude-based pruning. Using this technique, it is possible to accelerate the execution of attention-based models while maintaining high accuracy levels [1], [20], [22].

Although N:M sparsity has been shown to be an effective technique for accelerating attention-based models, existing hardware accelerators, such as S2TA [21], STA [20], and VEGETA [15], focus primarily on optimizing the compute to achieve ideal processing element (PE) utilization, while ignoring the implications of higher input bandwidth. Recent research [27] has shown that the required input bandwidth for N:M accelerators scales up with M, which can lead to significant challenges in designing hardware that can effectively support this higher bandwidth.

Fig. 1. Growing cost of supporting higher on-chip bandwidth in 7nm.
Figure 1 shows the increase in on-chip interconnect area as the required bandwidth increases [16], [17].

Thus we propose FIGS: FIne-Grained Sparsification, a novel architecture that can improve PE efficiency without increasing the on-chip bandwidth requirement. Our architecture leverages techniques like grouping and reusing input data to reduce the required input bandwidth. By doing so, it can achieve high PE utilization while minimizing the on-chip interconnect area. We also show a potentially promising training approach, FIGS-Train, that helps improve the accuracy of an N:M structured sparse attention model. We develop even more efficient accelerators for attention-based workloads by combining fine-grained sparsity with FIGS hardware architectures designed to improve PE efficiency.

To summarize, we make the following contributions:
• We propose a new hardware microarchitecture, FIGS, that helps accelerate N:M structured sparse models without increased input BW requirements.
• We propose a new sparsity training recipe, FIGS-Train, that can be accelerated using N:M structured HW.
• We compare the FIGS microarchitecture with current SOTA N:M accelerators.
• We train attention-based models using FIGS-Train, achieving better accuracy than current state-of-the-art N:M structured sparsification techniques.

II. FIGS ARCHITECTURE

A. Processing Elements (MAC PE)

Each processing element comprises β multiplier units. The FIGS architecture works in a systolic-array-inspired weight-stationary format, storing all sparse weights in MAC PE registers along with metadata. We use a bitmask as metadata, as that results in the lowest overhead for the structured sparse case. Each MAC PE sends out β metadata entries to its corresponding Swap Reg and receives the appropriate input activations in return. Each multiplier and adder completes the MAC operation and forwards an output to the downstream MAC PE. We make β a configurable parameter: with newer technology nodes, it is possible to have fewer registers between PEs [14], [17], which helps reduce the area of the complete engine.

B. Swap Reg

The Swap Reg acts as an input staging unit. It takes in γ input words from the engine input or the previous Swap Reg, registering them in flops. The Swap Reg's main task is to provide the appropriate data to the MAC PEs: using the metadata provided by the MAC PEs, the Swap Reg generates the β corresponding output words. Multiple neighboring Swap Regs can also talk to each other and exchange their stored input words. Depending on the architectural parameter α, a single Swap Reg takes all the input activations of α Swap Regs, making a total of α·γ input words. This means each Swap Reg has to select its β output activation words from (α+1)·γ inputs. Thus, larger α/β/γ values result in a bigger Swap Reg.

C. The FIGS Engine

We build the complete FIGS engine using MAC PEs and Swap Regs as building blocks. A FIGS engine has N_Rows = R and N_Cols = C. Every MAC PE is accompanied by a Swap Reg at its input; thus, the whole FIGS engine has R × C MAC PEs and Swap Regs.

γ depends on the maximum allowable on-chip bandwidth capacity and the required sparsity support. β depends on the engine's required running frequency and the synthesis technology node; theoretically, it can be any value between 1 and R. Finally, α depends on the maximum sparsity support required, and we calculate it as α = M_max / (N_min · γ).

Once we have all the configuration parameters of the FIGS engine, we get an engine with R·C·β MAC units in total. The engine takes in C·γ words as input per cycle and generates R·β output words per cycle.
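The relationships among these parameters can be summarized in a small configuration sketch (ours, for illustration; the names mirror the text above):

```python
from dataclasses import dataclass

@dataclass
class FigsConfig:
    rows: int     # R
    cols: int     # C
    beta: int     # multipliers per MAC PE
    gamma: int    # input words per Swap Reg per cycle
    m_max: int    # largest M among the supported N:M patterns
    n_min: int    # smallest N among the supported N:M patterns

    @property
    def alpha(self) -> int:
        # alpha = M_max / (N_min * gamma), per Section II-C
        return self.m_max // (self.n_min * self.gamma)

    @property
    def mac_units(self) -> int:
        return self.rows * self.cols * self.beta

    @property
    def input_words_per_cycle(self) -> int:
        return self.cols * self.gamma

    @property
    def output_words_per_cycle(self) -> int:
        return self.rows * self.beta

# The Fig. 3 configuration: a 16x8 engine supporting 2:4 with gamma = 2.
cfg = FigsConfig(rows=8, cols=16, beta=4, gamma=2, m_max=4, n_min=2)
print(cfg.alpha, cfg.mac_units, cfg.input_words_per_cycle)  # 1 512 32
```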
Fig. 2. FIGS microarchitecture design for β = 4 and α = 2.

D. Dataflow

With the full FIGS engine now described, we move on to how FIGS accelerates flexible N:M sparsity without changing the input BW. In Figure 3, we show a 2:4 sparse matrix with input BW twice that of the dense architecture. Each row of matrix A with 2:4 structured sparsity is mapped to a single column of MAC PEs. Each column of MAC PEs takes β rows of matrix A and generates β partial sums. To support 2:4 sparsification, we need α = M/(N·γ) = 1.

In this configuration, two neighboring Swap Regs can exchange their input activations. We can see this in action when, in row 1 of the engine, the first MAC PE takes Element 3 as input based on the metadata; this element is provided by the Swap Reg of row 2. In each cycle, the Swap Reg forwards γ = 2 input activations to the next Swap Reg but gives out β = 4 activations to the MAC PE. Using this mapping, we ensure that the PEs are fully utilized while allowing higher N:M sparsity for the same γ. The full execution takes T cycles.

To execute 1:2 or 1:1 sparsity, we can still use the same mapping; the Swap Regs simply do not need to swap data in that case.

Fig. 3. Dataflow for running a 2:4 structured sparse matrix multiplication with #Cols = 16, #Rows = 8, γ = 2, β = 4 and α = 1. Note that prior SOTA designs require a minimum of γ = 4.

III. FIGS-TRAIN

A. The Recipe

Now that we understand the abilities of the FIGS accelerator, we propose FIGS-Train, a specialized training recipe capable of efficient sparsification. Figure 4 shows the training schedule for FIGS-Train. The intuition is that, by keeping the same number of non-zeros per row, we keep reducing the block size. It is important to understand that each progressive step is a subset of the previous sparsification; in the figure, 1:4 ⊂ 2:8 ⊂ 4:16 ⊂ 8:32. This approach lets the gradients gather locally before pruning, which helps achieve better accuracy.

Fig. 4. Breakdown of training epochs for the FIGS-Train recipe: a dense phase, followed by uniformly distributed sparsification phases at 8:32, 4:16 and 2:8, reaching a final sparsity of 1:4.
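The sketch below (our reading of the recipe, not the authors' code) shows one way to generate such a nested sequence of masks: pruning by magnitude within progressively smaller blocks, and ANDing with the previous mask so that each stage's pattern is a subset of the last.

```python
import numpy as np

def nm_mask(w, n, m):
    """Keep the n largest-magnitude entries in each block of m
    consecutive weights (assumes w.size is a multiple of m)."""
    blocks = np.abs(w).reshape(-1, m)
    prune = np.argsort(blocks, axis=1)[:, : m - n]  # m-n smallest entries
    mask = np.ones_like(blocks, dtype=bool)
    np.put_along_axis(mask, prune, False, axis=1)
    return mask.reshape(w.shape)

def figs_train_masks(w, schedule=((8, 32), (4, 16), (2, 8), (1, 4))):
    """Progressive FIGS-Train-style refinement: already-pruned weights
    are zero and therefore sort first, and the AND with the previous
    mask enforces 1:4 ⊂ 2:8 ⊂ 4:16 ⊂ 8:32."""
    mask = np.ones(w.shape, dtype=bool)
    for n, m in schedule:
        mask &= nm_mask(w * mask, n, m)
        yield (n, m), mask.copy()
        # ...training epochs under this mask would run here,
        # updating w before the next refinement...
```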
B. Training methodology and results

We applied this technique for weight sparsification at various locations in two attention-based models (ViT [9] and Swin V2 [19]). We trained these models on the ImageNet-1k [8] dataset with various amounts of sparsity. Table I summarizes the results for the two models at different sparsity levels. We compare the results with SR-STE [29], the current SOTA technique for N:M structured sparsification.

We found the two feed-forward layers in each layer to be the most robust, so we always sparsified the weights in those layers. Next, we also pruned the weights of the Query and Key matrices. We found that pruning the weights of the Value matrix had the biggest impact on network accuracy; hence we did not prune the Value layer's weights.

TABLE I. SPARSITY TRAINING RESULTS WITH FIGS. FF means sparsity is present in the feed-forward layers; QK means sparsity is present in the Query and Key weights.

Model / Sparsity        ViT      Swin V2
Dense                   76.369   83.45
SR-STE 1:8 (FF)         77.869   81.437
FIGS 1:8 (FF)           78.175   81.466
SR-STE 1:8 (FF + QK)    —        81.218
FIGS 1:8 (FF + QK)      —        81.438

IV. EVALUATIONS

A. Experimental Setup

We convert the feed-forward layers of the encoder blocks of the two attention-based models to GEMM operations and analyze the training time for each layer when trained with the FIGS-Train recipe. We compare the runtime of these operations on a dense systolic array and on SOTA N:M accelerators like STA/VEGETA against three configurations of FIGS. We assume the same number of MACs (512) for all architectures.

B. Performance Analysis

Figure 5 shows the runtime of the Swin and ViT feed-forward layers on the five accelerators. We normalize the runtime by the longest runtime (that of the dense systolic accelerator). We first observe that a systolic accelerator with no ability to accelerate sparse operations results in the highest runtime. STA/VEGETA is a state-of-the-art architecture for accelerating N:M structured sparsity, but it is bottlenecked by the input bandwidth to the compute array. We assume 4× the dense input bandwidth for all architectures. With this, STA/VEGETA can only accelerate layers with a 1:4 or higher amount of sparsity. Thus, for ViT it can accelerate only the last sparsification phase with 1:4 sparsity, while for Swin it can accelerate the last two sparsification phases with 2:4 and 1:2 sparsity. Compared to these, FIGS architectures with the same input BW perform much better: FIGS (α = 1) can accelerate sparsity ratios up to 2:8, FIGS (α = 2) up to 4:16, and FIGS (α = 3) up to 8:32.

Hence, we observe that all FIGS configurations perform much better than existing SOTA accelerators. For ViT with 1:4 sparsity, FIGS with α = 3 achieves a 4.4× speedup over S2TA and VEGETA, and a 2.41× speedup for Swin V2 with 1:2 sparsity.

Fig. 5. Runtime comparison of fine-grained runtime sparsification.

V. FUTURE WORK

Some of the potential directions we are considering for this work are:
• Our proposed FIGS technique provides a promising direction for accelerating attention-based models with N:M structured sparsity. One possible direction for future work is to investigate the efficacy of FIGS-Train in larger models and in models beyond image classification.
• We would also like to explore how to combine this technique with other existing sparsification techniques like block sparsity, butterfly sparsity, etc.
• We would also like to do an area-energy-performance analysis of the FIGS microarchitecture to bound its capabilities and assess its feasibility in realistic implementations.

VI. CONCLUSION

In this work, we proposed FIne-Grained Sparsification (FIGS) techniques to accelerate attention-based models without increasing the on-chip bandwidth requirement. We showed that our proposed FIGS microarchitecture can achieve high PE utilization while minimizing the on-chip interconnect area. Our proposed FIGS-Train training recipe also showed promising results in improving the accuracy of N:M structured sparse attention models. We compared the performance of our proposed FIGS microarchitecture with state-of-the-art N:M accelerators and showed that it outperforms existing approaches.

REFERENCES

[1] "TensorFloat-32 in the A100 GPU accelerates AI training, HPC up to 20x," 2019, https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/.
[2] M. Behnke and K. Heafield, "Losing heads in the lottery: Pruning transformer attention in neural machine translation," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics, Nov. 2020, pp. 2664–2674. [Online]. Available: https://aclanthology.org/2020.emnlp-main.211
[3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[4] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba, "Evaluating large language models trained on code," 2021.
[5] T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, and M. Carbin, "The lottery ticket hypothesis for pre-trained BERT networks," 2020.
[6] R. Child, S. Gray, A. Radford, and I. Sutskever, "Generating long sequences with sparse transformers," arXiv preprint arXiv:1904.10509, 2019.
[7] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., "PaLM: Scaling language modeling with pathways," arXiv preprint arXiv:2204.02311, 2022.
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 248–255.
[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[10] EMNLP, "EMNLP 2017 second conference on machine translation (WMT17)," https://www.statmt.org/wmt17/, 2017.
[11] U. Evci, T. Gale, J. Menick, P. S. Castro, and E. Elsen, "Rigging the lottery: Making all tickets winners," in International Conference on Machine Learning. PMLR, 2020, pp. 2943–2952.
[12] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," arXiv preprint arXiv:1510.00149, 2015.
[13] Y. He, X. Zhang, and J. Sun, "Channel pruning for accelerating very deep neural networks," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1398–1406.
[14] Intel, "Presentation deck: Intel architecture day 2021," 2021, https://download.intel.com/newsroom/2021/client-computing/intel-architecture-day-2021-presentation.pdf.
[15] G. Jeong, S. Damani, A. R. Bambhaniya, E. Qin, C. J. Hughes, S. Subramoney, H. Kim, and T. Krishna, "VEGETA: Vertically-integrated extensions for sparse/dense GEMM tile acceleration on CPUs," in 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023, pp. 259–272.
[16] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P.-l. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, and D. H. Yoon, "In-datacenter performance analysis of a tensor processing unit," in Proceedings of the 44th Annual International Symposium on Computer Architecture (ISCA), 2017.
[17] N. P. Jouppi, D. Hyun Yoon, M. Ashcraft, M. Gottscho, T. B. Jablin, G. Kurian, J. Laudon, S. Li, P. Ma, X. Ma, T. Norrie, N. Patil, S. Prasad, C. Young, Z. Zhou, and D. Patterson, "Ten lessons from three generations shaped Google's TPUv4i: Industrial product," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 2021, pp. 1–14.
[18] S.-C. Kao, A. Yazdanbakhsh, S. Subramanian, S. Agrawal, U. Evci, and T. Krishna, "Training recipe for N:M structured sparsity with decaying pruning mask," arXiv preprint arXiv:2209.07617, 2022.
[19] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, F. Wei, and B. Guo, "Swin Transformer V2: Scaling up capacity and resolution," 2022.
[20] Z.-G. Liu, P. N. Whatmough, and M. Mattina, "Systolic tensor array: An efficient structured-sparse GEMM accelerator for mobile CNN inference," IEEE Computer Architecture Letters, vol. 19, no. 1, pp. 34–37, 2020.
[21] Z.-G. Liu, P. N. Whatmough, Y. Zhu, and M. Mattina, "S2TA: Exploiting structured sparsity for energy-efficient mobile CNN acceleration," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2022, pp. 573–586.
[22] NVIDIA, "Accelerating inference with sparsity using the NVIDIA Ampere architecture and NVIDIA TensorRT." [Online]. Available: https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/
[23] J. Sevilla, L. Heim, A. Ho, T. Besiroglu, M. Hobbhahn, and P. Villalobos, "Compute trends across three eras of machine learning," arXiv preprint arXiv:2202.05924, 2022.
[24] W. Sun, A. Zhou, S. Stuijk, R. Wijnhoven, A. O. Nelson, Hongsheng Li, and H. Corporaal, "DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks," in Advances in Neural Information Processing Systems, vol. 34, 2021.
[25] M. Tan and Q. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in International Conference on Machine Learning. PMLR, 2019, pp. 6105–6114.
[26] Y. Tay, M. Dehghani, D. Bahri, and D. Metzler, "Efficient transformers: A survey," ACM Computing Surveys, Apr. 2022, just accepted. [Online]. Available: https://doi.org/10.1145/3530811
[27] Y. N. Wu, P.-A. Tsai, A. Parashar, V. Sze, and J. S. Emer, "Sparseloop: An analytical approach to sparse tensor accelerator modeling," in 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), 2022, pp. 1377–1395.
[28] Y. Zhang, M. Lin, Z. Lin, Y. Luo, K. Li, F. Chao, Y. Wu, and R. Ji, "Learning best combination for efficient N:M sparsity," arXiv preprint arXiv:2206.06662, 2022.
[29] A. Zhou, Y. Ma, J. Zhu, J. Liu, Z. Zhang, K. Yuan, W. Sun, and H. Li, "Learning N:M fine-grained structured sparse neural networks from scratch," in International Conference on Learning Representations, 2021. [Online]. Available: https://openreview.net/forum?id=K9bw7vqps
RIkigVU6you | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

A Metric Driven Approach to Mixed Precision Training

Gil Tabak, Mitchelle Rasquinha
Google. tabakg@google.com, mrasquinha@google.com

Abstract—As deep learning methodologies have developed, it has been generally agreed that increasing neural network size improves model quality. However, this comes at the expense of memory and compute requirements, which also need to be increased. Various efficiency techniques have been proposed to rein in hardware costs, one being the use of low precision numerics. Recent accelerators have introduced several different 8-bit data types to help accommodate DNNs in terms of numerics. In this paper, we identify a metric driven methodology to aid in the choice of numerics. We demonstrate how such a methodology can help scale training of a language representation model. The technique can be generalized to other model architectures.

I. INTRODUCTION

The wide success of deep neural networks has led to continued increases in model sizes and the computing resources needed to train them. Furthermore, the introduction of Large Language Models has dramatically increased this demand for training and serving. Such a massive demand for system resources outpaces Moore's Law and hardware capabilities by a wide margin. Several model efficiency techniques have been proposed to mitigate this unprecedented demand [5], [11], including the use of reduced precision operations. Quantization, the process of reducing the number of bits used to represent a number, can improve the performance of deep learning models by reducing the amount of memory and computational power required.

Deep learning training today includes a wide range of data types. Common floating point types include IEEE single precision, the FP32 mode of single precision, IEEE half precision [8], and bfloat16 [1], [7]. More recently, 8-bit types have been introduced in deep learning accelerators, with trade-offs between the exponent and mantissa bits to accommodate the needs of different operations within a model. In addition to floating point representations, integer hardware has also been introduced and has two key advantages: (1) area- and energy-efficient hardware units, and (2) fewer sources of introduced error within the accumulation hardware unit. A given neural network accelerator may provide a few different numerical data types depending on their applicability to operations within the model structure. While the choice of data types provides flexibility in training, it is often a complex search to find the right set of numerics for a given model. At bit widths of 16 bits and lower, a careful quantization application is required, without which model quality suffers.

We make the following contributions:
• Develop a metric driven methodology to guide the use of different low precision numeric formats.
• Demonstrate that the methodology predicts training quality using different mixed precisions for the BERT model.

Fig. 1. Illustration of the neural network graph modification during quantization.

II. RELATED WORK

The use of low precision numerics in inference has been widely studied and has shown significant benefits in terms of compressing models while retaining model quality. The use of 8-bit integers for inference was introduced in [6]. A comprehensive list of different techniques using low precision numerics can be found in [3]. Recently, accelerators have introduced multiple low precision formats [9], [13], [14], further extending their use in both training and serving workloads.
[16] showed that 8-bit floating point representations can be used to train convolutional neural networks, with the help of stochastic rounding.

FP8 and INT8 hardware implementations feature reduced-bit-width multiply-accumulate (MAC) units, thus attaining very high energy, latency, and bandwidth gains compared to their 32- and 16-bit counterparts. More aggressive bit width reductions, also known as binary quantization, have also been explored in [12]. In this paper we focus on the use of 8-bit low precision formats for training neural networks.

III. METHODOLOGY

Quantization is typically applied to the compute-intensive matrix multiplication operations within a neural network. We study uniform integer quantization with dynamic scaling for improved model performance and power. For each operand of the dot product, quantization is described as follows:

$$X_{\mathrm{int8}} = \left\lceil \frac{X_{\mathrm{bfloat16}}}{\delta} \right\rfloor, \qquad \delta = \frac{\max(|X_{\mathrm{bfloat16}}|)}{2^{\,n-1} - 1} \qquad (1)$$

where X_bfloat16 is the high precision floating point tensor, X_int8 is its quantized counterpart, δ is the quantization step size, and n is the bit width of the quantized tensor. The quantization step size is calculated for each quantized operand at every training step, commonly referred to as dynamic quantization. There are several choices for the rounding function ⌈·⌋, with the default IEEE rounding technique being round-to-nearest-even (RTNE).

We study three different 8-bit formats, namely INT8, E4M3 and E5M2. E4M3 and E5M2 are jointly referred to as FP8 formats. FP8 quantized values conform to the rules described in [9]. An FP8 format shares the 8 available bits between exponent (e) and mantissa (m) and can be described more generally as an EeMm format. One bit is reserved for the sign. It is common for the exponent itself to be biased to shift the expressible range.

The framework-level quantization can vary between implementations, and the following are specific to ours:
• In all 8-bit formats, out-of-range tensor elements are handled by the following rule: tensor elements over the max expressible value are saturated at the max expressible value, and tensor elements whose absolute value is smaller than the smallest sub-normal are represented by zero.
• We use two different forms of rounding: (1) round-to-nearest-even and (2) stochastic rounding.
• In both FP8 and INT8, some form of scaling is applied before and after the matrix multiplication. After the matrix multiplication, descaling is applied to the output, as shown in Figure 1. The output prior to re-scaling is typically not representable in 8 bits and depends on the multiply-accumulate unit.
• The scaling may be done at the tensor level, meaning a single number is used for each scaling, or at finer levels of granularity on non-contracting dimensions. We present results for a variety of scale granularities.
• Throughout, we assume scaling is always done to align the absolute max value to the maximum expressible value of the chosen format, with symmetric quantization. The maximum expressible values are 448 for E4M3, 57344 for E5M2 and 127 for INT8.
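A minimal NumPy sketch of Eq. (1) with RTNE, saturation and symmetric per-tensor (or finer) scaling, written by us to make the procedure concrete:

```python
import numpy as np

def quantize_int8(x, axis=None):
    """Symmetric dynamic quantization per Eq. (1), with n = 8.
    axis=None scales per tensor; an integer axis approximates the
    finer 'channel'-style granularities discussed later."""
    amax = np.max(np.abs(x), axis=axis, keepdims=True)
    # Tiny floor avoids division by zero for all-zero tensors.
    delta = np.maximum(amax, np.finfo(np.float32).tiny) / 127.0
    q = np.rint(x / delta)            # np.rint rounds half to even (RTNE)
    q = np.clip(q, -127, 127)         # saturate out-of-range elements
    return q.astype(np.int8), delta

def dequantize(q, delta):
    # De-scaling applied to the matmul output in the quantized graph.
    return q.astype(np.float32) * delta
```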
The baseline model is trained in bfloat16 andtensor inputs to all the major matrix multiplications weresampled to compute the mean, variance, skew and Kurtosis.Figure2 plots the distributions for one such set of tensors.Fig. 2. Distributions of the input tensors to the query projection dot operationin the forward and backward passes. Red text denotes the tensor type.Fig. 3. A comparison of the relative error profile of INT8 and two FP8formats, assuming RTNE.In general we find that the weights have a normal distributionand the gradients have a log-normal distribution. The referencemodel is available at [10]IV. R ESULTSAn essential component to minimizing overall model qualitydegradation is to minimize per operation quantization error.The quantization error depends on the distribution of the highprecision tensors and the properties of the reduced-precisionformat. The quantization error can be categorized into (1)clipping error (2) rounding error. Clipping error, is the lossof accuracy due to values lying outside the dynamic rangeof a format i.e overflow or underflow. In our implementationall overflow values are capped at the max value and all2RHS (RTNE) LHS (RTNE) gradient (RTNE) gradient (Stochastic)int8 e4m3 e5m2 int8 e4m3 e5m2 int8 e4m3 e5m2 int8 e4m3 e5m2tensor 17.14 17.12 17.13 17.01 17.11 17.11 1.4 17.07 17.15 12.73 17.12 17.13channel 17.12 17.09 17.09 17.20 17.18 17.10 2.3 x x 17.12 x xfine-grained 17.13 17.05 17.09 17.13 17.10 17.13 8.2 x x 17.14 x xTABLE IAREA-UNDER -THE-CURVE (AUC) FOR THE EVAL ACCURACY IN BERT TRAINING .Fig. 4. Student’s T-Distribution of the quantization error when INT8, E4M3and E5M2 were each used as input data types.underflow values are represented by zero. Rounding error isthe loss of accuracy due to values lying between numbers (inthe low-precision domain) and varies based on the roundingparameters. Figure 3 plots the ranges of the three different8-bit formats, illustrating the trade-offs made between them.The relative error is defined as 2 for a given value vwhere ~vindicates the reduced-precision value.RE=jv~vjv(2)While INT8 captures values in a relatively narrow rangeto high precision, FP8 formats trade off high precision for awider dynamic range.A. Quantized matrix multiplication: An illustrative exampleTo demonstrate the differences between the precision ofquantized matrix multiplication using FP8 versus INT8 amongdifferent input distributions we conducted a probabilistic erroranalysis. We chose the backward error based on the inner-product level definition given in [4]. Analysing the backwarderror clearly shows the differences between the quantizationformat of choice.The error analysis assumes matrices of size 512512sampled using a t-distribution using a range of normalityparameters (annotated in the plot). The backward error is givenby 3 for inputs LandRwhereindicates matrix multiplicationandQ(;)indicates quantized matrix multiplication (usingper-vector scaling).BE=jLRQ(L;R)jjLjjRj(3)While the error can vary widely for INT8 depending on theheavy-tailedness of the inputs, it is much more constrained forFP8 formats. In our example we found E4M3 has a smallererror than E5M2, this may differ depending on the distributionand quantization methodology. For example, if the magnitudeof the tensor entries varies widely for different inner-products,E5M2 will enable more flexibility even when using tensor-level quantization.B. 
BERT training resultsTo test the suitability of each format for different tensors,we varied a subset of the quantization parameters applied to asubset of the tensors. We broadly categorised tensors into RHS,LHS, and gradient categories. Gradients are always upstreamgradients. The LHS were the activations, except inside the self-attention mechanism, where they refer to the key (in the keystimes query computation) or probabilities (in the probabilitytimes value computation).In Table I we show the area-under-the-curve (AUC) of theeval accuracy, to compare both converged and non-convergedruns. An AUC of 17:13was measured for the baseline. Thestandard deviations for experiments with converged runs werein the range of [0:02;0:08]. While ‘tensor’ refers to using asingle value for scaling the entire tensor, ‘channel’ refers tousing a tensor for each non-batch/non-contracting dimension(batch dimension here refers to the matrix multiply batch). The‘fine-grained’ level also includes batch dimensions, exceptingthe axis corresponding to individual examples.As the RHS (mostly weights) were not heavy-tailed, theINT8 format was lossless regardless of the quantizationgranularity level. In comparison, the FP8 formats producedvery close results, with degradation essentially within thenoise level. Meanwhile, applying INT8 to the LHS (mostlyactivations, with a higher dynamic range than the weights)produced a slightly more noticeable degradation at the tensorlevel. However, this degradation can be overcome by usingfiner granularities. Finally, the gradients are extremely heavy-trailed. Using int8 without stochastic rounding never con-verged, although there was a clear pattern of improvementas we increased the level of granularity. Both FP8 formatsconverge when applied for gradients with RTNE. Finally, whenconsidering stochastic rounding for upstream gradients, the3INT8 results still did not converge when applying tensor levelquantization. It was necessary to use finer scales to achieveconvergence. Both FP8 formats performed at the baselinelevel.While the results provided are restricted to a single model,we believe the methodology is more widely applicable to otherclasses of models and can be evaluated on any ML acceleratorwith low precision numerics support. Additional frameworksupport for applying the quantization technique is also requiredfor an evaluation.V. C ONCLUSIONWe have identified a methodology to use different lowprecision numerical formats. At small bit widths of 8 bits andbelow, the minimal dynamic range requires careful mappingof operations within the model to the different multiply-accumulate units on the underlying hardware. This step iscrucial to realizing the gains from low precision numericalformats without compromising the quality requirements of themodel. The search space for bit width allocation increasesexponentially with more layers and more numerical formats.Future work aims at identifying metrics that can help narrowthe search space based on information within the baseline highprecision tensors.REFERENCES[1] “Bf16 documentation,” https://cloud.google.com/tpu/docs/bfloat16.[2] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: pre-trainingof deep bidirectional transformers for language understanding,” CoRR ,vol. abs/1810.04805, 2018.[3] A. Gholami, S. Kim, Z. Dong, Z. Yao, M. W. Mahoney, and K. Keutzer,“A survey of quantization methods for efficient neural network infer-ence,” CoRR , vol. abs/2103.13630, 2021.[4] N. J. Higham and T. 
Mary, “Sharper probabilistic backward erroranalysis for basic linear algebra kernels with random data,” SIAMJournal on Scientific Computing , vol. 42, no. 5, pp. A3427–A3446, 2020.[5] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neuralnetwork,” 2015.[6] B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. G. Howard, H. Adam,and D. Kalenichenko, “Quantization and training of neural networks forefficient integer-arithmetic-only inference,” CoRR , vol. abs/1712.05877,2017.[7] D. D. Kalamkar, D. Mudigere, N. Mellempudi, D. Das, K. Banerjee,S. Avancha, D. T. V ooturi, N. Jammalamadaka, J. Huang, H. Yuen,J. Yang, J. Park, A. Heinecke, E. Georganas, S. Srinivasan, A. Kundu,M. Smelyanskiy, B. Kaul, and P. Dubey, “A study of BFLOAT16 fordeep learning training,” CoRR , vol. abs/1905.12322, 2019.[8] P. Micikevicius, S. Narang, J. Alben, G. F. Diamos, E. Elsen, D. Garc ́ıa,B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, and H. Wu,“Mixed precision training,” CoRR , vol. abs/1710.03740, 2017. [Online].Available: http://arxiv.org/abs/1710.03740[9] P. Micikevicius, D. Stosic, N. Burgess, M. Cornea, P. Dubey, R. Grisen-thwaite, S. Ha, A. Heinecke, P. Judd, J. Kamalu, N. Mellempudi,S. Oberman, M. Shoeybi, M. Siu, and H. Wu, “Fp8 formats for deeplearning,” 2022.[10] MLCommonsTM, “Bert,” https://github.com/mlcommons/training/tree/master/language model/tensorflow/bert.[11] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz, “Pruningconvolutional neural networks for resource efficient transfer learning,”CoRR , vol. abs/1611.06440, 2016.[12] P. Pham, J. A. Abraham, and J. Chung, “Training multi-bit quantizedand binarized networks with A learnable symmetric quantizer,” CoRR ,vol. abs/2104.00210, 2021.[13] X. Sun, J. Choi, C.-Y . Chen, N. Wang, S. Venkataramani, V . Srinivasan,X. Cui, W. Zhang, and K. Gopalakrishnan, “Hybrid 8-bit floating point(hfp8) training and inference for deep neural networks,” in NeuralInformation Processing Systems , 2019.[14] X. Sun, J. Choi, C.-Y . Chen, N. Wang, S. Venkataramani,V . V . Srinivasan, X. Cui, W. Zhang, and K. Gopalakrishnan,“Hybrid 8-bit floating point (hfp8) training and inference for deepneural networks,” in Advances in Neural Information ProcessingSystems , H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch ́e-Buc,E. Fox, and R. Garnett, Eds., vol. 32. Curran Associates, Inc.,2019. [Online]. Available: https://proceedings.neurips.cc/paper files/paper/2019/file/65fc9fb4897a89789352e211ca2d398f-Paper.pdf[15] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,L. Kaiser, and I. Polosukhin, “Attention is all you need,” CoRR , vol.abs/1706.03762, 2017.[16] N. Wang, J. Choi, D. Brand, C.-Y . Chen, and K. Gopalakrishnan,“Training deep neural networks with 8-bit floating point numbers,”Advances in neural information processing systems , vol. 31, 2018.4 |
PibYaG2C7An | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023Efficient Deployment of Transformer Models onEdge TPU Accelerators: A Real System EvaluationBrendan Reidy, Mohammadreza Mohammadi, Mohammed Elbtity, Ramtin ZandUniversity of South Carolina. bcreidy@email.sc.edu, mohammm@email.sc.edu, messa@email.sc.edu, ramtin@cse.sc.eduAbstract —Transformer models have become a dominant ar-chitecture in the world of machine learning. From naturallanguage processing to more recent computer vision applications,Transformers have shown remarkable results and established anew state-of-the-art in many domains. However, this increasein performance has come at the cost of ever-increasing modelsizes requiring more resources to deploy. Machine learning (ML)models are used in many real-world systems, such as robotics,mobile devices, and Internet of Things (IoT) devices, thatrequire fast inference with low energy consumption. For battery-powered devices, lower energy consumption directly translatesinto longer battery life. To address these issues, several edgeAI accelerators have been developed. Among these, the CoralEdge TPU has shown promising results for image classificationwhile maintaining very low energy consumption. Many of thesedevices, including the Coral TPU, were originally designed toaccelerate convolutional neural networks, making deployment ofTransformers challenging. Here, we propose a methodology todeploy Transformers on Edge TPU. We provide extensive latency,power, and energy comparisons among the leading-edge devicesand show that our methodology allows for real-time inferenceof large Transformers while maintaining the lowest power andenergy consumption of the leading-edge devices on the market.Index Terms —Tensor Processing Unit (TPU), TransformerModels, Edge AI Accelerators, BERT.I. I NTRODUCTIONSince the introduction of Transformer models in 2017 [1],they have quickly risen to prominence in many areas, suchas natural language processing and computer vision. Thesemodels have shown state-of-the-art results in a wide domainof tasks from machine translation [1] and question-answering[2] to computer vision tasks like image segmentation [3].Many applications, such as self-driving cars, IoT devices,satellites, drones, and robots, require deploying models forreal-time inference using low-power energy-constrained sys-tems. Transformer-based models, however, often include alarge number of processing layers, along with hundreds ofmillions of parameters. For instance, the Bidirectional En-coder Representations from Transformers (BERT) [4] modelscontain 109 million and 340 million parameters for the Baseand Large models, respectively [5]. Therefore, deploying suchmassive models at the edge for real-time applications withtight restrictions on power and energy is challenging.The surge in demand for specialized hardware for AIapplications has resulted in a rapidly expanding industry foredge AI accelerators. Anticipating this trend, several compa-nies have developed their own specialized accelerators. TheNVIDIA Jetson Nano [6] is a low-cost development boardfor machine learning (ML) applications that employ NVIDIATensorRT1 as the main driver. The Intel Movidius NeuralCompute Stick 2 (NCS2) [7] is a small, low-power USBco-processor that enables the deployment of Deep NeuralNetworks (DNNs) and is powered by the Myriad VisionProcessing Unit (VPU). Google’s Coral Edge TPU is anotherdevice that leverages tensor processing units (TPUs) to accel-erate ML applications. 
The coral TPU is used as a co-processoron Coral’s Dev Board, as well as a USB accelerator [8] thatcan be integrated with tiny computers such as Raspberry Pi.With the peak performance of four tera-operations per second(TOPS) and two TOPS/W, Coral Edge TPU can be one ofthe promising technologies for realizing real-time Transformermodels. While several studies have used the Coral TPU toaccelerate their DNN applications, to the best of the authors’knowledge, no work has deployed Transformer-based modelson Coral Edge TPU accelerators.Herein, we propose a methodology to deploy Transformermodels on the Coral Edge TPU. Because Transformers areoften very large, training them is time-consuming, compu-tationally expensive, and often requires very large datasetsthat are not always publicly available. For these reasons, it iscrucial that our methodology support a wide range of existingTransformer architectures such as Vision Transformers (ViT)[9], left-right Transformers, also known as (a.k.a) Encoder-Decoder Transformers [1], [2] and BERT-like [4] Transform-ers without any need for retraining, aside from possibleretraining associated with quantization. Here, we modify thecomputational graph to allow the model to run on the EdgeTPU while remaining functionally identical to the originalmodel. While common model optimization techniques such aspruning, knowledge distillation, hyper-parameter optimization,and neural architecture search ( [10] provides an overview ofthese techniques) can be used to improve the size, latency,power consumption, and energy consumption of models, thefocus of this paper is on the efficient deployment of existingTransformer architectures on the Coral Edge TPU. Some or allof the aforementioned optimization techniques can be used ontop of our work to further improve the latency and powerconsumption of models. Although we focus on the BERTTransformer architecture for the main body of the work, weshow that this methodology can be generalized for both BERT-like and left-right Transformers.1Fig. 1: BERT Architecture with nencoder layers [1].II. B ACKGROUNDA. Transformer ModelTransformer models can vary slightly in design, but the corearchitecture remains the same. Transformers use embeddinglayers to turn tokens into vectors of size dmodel , a.k.a hiddensize. The exact number of embedding layers varies from onemodel to another. For instance, BERT uses three embeddinglayers, as shown in Fig. 1. Transformers also employ a stackof attention heads to capture different learned attention asso-ciations using scaled dot product attention that maps queriesand key-value pairs to outputs. Scaled dot product attentionuses a dot product between the queries (Q) and the keys (K)to compute attention scores. These scores are scaled downto create a mean of 0 and a variance of 1, and the Softmaxfunction is applied to generate weights for the values. Theweights are then multiplied by the values (V) to generate theweighted attention scores for the tokens. At the end of themulti-headed attention layer, the values from each attentionhead are concatenated together and passed to a fully-connected(FC) layer and then an activation function is applied.Most Transformers, including GPT-3 [2], and BERT useGaussian Error Linear Units (GELU) [11] as the activationfunction which uses the non-linearity property of RectifiedLinear Units (ReLU) with the regularization property ofDropout [12]. The output of the FC layer is added withprevious layers using a residual connection. 
In the Encoder,these values are passed to two FC layers where the inner FClayer has size of dff, a.k.a intermediate size, and the outerFC layer has size of dmodel . Again, the output is added withprevious layers and normalized using residual connections.Finally, these values are passed to the next encoder layer,or if there is none, then the classification head/decoder layer.Left-right Transformers have a decoder layer that is nearlyidentical to the encoder layer, except it has one extra multi-headed attention layer before the feed-forward layers called theFig. 2: (a) Edge TPU architecture. (b) PE structure [13].encoder-decoder multi-headed attention. The encoder-decodermulti-headed attention is the same as the encoder multi-headedattention except that the query and key vectors come from theencoder, and the values vector comes from the decoder. BERTwas introduced in 2018 and builds upon prior Transformerarchitectures, with one key difference being bi-directionality.Unlike prior Transformer models, BERT is designed to trainon both left and right contexts for text. Using a pre-trainedBERT model and one additional classification layer, BERTcan be fine-tuned to perform various language tasks.B. Coral Edge TPU ArchitectureIn 2015, Google launched the TPU project in which theyadopted the systolic array architecture to accelerate the DNNoperations [13]. The first version of Google’s TPU was de-signed to only accelerate the DNN inference on the cloud.In 2019, Google launched a smaller and low-power versionof TPU, called Edge TPU, that is suited to accelerate theinference of the DNN at the edge. The Edge TPU uses 8-bit integer (int8) multiply and accumulate (MAC) core unitsin its processing elements (PEs) [8].In general, the systolic array architecture includes a setof processing elements that are formed in single or multi-dimensional arrays that can collectively perform the compu-tation on certain data brought from memory with no need toaccess it from the memory multiple times. The systolic arraysdeveloped for the ML acceleration are designed to implementmatrix-matrix, matrix-vector, and vector-vector multiplicationswhich are the dominant operations in ML workloads. Systolicarrays increase performance by reusing the values fetchedfrom memory and reducing the main memory accesses [14].The dataflow in the systolic array is a mapping scheme thatdepends on the microarchitecture of PEs and determines how2the input data is fed to the array, and how the partial results andoutputs are generated and stored. Google adopted the weightstationary dataflow in their cloud TPU and Edge TPU designs[15], in which, the weights are pre-stored in the core memoryof PEs. At each cycle, the input elements are fed to the PEs andmultiplied by the pinned weights producing partial sums. Thisprocess is vertically distributed over columns in the systolicarray to produce the output results.Figure 2 shows the architecture of the Edge TPU and themicroarchitecture of each PE within its 2D systolic array.The Edge TPU includes activation memory, instruction mem-ory, parameter memory, controller, and PEs. The controllertransfers the data between the off-chip memory and the PEs,fetches parameters and activation into the buffers, and readsthe instructions that will be executed on the PEs. The EdgeTPU supports a variety of commonly-used operations in DNNmodels [8]. Each PE in the Edge TPU has four parallel MACunits, as opposed to the cloud TPU v1 which has only oneMAC unit per PE. 
As shown in Fig.2, the PEs in Edge TPUhave a single-instruction-multiple-data (SIMD) architecture.They can perform the MAC operation on four data valuesat the same time using four 8-bit fixed point compute lanes.Moreover, each PE has a core memory and a PE memory. ThePE memory is designed as a first in first out (FIFO) buffer thatis shared among all PEs and used to store model activations,partial results, and final outputs. Since Edge TPU has a weight-stationary systolic array, the core memory is used to storemodel parameters, i.e., weights.III. P ROPOSED METHODOLOGY TO DEPLOY SMALL - ANDMEDIUM -SIZED TRANSFORMERS ON EDGE TPUA. Existing Edge TPU Deployment ProcessFor full Edge TPU utilization, several requirements must bemet; otherwise, only parts of the model will run on the EdgeTPU. The Coral documentation [16] contains an exhaustive listof requirements and all supported operations. Here, we onlyfocus on the requirements that are relevant to the Transformerarchitecture.The Edge TPU only supports TensorFlow Lite (TFLite)models. TFLite is a lightweight version of TensorFlow [17]that is optimized for deployment on edge systems. Using theTFLite interpreter, different delegates can be used dependingon the hardware accelerator, such as NNAPI for androiddevices, GPU for mobile GPUs, Hexagon for DSPs, Core MLfor iOS devices, and libedgetpu , which is the focus of thiswork, for the Coral Edge TPU. Note that TFLite only supportsa subset of all TensorFlow operations and the Coral EdgeTPU only supports a subset of all TFLite operations. A listof supported Edge TPU operations and any known limitationscan be found at [16]. To fully utilize the TPU, the model mustcontain only supported Edge TPU operations.Since the Edge TPU only supports 8-bit integer operations,any models aimed to be deployed on Edge TPU must beconverted from 32-bit floating point (fp32) to int8 or unsignedint8 for all parameters, activations, and operations. This can bedone using either quantization-aware training (QAT) or post-training quantization (PTQ) with a representative dataset. In[18], it is shown that using QAT, BERT can maintain state-of-the-art results using 8-bit integer-only inference. Once themodel has been converted to a quantized TFLite model, theEdge TPU compiler maps the supported operations to the TPUand leaves the remaining operations on the CPU. The compilermaps all supported operations onto one graph to be loadedonto the TPU called the Edge TPU custom op . Currently, theEdge TPU graph only includes consecutive operations that aresupported on Edge TPU. Once the compiler finds an operationin the model that is not supported by the TPU, all the followingoperations will be mapped to the CPU, regardless of beingsupported by TPU or not. Another deployment requirementfor Edge TPU is that all tensor sizes should be constant atcompilation time. After training, we change the batch sizedimension to 1and the sequence length dimension to 128.Moreover, the existing Edge TPU devices do not supportembedding layers. Therefore, since the embedding layers makeup only a small portion of the overall Transformer model, weleave the operation to run on the CPU for inference.To verify whether modifying Transformers based on theexisting requirements mentioned above would be sufficientto successfully deploy them on Edge TPU, we have adaptedBERT-Tiny to BERT-Large models accordingly and tried todeploy them on the Edge TPU. 
This experiment results incompilation failure or partial compilation for all the models.This is mainly due to the fact that Transformers includeoperations that are currently not supported by Edge TPU.Thus, we develop several methodologies in the followingsubsections to resolve the current deployment limitations ofTransformers on the Edge TPU.B. Proposed Edge TPU Deployment Process for TransformersTo address the existing deployment challenges of the Trans-formers on Edge TPU, it is required to refactor their compu-tational graph to alter their operations to those supported byEdge TPU without altering the model’s functionality. Thuswe developed a flexible in-house TensorFlow Transformermodel using custom Keras layers. This custom Transformermodel allows us to modify any operations in our model andreplace them where necessary. In order to ensure backwardcompatibility with existing Transformers, we map pre-trainedweights onto our model and verify that both models yieldthe same output for the same input. In the following, wediscuss two of the operations in Transformers that cause thecompilation failure in Edge TPU, and propose methods torefactor them such that they can be readily deployed on EdgeTPU.1) Refactoring GELU Activation Function: As mentioned,GELU [11] is used in many Transformers and is defined bythe following equation:gelu(x) =12x[1 +erf(x√2)] (1)3Fig. 3: (a) standard matrix-matrix dot product (b) matrix-matrix dot product using convolutions.where erf(x)is the Gaussian error function which is definedas:erf(x) =2√πZx0e−t2dt (2)The GELU activation function is not currently supportedon Edge TPU. Several approximations for GELU have beendeveloped, including those based on transcendental functions[11] and those based on polynomial equations [18]. Forour purposes, we use the polynomial-based approximation ofGELU known as I-GELU where erf(x)is approximated as:L(x) =sgn(x)·[a·(min(|x|,−b) +b)2+ 1] (3)where a=−0.2888 ,b=−1.769,sgn denotes the signfunction, and min denotes the minimum function. I-GELUis defined as:I−GELU (x) =12x[1 +L(x√2)] (4)However, TFLite does not support the sign function, andthe Edge TPU compiler does not support the absolute valuefunction. Therefore, we further revised the GELU approxima-tion and replaced the sign and absolute value functions withsgn(x)≈tanh(x·103)andabs(x)≈x·sgn(x), respectively.Thus, we approximate L(x)in (3) as:L(x) =tanh(103x)[a[min(x·tanh(103x),−b)+b]2+1] (5)The proposed I-GELU approximation is supported by bothTFLite and Edge TPU. Therefore, in the Transformers, wereplace all instances of GELU with our approximation ofGELU.2) Refactoring Matrix-Matrix Dot Products for FC Layer:Many of the operations in Transformers are matrix-matrix dotproducts. Although the matrix-matrix dot product in the self-attention layer is supported by the Edge TPU, it cannot handlethe matrix-matrix dot products in the FC layers, as describedin the device documentation [16]. To perform matrix-matrixdot products in FC layers, we implement the dot productTABLE I: Bert models’ specifications.ModelHiddensizeAttentionHeadsHiddenLayersIntermediateSizeParameters(millions)Tiny 128 2 2 512 4.4Mini 256 4 4 1024 11.2Small 512 8 4 2048 28.8Medium 512 8 8 2048 41.4Base 768 12 12 3072 109.5Large 1024 16 24 4096 335.1operation using convolutions. This can be done as follows:letAbe an m×ninput matrix, Bbe an n×kweight matrix,andCbe the m×koutput matrix such that A·B=C.This is a standard matrix-matrix dot product, as shown in Fig.3 (a). 
Now consider a convolution layer where we have kconvolution kernels, each with the size of 1×ncalled Kconv(shown in Fig. 3b). We can map the weights from matrix Bone-to-one such that Kconv[x] =BT[x]. By convolving thekernels Kconv across the input matrix Awith strides of 1and no padding, the resulting matrix will be an m×kmatrixidentical to the original output matrix Cas illustrated in Fig.3.Using the aforementioned strategies, we can successfullycompile small- and medium-sized Transformers, such asBERT-Tiny to BERT-Medium, on Edge TPU. However, thecompilation still fails for larger Transformers such as BERT-Base. Unfortunately, the compiler does not provide detailedinformation about why the larger models cannot compile,so it is unclear whether the compilation fails due to somefundamental hardware limit in the Edge TPU or if there is anissue with the compiler itself. Regardless, in the next section,we discuss methods to identify the source of the issue andresolve it.IV. T HEPROPOSED METHODOLOGY TO DEPLOY LARGETRANSFORMERS ON EDGE TPUBy comparing the architecture of BERT-Medium and BERT-Base (see Table I), we narrow down the possible cause ofthe compilation failure to the increased hidden size, attentionheads, hidden layers, intermediate size, or some combinationof these model parameters. Starting with the BERT-Mediumarchitecture, we change one of the model parameters to matchBERT-Base until we reproduce the issue. Using this strategy,we identify the two layers that cause the compilation to fail:the inner FC layer and the embedding layer.Further, we observe that for the inner FC layer, which usesthe intermediate size from Tab. I, the model compiles forBERT-Medium using a size of 2048 but does not compilefor BERT-Base using a size of 3072 . Motivated by thisobservation, we use a binary search algorithm to determinethe maximum size for the inner FC layer that can be compiledon Edge TPU. We find that when followed by the I-GELUactivation function, the maximum inner FC layer size forEdge TPU is equal to 2728 . Moreover, we find that themaximum size supported varies depending on the type ofactivation function. For instance, with no activation function4Fig. 4: Computational graphs for (a) tanh(x)(b) ReLU( x) (c)I-GELU( x) activation functions.Fig. 5: (a) standard convolution-based fully connected layer (b)fully connected layer partitioned across the output dimension.or using ReLU, Sigmoid, and TanH, the intermediate size canbe a maximum of 5376 neurons. This could be due to thecomputation size of the I-GELU activation function, as it canbe seen in its computational graph in Fig. 4c, which can lead toincreased memory demands beyond what is available in EdgeTPU’s PE memory.To address the aforementioned challenge, we propose par-titioning the inner FC layer into two or more equal parts toreduce the size of the operations in the layer. For the BERT-Base model, we partition the inner FC layer into two parts, asshown in Figure 5. By partitioning the layer along the output,we reduce the size of the operation by splitting it into twom×n/2dot products instead of one m×ndot productand two n/2I-GELU activations. For the BERT-Base model,this leads to 1536 I-GELU neurons, which is less than themaximum 2728 neurons supported by Edge TPU. At the end,we concatenate these two layers to realize the output. 
Thisapproach allows the model with the intermediate size of 3072 ,which is used in BERT-Base, to compile successfully.Although partitioning the inner FC layer resolves the com-pilation issue for models with larger intermediate sizes, westill observe that increased size of the embedding layer causesthe compilation failure for the model. Therefore, we leverage asimilar partitioning mechanism for the embedding layer acrossthe output dimension. Since the embedding layer itself cannotbe mapped to the Edge TPU, the model’s input for the EdgeFig. 6: Experimental setup. (a) Pi + NCS2 (b) Pi + Coral TPU(c) Coral Dev board (d) Jetson Nano.TPU is the output of the embedding layer. As discussed earlier,here we set the input sequence length to 128; therefore, usingBERT-Base for example, the Edge TPU input includes three128×768matrices. Similar to how we partition the FC layer,we partition the embedding layers across the output dimension.This changes the Edge TPU input from three 128×768matrices to six 128×384matrices.Using the aforementioned partitioning mechanisms in theinner FC layer and embedding layer, we successfully compileand deploy the BERT-Base and Bert-Large models on EdgeTPU. To assess the validity of our approach for other typesof transformers, we create a left-right transformer based onthe model introduced in [1]. Without any modifications to themodel, it fails to compile. However, leveraging our deploymentmethodology, we can compile and deploy this transformermodel on Edge TPU as well, which exhibits the effectivenessof our approach for various architectures and sizes.V. E XPERIMENTAL RESULTSA. Experimental SetupAfter verifying the successful compilation of various-sizedTransformer models on Edge TPU, we evaluate its perfor-mance against well-known edge AI accelerators existing inthe market. In particular, we investigate two experimentalsetups: (1) USB accelerators , where we compare Intel NCS2(Fig. 6a) with Coral TPU USB accelerator (Fig. 6b), and (2)Development Boards , in which we evaluate Coral Edge TPUDev Board (Fig. 6c) against Nvidia Jetson Nano (Fig. 6d).The USB accelerators are integrated as a co-processor withRaspberry Pi 4. There are different settings required to runthe models on each of the edge devices. For the RaspberryPi 4 and both Coral products, we use TFLite models withfp32 and int8 precision, respectively. For the NCS2, we useOpenVino models with fp16 precision. For the Jetson, we useTensorRT models with fp16 precision. Jetson provides twodifferent operating modes, i.e., low-power mode and Max-Nor high-power mode. Here, we use six different BERT modelsfor our experiments: Tiny, Mini, Small, Medium, Base, and5Fig. 7: Inference latency measurements for all models anddevices.Large. Due to the large size of the BERT-Base and BERT-Large models, we only use development boards to run thesemodels. Also, since Jetson Nano could not compile BERT-Large, we only compared the Coral Dev board with RaspberryPi for the BERT-Large model.B. Inference Latency MeasurementFor inference latency measurements, we split the processinto three parts: (1) load the model, (2) allocate the tensorsdepending on the platform, and (3) perform 100 inferencesusing a subset of the Microsoft Research Paraphrase Corpus(MRPC) dataset [19]. We measure the total time taken for 100inferences and report the mean inference time for one inputsample. Figure 7 shows the inference results for all platformsusing the six BERT models. 
We see that all edge acceleratorsprovide significant speedup over the baseline Raspberry Pi 4CPU. For the smallest model, BERT-Tiny, Coral Dev boardhas the fastest inference speed at 4 ms per inference. For theBERT Mini, Small, Medium, and Base models, we see thatorder of inference speed from least to greatest is as follows:Jetson high power mode, Jetson low power mode, Coral Devboard, Coral USB, NCS2, Raspberry Pi 4. We see that thelarger the model is, the bigger the difference is between thecoral products and the Jetson. Although we do not report themodel load and allocation times, it is important to note thatthe Coral Dev board, Coral USB, Jetson, and Raspberry Pi alltake less than 10 seconds to load and allocate BERT-Medium,while the NCS2 takes over 10 minutes to load and allocatethe same model.1) USB Accelerators: For the USB accelerators, both NCS2and Coral USB accelerator show improvement over the base-line Raspberry Pi 4, except for the case of NCS2 and BERT-Tiny. For BERT-Tiny there is an improvement of 0.76 ×for NCS2 and 5.2 ×for Coral USB accelerator. For BERT-Medium, we observe approximately 6 ×reduction in inferencelatency for Coral USB accelerator compared to NCS2.2) Development Boards: Both development boards offersignificant speedups compared to the Raspberry Pi 4. For theBERT-Tiny model, we observe 3.2 ×and 5.2 ×improvementFig. 8: Dynamic power for all models and devices.over the baseline model using Jetson low and high powermodes. For Coral Dev board, we have 6.5 ×improvement overthe baseline model. Performing the same comparison for theBERT-Base model, we have 33 ×and 48 ×improvement overthe baseline model for Jetson with low and high power, respec-tively. For Coral Dev board, we observe 11 ×improvement ofthe baseline model. For smaller models, Coral Dev board isslightly faster than the Jetson, but for larger models, Jetsonis up to 4.3 ×faster. The faster inference of Jetson, however,is achieved at the cost of significantly more chip resourcesand increased power consumption, as discussed in the nextsubsection.C. Inference Power MeasurementsWe use MakerHawk UM34C USB multimeter to measurethe power dissipation of all devices, except for the Jetson,which has three internal sensors for measuring the input, CPU,and GPU powers. To obtain the average power consumption,we run each model on each platform for five minutes andrecord the corresponding power profiles. Figure 8 showsthe dynamic power measurements for all the models andplatforms. Coral Dev board has the lowest power consumptionacross all experiments. As shown in the figure, the power con-sumption remains roughly unchanged across various modelsfor all the platforms, except for Jetson’s power consumption,which grows with the model size.1) USB Accelerators: For the BERT-Tiny and BERT-Medium models, NCS2 and Coral USB accelerator consumenearly 1.9 ×, and 1.6 ×more power than Raspberry Pi 4 alone.Coral USB consumes 1.3x less power than NCS2.2) Development Boards: For the BERT-Tiny model, CoralDev board achieves 2.1 ×reduction in power compared toRaspberry Pi and a 4.8 ×improvement over Jetson in high-power mode. For the BERT-Base model, Coral Dev boardrealizes a 9.4 ×and 4.8 ×power reduction compared to Jetsonin high-power and low-power modes, respectively. Finally, forthe BERT-Large model, Coral Dev board can achieve 2.4 ×reduction in power dissipation compared to Raspberry Pi.6Fig. 9: Inference energy for all models and devices.D. Inference EnergyIn Fig. 9, we compare the results for inference energy. 
Asidefrom NCS2 we see that all accelerators significantly improveinference energy over baseline Raspberry Pi.1) USB Accelerators: For the USB accelerators, we com-pare the BERT-Medium model. For The NCS2, the inferenceenergy is 1.3 ×worse than the Raspberry Pi 4 with no acceler-ation. Interestingly, the Coral USB accelerator provides a 5.9 ×and 7.75 ×improvement in inference energy compared to theRaspberry Pi alone and Raspberry Pi with NCS2, respectively.2) Development Boards: Compared to Raspberry Pi, CoralDev board provides a 12 ×decrease in inference energy for theBERT-Tiny model. When compared to Jetson-low and Jetsonhigh, Coral Dev board provides 3 ×and 6×improvements,respectively, for the same model. For the BERT-Base model,Coral Dev board is 1.6 ×and 2.5 ×more efficient than Jetson-low and Jetson-high. Furthermore, Coral Dev board is 31 ×more efficient than Raspberry Pi for the same model. Finally,for the BERT-Large model, Coral Dev board achieves a notable35×energy saving compared to Raspberry Pi.VI. C ONCLUSIONThis paper provides a methodology to deploy Transformermodels on Edge TPU accelerators by identifying the layersin Transformers that are not supported by Edge TPU andrefactoring their computational graph. We provide an extensivecomparison of the leading edge devices on the market forTransformer models. Our methodology can deploy variousTransformer architectures on the Coral Edge TPU and achievesreal-time inference while maintaining the lowest energy con-sumption of the edge devices. We show that by adoptingour approach for the Coral USB Accelerator, inference formedium-sized Transformers can be accelerated up to nearly10×while consuming 6 ×less energy. Further, for largeTransformers, our approach may be the only viable approachdue to the memory constraints associated with other edgedevices.REFERENCES[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances inneural information processing systems , vol. 30, 2017.[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal,A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al. , “Language mod-els are few-shot learners,” Advances in neural information processingsystems , vol. 33, pp. 1877–1901, 2020.[3] J. Yu, Z. Wang, V . Vasudevan, L. Yeung, M. Seyedhosseini, and Y . Wu,“Coca: Contrastive captioners are image-text foundation models,” arXivpreprint arXiv:2205.01917 , 2022.[4] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-trainingof deep bidirectional transformers for language understanding,” 2018.[Online]. Available: https://arxiv.org/abs/1810.04805[5] I. Turc, M.-W. Chang, K. Lee, and K. Toutanova, “Well-read studentslearn better: On the importance of pre-training compact models,” arXivpreprint arXiv:1908.08962v2 , 2019.[6] Nvidia. Jetson nano module datasheet. [Online].Available: https:https://developer.nvidia.com/embedded/dlc/jetson-nano-system-module-datasheet[7] Intel. Intel neural compute stick 2. [Online]. Available: https://software.intel.com/en-us/neuralcompute-stick[8] Google. Coral AI, “Tensorflow models on the edge tpu,”2020. [Online]. Available: https://coral.ai/docs/edgetpu/models-intro/#compatibility-overview[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai,T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al. ,“An image is worth 16x16 words: Transformers for image recognitionat scale,” arXiv preprint arXiv:2010.11929 , 2020.[10] G. 
Menghani, “Efficient deep learning: A survey on making deeplearning models smaller, faster, and better,” arXiv:2106.08962 , 2021.[11] D. Hendrycks and K. Gimpel, “Gaussian error linear units (gelus),”2016. [Online]. Available: https://arxiv.org/abs/1606.08415[12] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhut-dinov, “Dropout: a simple way to prevent neural networks from over-fitting,” The journal of machine learning research , vol. 15, no. 1, pp.1929–1958, 2014.[13] A. Yazdanbakhsh, B. Akin, and K. K. Seshadri, “An evalua-tion of edge tpu accelerators for convolutional neural networks,”https://arxiv.org/abs/2102.10423, 2021.[14] M. E. Elbtity, P. S. Chandarana, B. Reidy, J. K. Eshraghian, and R. Zand,“Aptpu: Approximate computing based tensor processing unit,” IEEETransactions on Circuits and Systems I: Regular Papers , pp. 1–0, 2022.[15] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa,S. Bates, S. Bhatia, N. Boden, A. Borchers et al. , “In-datacenterperformance analysis of a tensor processing unit,” in Proc. of the 44thAnnual Int. Symp. on Comput. Architecture , 2017, pp. 1–12.[16] Google. Coral AI, “Edge tpu inferencing overview,” 2020. [Online].Available: https://coral.ai/docs/edgetpu/inference/[17] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S.Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow,A. Harp, G. Irving, M. Isard, Y . Jia, R. Jozefowicz, L. Kaiser,M. Kudlur, J. Levenberg, D. Man ́e, R. Monga, S. Moore, D. Murray,C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar,P. Tucker, V . Vanhoucke, V . Vasudevan, F. Vi ́egas, O. Vinyals,P. Warden, M. Wattenberg, M. Wicke, Y . Yu, and X. Zheng,“TensorFlow: Large-scale machine learning on heterogeneous systems,”2015, software available from tensorflow.org. [Online]. Available:https://www.tensorflow.org/[18] S. Kim, A. Gholami, Z. Yao, M. W. Mahoney, and K. Keutzer, “I-bert:Integer-only bert quantization,” in International conference on machinelearning . PMLR, 2021, pp. 5506–5518.[19] W. B. Dolan and C. Brockett, “Automatically constructing a corpusof sentential paraphrases,” in Proceedings of the Third InternationalWorkshop on Paraphrasing (IWP2005) , 2005.7 |
rqn2v1Ltgn0 | Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023Scaling Infrastructure to Support Multi-TrillionParameter LLM TrainingMikhail Isaev∗, Nic McDonald†, and Richard Vuduc∗∗Georgia Institute of Technology, Atlanta, GA, USA†NVIDIA, Salt Lake City, UT, USAAbstract —This paper discusses efficient system designs forLarge Language Model (LLM) scaling to up to 128 trillionparameters. We use a comprehensive analytical performancemodel to analyze how such models could be trained on currentsystems while maintaining 75% Model FLOPS Utilization (MFU).We first show how tensor offloading alone can be used todramatically increase the size of trainable LLMs. We analyzeperformance bottlenecks when scaling on systems up to 16,384GPUs and with models up to 128T parameters. Our findingssuggest that current H100 GPUs with 80 GiB of HBM enabledwith 512 GiB of tensor offloading capacity allows scaling to11T-parameter LLMs; and getting to 128T parameters requires120 GiB of HBM and 2 TiB of offloading memory, yielding 75%+MFU, which is uncommon even when training much smallerLLMs today.I. I NTRODUCTIONWe wish to consider what software and system configura-tions might permit existing Large Language Models (LLMs),now at about 1 trillion parameters [11], to scale with greaterefficiency to even larger model sizes. Our analysis is drivenby the continued success and efficacy of LLMs in a variety ofapplications [2], [3], [7], [11], [14], [16], [21] and motivatedby the observation that Model FLOPS Utilization (MFU)—acommon metric of efficiency for assessing how well special-ized Artificial Intelligence (AI) accelerators are utilized duringmodel training—can be as low as 50% or less [15].A significant improvement to MFU will be necessary toincrease model sizes by 10 ×(10 trillion parameters) or higheron architectures similar to current systems. With a spacerequirement of 20 bytes per parameter, to store just themodel’s weights and optimizer state we would need morethan 200 TB of memory. For a system based on NVIDIAH100 [12] Graphics Processing Unit (GPU) with 80 GiB ofhigh bandwidth memory (HBM) memory, we would need2,500 GPUs and a fully model-parallel implementation to trainsuch a model. No known model-parallelism technique at thisscale would be able to provide anywhere near 50% MFU.Motivated by this example, we aim to establish the sys-tem limitations that prevent us from training multi-trillionparameter models on large systems built using clusters of 8interconnected GPUs, similar to NVIDIA DGX and HGX.We start by presenting a methodology for choosing wellstructured multi-trillion parameter LLMs. Then, using our ownfast analytical performance model of transformer-based LLMtraining, we search a space of billions of system configurationsand execution strategies. 
This paper explains a few of ourfindings, which may be summarized as follows.1) Training a hundred-trillion parameter LLM is feasiblebut requires a secondary memory pool up to 1 TiB perGPU with a bandwidth of 100 GB/s bidirectionally.2) Strong scaling for a 1T model stalls around 12,288GPUs, as matrix multiply becomes small, inefficient, andunable to overlap with communication.3) Scaling beyond 10T models requires more first-levelmemory, with HBM size scaling with model size.4) Growing model and system size beyond 10T parametersand 10k GPUs demands a larger fast-network domainand more targeted software optimizations.Overall, we find it will be critical to co-design the LLM, soft-ware, and hardware to attain high performance and efficiency.II. E XPERIMENTS METHODOLOGYFor performance estimation we use Calculon [5], a fastopen source analytical model of LLM training performancethat we developed.1Calculon can estimate the time andresource usage for a given LLM, system configuration, andsoftware execution strategy in about 1 millisecond, allowingthe exploration of large design spaces having many billionsof such configurations. Calculon models LLM training withtensor parallelism (TP), pipeline parallelism (PP), and dataparallelism (DP), allowing searches to determine optimal split-parallelism configurations. The system specification describesan accelerator-based distributed system with a two-level mem-ory hierarchy connected to multiple networks shown on Fig. 1.H100HBM of floadmemH100HBM of floadmemH100HBM of floadmemH100HBM of floadmemH100HBM of floadmemH100HBM of floadmemH100HBM of floadmemH100HBM of floadmemNVSwitchCPUNICNICNICNICNICNICNICNICFig. 1. Node architecture used for system modeling.1The full description of Calculon will be available in a future paper.1To validate the accuracy of its modeling, Calculon wascompared against actual runs of Megatron LLM on NVIDIA’sA100-based Selene supercomputer [8]. Calculon achieves ahigh level of accuracy, with an average error of 3.4% and amaximum error of 7.25% on these validation runs as presenetdin Table I.TABLE IVALIDATION RESULTS COMPARING CALCULON ’S PERFORMANCEPREDICTION TO ACTUAL RUNS ON THE A100- BASED SELENE [8]22B 175B 530B 1TFull Selene 1.42 18.13 49.05 94.42Calculon 1.43 18.30 50.46 91.70Delta -0.40% -0.94% -2.88% -2.88%Seq+Sel Selene 1.10 13.75 37.83 71.49Calculon 1.17 13.92 35.09 67.74Delta -6.36% -1.24% 7.25% 5.24%We perform experiments that vary system size, modelsize, memory capacity, bandwidth, and NVLink domain sizes,working with the FP8 data format supported by H100. For eachsystem, we pick an execution strategy that considers multiplestate-of-the-art software optimizations [8], [11], [17]–[19] andpicks the best-performing one. Given the large search spaces,we cannot present our experiments fully and instead focus ona few of the most important trends we have discovered.Our analysis assumes a networked system of compute nodeswhose node-architecture is depicted in Fig. 1. It is similarto DGX or HGX in structure and connectivity. The onlydifference is the addition of offload memory attached to GPUin addition to HBM. Such memory can be connected viacompute express link (CXL), or hosted by Central ProcessingUnit (CPU) and made directly accessible from GPU, similarto NVIDIA’s Grace-Hopper [13].III. 
S ELECTION OF LLM C ONFIGURATIONSAn important parameter is the LLM’s aspect ratio , definedas the ratio between the hidden dimension of the transformerblock to the number of blocks (a.k.a., transformer layers).Some recent research claims the ideal aspect ratio is a constant128 [6], while others claim that the aspect ratio should increaseexponentially with the number of blocks [9]. Both of theseanalyses were performed on LLMs 2 to 5 orders of magnitudesmaller than today’s production LLMs. In the absence ofconsensus among the LLM experts, we follow the apparentcurrent practice suggested by Table II, which is to extrapolateaspect ratios linearly with the number of transformer blocks.Nevertheless, our analysis method would work for any scalingfunction.In scaling and shaping the LLM, one challenge is mappingthe models onto the available hardware. Some models, suchas GPT-3 [2] with its 175 billion parameters across 96 blocks,are designed with many dimensions as powers of two ormultiples of powers of two, making them well-suited to typicalsystem designs that are also commonly built in powers oftwo. Other models are not as easy to map. Turing-NLG [20]has 530 billion parameters across 105 blocks, which results inTABLE IIASPECT RATIOS OF CURRENT LLM S.Name Hidden # Blocks Aspect RatioGPT2-1.5B [16] 1600 48 33.3Jurassic-6.5B [10] 4096 32 128PaLM-8B [3] 4096 32 128GPT3-13B [2] 5140 40 128.5Megatron-40B [11] 6144 40 153.6PaLM-62B [3] 8192 64 128Chinchilla-64B [4] 8192 80 102.4GPT3-175B [2] 12288 96 128Jurassic-175B [10] 13824 76 181.9Megatron-309B [11] 16384 96 170.7TuringNLG-530B [20] 20480 105 195PaLM-540B [3] 18432 118 156Megatron-1T [11] 25600 128 200fewer possible mappings. PaLM [3] has 540 billion parametersacross 118 blocks, a prime number multiplied by 2, whichresults in even fewer.254 blocks 256 blocks0100200300400Time, sFW passBW passOptim stepPP bubbleFW recomputeTP commPP commDP commFig. 2. Performance comparison of two 11T models.To see the impact of such choices, Fig. 2 compares twosimilarly sized models of about 11 trillion parameters. Onehas a power of two number of blocks (256) and the otherhas a prime number multiplied by two (254). When mappedonto 4,096 processors, the 256-block model yields 15,612,832possible mappings while the 254-block model yields only842,080, or 18.5×fewer. Consequently, the 256-block modelends up being 36% faster, with an MFU of 75% compared tothe 254-block model’s MFU of 54%.Thus, we propose scaling the number of blocks and attentionheads with a step size that is a power of two. Doing so makesit easier to configure tensor and pipeline parallelism, yieldingbetter overall performance. Fig. 3 summarizes the model sizesfor a variety of aspect ratios. These models all result in manymillions of mapping solutions on various common systemdesigns and across many system sizes.The hidden step size shown in Fig. 3 is 8,192. However,when finding the optimal (closest to ideal aspect ratio), weuse a step size of 1,024. For the remainder of this paper weuse the model configurations found in Table III. All modelshave a sequence size of 8,192, the feed forward size is fixedto4×the hidden size, and the number of attention heads isequal to the number of blocks. 
For all experiments we limitedthe maximum batch size to 3,072.296128 160 192 224 256 288 320 352 384 416 448 480 512Number of transformer blocks1638424576327684096049152573446553673728819209011298304106496114688122880131072139264147456155648163840Hidden size303G171696G2561T3412T4273T5124T5975T6836T7688T8539T93911T1k13T1k15T1k17T1k20T1k22T1k25T2k28T2k31T2k412G128928G1922T2563T3204T3845T4487T5128T57610T64012T70415T76817T83220T89623T96026T1k30T1k33T1k37T1k41T1k505G1021T1542T2053T2565T3076T3588T41010T46113T51216T56318T61422T66625T71729T76833T81937T87042T92246T97352T1k644G851T1282T1714T2136T2567T29910T34113T38415T42719T46922T51226T55530T59735T64039T68345T72550T76856T81162T853691G732T1103T1465T1837T2199T25611T29314T32918T36622T40226T43931T47535T51240T54946T58552T62259T65865T69572T731825G642T963T1285T1607T19210T22413T25617T28821T32025T35230T38435T41640T44846T48053T51260T54467T57674T60882T640966G572T854T1146T1428T17111T19915T22819T25623T28428T31334T34140T37045T39852T42759T45567T48475T51284T54093T5691T512T774T1026T1289T15413T17916T20521T23026T25631T28238T30743T33350T35858T38466T41075T43583T46193T486103T5121T472T705T937T11610T14014T16318T18623T20928T23334T25641T27947T30355T32664T34973T37281T39692T419103T442115T4651T433T645T857T10711T12816T14919T17125T19231T21337T23545T25653T27760T29970T32080T34188T363100T384113T405123T4271T393T595T798T9812T11816T13822T15828T17733T19741T21747T23657T25667T27675T29586T31595T335108T354122T374133T3941T373T556T739T9112T11018T12824T14629T16537T18343T20152T21960T23871T25683T27492T293105T311116T329131T347143T3662T344T516T6810T8514T10219T11924T13732T15438T17147T18855T20566T22274T23987T256101T273111T290127T307139T324156T3412T324T487T6410T8015T9621T11226T12832T14441T16051T17659T19268T20881T22495T240106T256117T272134T288151T304165T320Model ratio and trillions of parameters for optimal scalingFig. 3. Linear scaling of the hidden size with number of transformer blocks, insteps of 8,192 for hidden size and 32 for number of blocks. Each cell containsmodel size and hidden to blocks ratio. Red color represents narrower models,blue color represents wider ones. Optimal choices are represented by whitecolor in the frame, model size and ratio in bold.TABLE IIITWELVE MULTI -TRILLION PARAMETER LLM S,FROM 1TTO128T.Name Hidden Attn Size # Blocks Aspect Ratio1T 24,576 192 128 1922T 32,768 205 160 204.84T 40,960 213 192 213.37T 50,176 224 224 22411T 60,416 236 256 23618T 70,656 245 288 24526T 81,920 256 320 25637T 94,208 268 352 267.653T 106,496 277 384 277.372T 119,808 288 416 28896T 134,144 299 448 299.4128T 148,480 309 480 309.31T2T4T7T11T18T26T37T53T72T96T128Tmodel sizes020406080100Efficiency, %(a) Without offloadingT otal EfficiencyCompute EfficiencySystem Efficiency1T2T4T7T11T18T26T37T53T72T96T128Tmodel sizes020406080100Efficiency, %(b) With offloadingT otal EfficiencyCompute EfficiencySystem EfficiencyTraining efficiency on 4096 GPUs Fig. 4. Comparison of LLM scaling on 4,096 GPUs with and without offloadmemory. Such memory enables high training efficiency beyond 100T models.IV. T ENSOR OFFLOADING FOR LLM S CALINGWhile scaling out LLMs using standard DGX/HGX H100swith 8 NVLink-connected GPUs is possible, achieving highperformance is not trivial. See, for instance, Fig. 4a, whichshows training efficiency while scaling up model size on afixed system size of 4,096 GPUs. Even the smallest modelsize, 1T, reaches only 60% efficiency and rapidly decays until18T where it can no longer run. The main scalability issueis the lack of memory to store weights and activations duringtraining. 
This in turn forces the use of activation recomputationand higher model parallelism. A large pipeline parallelism witha lack of spare memory forces an excessive time overheadin the form of a pipeline bubble. A large tensor parallelismbeyond the NVLink size of 8 increases communication timedue to a lack of bandwidth.These issues can be addressed by a secondary memory pool,where unused tensors from inactive transformer blocks canbe transferred and retrieved as needed [19]. This could beimplemented as CPU host memory, an array of PCIe-attachedSSDs, or CXL-attached memory. We consider training effi-ciency when using tensor offloading in Fig. 4b, where the per-GPU capacity is 1 TiB at infinite bandwidth. Evidently, withenough offloading capacity and infinite offloading bandwidth,we could train models at least up to 128T parameters.1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %4096 GPUs, offload memory at 50GB/s1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %16384 GPUs, offload memory at 50GB/s1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %4096 GPUs, offload memory at 100GB/s256GiB offload mem 512GiB offload mem 1024GiB offload mem 2048GiB offload mem1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %16384 GPUs, offload memory at 100GB/sRelative efficiency of offload memory compared to infinite bandwidth caseFig. 5. Efficiency of offload memory compared to infinite offload bandwidth.The effect of offloading capacity is compared in Fig. 5for 256 GiB, 512 GiB, 1 TiB, and 2 TiB. We see the rela-tive slowdown of using 50 GB/s and 100 GB/s of offloadingbandwidth per direction compared to infinite bandwidth. At50 GB/s on a 4,096 GPUs system, significant slowdowns occurwith increasing model sizes. At 100 GB/s, the majority of thesystems nearly match the performance of infinite bandwidth,suggesting it is a sufficient target bandwidth.Importantly, these tensor offload-memory requirements arewithin reach of current technology. Memory pools based onCXL 1.1 and CXL 2.0 with a capacity up to 2 TiB andbandwidth up to 89.6 GB/s are already available [1]. Systemsbased on NVIDIA’s Grace-Hopper [13] have up to 512 GiBof Low-Power Double Data Rate (LPDDR) memory with upto 546 GB/s bandwidth behind a CPU-to-GPU link, far aboveour offloading-requirement estimates.31T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %4096 GPUs1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %8192 GPUs1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %12288 GPUs256GiB offload mem 512GiB offload mem 1024GiB offload mem 2048GiB offload mem1T2T4T7T11T18T26T37T53T72T96T128T0102030405060708090100Efficiency, %16384 GPUsT otal training efficiency with 100GB/s offloading bandwidthFig. 6. Training efficiency with model and system scaling using offloadingmemory with infinite bandwidth. Green dash line indicates 75% MFU, reddash line indicates 50% MFU.The efficiency of training with offloading appears in Fig. 6for an offload bandwidth of 100 GB/s and capacities of256 GiB, 512 GiB, 1 TiB, and 2 TiB across 4,096, 8,192,12,288, and 16,384 GPUs. The major trends shown are:•Small models on large systems and large models on smallsystems lead to low efficiency.•Good efficiency occurs rarely at 256 GiB.•For 8k, 12k, and 16k GPUs, 512 GiB is mostly sufficient.•A 1 TiB capacity is nearly identical to 2 TiB.V. 
S TRONG SCALING4k GPUs; NVL 84k GPUs; NVL 168k GPUs; NVL 88k GPUs; NVL 16 12k GPUs; NVL 812k GPUs; NVL 1616k GPUs; NVL 816k GPUs; NVL 160510152025Time, sBatch timeFW passBW passOptim stepPP bubbleFW recomputeTP commPP commDP comm4k GPUs; NVL 84k GPUs; NVL 168k GPUs; NVL 88k GPUs; NVL 16 12k GPUs; NVL 812k GPUs; NVL 1616k GPUs; NVL 816k GPUs; NVL 1601020304050607080Size, GBHBM consumptionWeightActivationWeight gradientsActivation gradientsOptimizer space4k GPUs; NVL 84k GPUs; NVL 168k GPUs; NVL 88k GPUs; NVL 16 12k GPUs; NVL 812k GPUs; NVL 1616k GPUs; NVL 816k GPUs; NVL 1602565127681024Size, GBOffload mem consumptionWeightActivationWeight gradientsActivation gradientsOptimizer space1T model time and memory consumption breakdown for different system configurationsFig. 7. Batch time and memory consumption break down for 1T model strongscaling from 4,096 to 16,384 GPUs.In this section we analyze the strong scaling of the 1T pa-rameter model from 4,096 to 16,384 GPUs inspecting NVLinkdomain sizes of 8 and 16. For each system, we use Calculonto perform an exhaustive search over possible configurations,typically 10-30 million configurations per LLM-system pair.The results appear in Fig. 7. We analyzed all configurationassociated parameters such as TP, PP, DP split, microbatchsize, pipeline interleaving, among others, and summarize no-table trends below.Scaling up to 12,288 GPUs fares well but suffers at 16,384GPUs. An NVLink size of 8 is sufficient up to 12,228 GPUsbut 16 is needed for higher efficiency at 16,384 GPUs. Addingextra processors requires assigning them to tensor, pipeline, ordata parallelism, but each incurs some resource cost in timeor memory. We identified several reasons for a lack of scalingat 16,384 GPUs.1) When increasing TP the tensor may be divided too finelyto maintain a high compute efficiency on the GPU.2) When increasing TP the size of each message maybecome small enough to become latency dominated.3) When attempting to overlap TP communication andcomputation, increasing TP reduces the computation sizebut communication size remains the same. At particu-lar FLOPs/bandwidth ratios, the communication-hidingdecreases, reducing efficiency.4) When overlapping TP communication and computation,to sustain the high bandwidth of NVLink the GPUmust dedicate many cores to communication reducingits computational speed. Adding a specialized directmemory access (DMA)-like engine for communicationwould eliminate this overhead allowing optimal overlap.5) Increasing PP either increases the pipeline bubble over-head or requires more memory for higher levels ofinterleaving to reduce the pipeline bubble.6) Increasing DP increases memory due to replication.7) We constrain our models to have a maximum batch sizeof 3,072 to conserve the convergence properties of priorstudies. This choice limits the maximum available DPto 3,072, so that the rest must be either TP or PP.VI. 
VI. SCALING MODELS BEYOND 10T PARAMETERS

Fig. 8. Efficiency and memory consumption for LLM training on 4,096 GPUs (panels: total efficiency, HBM consumption, and offload memory consumption; curves for 80 GiB and 120 GiB of HBM with 256 GiB, 512 GiB, 1 TiB, and 2 TiB of offload memory). Green dashed line indicates 75% MFU; red dashed line indicates 50%. Memory consumption is presented for the 120 GiB HBM and 2 TiB offload memory configuration.

We also analyze the effects of increasing the model size to 128T parameters for a system with a fixed number of GPUs. Fig. 8 shows the results for 4,096 GPUs with 80 GiB and 120 GiB of HBM and 256 GiB, 512 GiB, 1 TiB, and 2 TiB of offloading capacity. While scaling model training on 4,096 GPUs works well with 80 GiB of HBM for models up to 11T parameters, the HBM size must increase to 120 GiB to scale further, even when given extra offloading memory. The reason is that, even with offloading, there must be enough HBM to hold two transformer blocks: the one used in computation and the one needed for offloading and prefetching (see the sketch at the end of this section). During model scaling, the transformer block size grows mostly due to weights and activations. Unsurprisingly, offload memory capacity also needs to scale accordingly. Our experiments indicate that growing the HBM size to 120 GiB and the offload memory to 2 TiB is enough to scale to 100T parameters. Past 11T parameters, models occupy most of the available memories. This indicates that further efficiency improvements are possible, either by providing more memory or by increasing the size of the NVLink domain to reduce per-GPU weight space and increase the local microbatch size. These experiments show that the proposed LLMs can scale up to 128T parameters while maintaining an MFU above 75%, which is better than typically seen on current systems for much smaller LLMs.
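To make the double-buffering constraint concrete, a minimal sketch (ours; the footprint numbers are invented for illustration) of the resulting lower bound on HBM capacity:

```python
# HBM must hold at least two transformer blocks: one computing, one in
# flight to or from the offload pool. Names and numbers are illustrative.

def min_hbm_bytes(block_weight_bytes: float, block_activation_bytes: float,
                  resident_overhead_bytes: float = 0.0) -> float:
    block = block_weight_bytes + block_activation_bytes
    return 2 * block + resident_overhead_bytes

# Example: 30 GiB of weights + 20 GiB of activations per block already
# needs 100 GiB, exceeding an 80 GiB HBM and motivating 120 GiB parts.
print(min_hbm_bytes(30 * 2**30, 20 * 2**30) / 2**30, "GiB")
```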
VII. CONCLUSION

Our co-design analysis reveals that it is feasible to train a well-structured multi-trillion-parameter LLM efficiently at 75% MFU or higher with an appropriate choice of software and hardware settings, including a secondary memory pool for tensor offloading. We identify both optimal configuration strategies and fundamental limitations under strong scaling (fixed model, increasing numbers of GPUs). For a fixed system with 4,096 GPUs, we show how an 11T-parameter model can be trained with only tensor offloading, as well as how to scale to 128T parameters using 120 GiB of HBM and 2 TiB of offloading memory.

REFERENCES
[1] AsteraLabs, "AsteraLabs Leo CXL memory connectivity platform," 2023. [Online]. Available: https://www.asteralabs.com/applications/memory-pooling/
[2] T. B. Brown et al., "Language models are few-shot learners," in Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS '20). Red Hook, NY, USA: Curran Associates Inc., 2020.
[3] A. Chowdhery et al., "PaLM: Scaling language modeling with pathways," 2022.
[4] J. Hoffmann et al., "Training compute-optimal large language models," 2022.
[5] M. Isaev and N. McDonald, "Calculon," Oct. 2022. [Online]. Available: https://github.com/paragraph-sim/calculon
[6] J. Kaplan et al., "Scaling laws for neural language models," 2020.
[7] D. M. Katz, M. J. Bommarito, S. Gao, and P. Arredondo, "GPT-4 passes the bar exam," SSRN Electronic Journal, 2023. [Online]. Available: https://ssrn.com/abstract=4389233
[8] V. Korthikanti et al., "Reducing activation recomputation in large transformer models," 2022. [Online]. Available: https://arxiv.org/abs/2205.05198
[9] Y. Levine, N. Wies, O. Sharir, H. Bata, and A. Shashua, "Limits to depth efficiencies of self-attention," in Advances in Neural Information Processing Systems, vol. 33. Curran Associates, Inc., 2020, pp. 22640-22651.
[10] O. Lieber, O. Sharir, B. Lenz, and Y. Shoham, "Jurassic-1: Technical details and evaluation," AI21 Labs, Tech. Rep., Aug. 2021.
[11] D. Narayanan et al., "Efficient large-scale language model training on GPU clusters using Megatron-LM," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '21). New York, NY, USA: Association for Computing Machinery, 2021. [Online]. Available: https://doi.org/10.1145/3458817.3476209
[12] NVIDIA, "NVIDIA H100 Tensor Core GPU architecture," 2022. [Online]. Available: https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf
[13] NVIDIA, "NVIDIA Grace Hopper Superchip architecture whitepaper," 2023. [Online]. Available: https://resources.nvidia.com/en-us-grace-cpu/nvidia-grace-hopper
[14] OpenAI, "GPT-4 technical report," 2023.
[15] D. Patterson, J. Gonzalez, Q. Le, C. Liang, L.-M. Munguia, D. Rothchild, D. So, M. Texier, and J. Dean, "Carbon emissions and large neural network training," 2021. [Online]. Available: https://arxiv.org/abs/2104.10350
[16] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," 2019.
[17] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, "ZeRO: Memory optimizations toward training trillion parameter models," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '20). IEEE Press, 2020.
[18] J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He, "DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '20). New York, NY, USA: Association for Computing Machinery, 2020, pp. 3505-3506. [Online]. Available: https://doi.org/10.1145/3394486.3406703
[19] J. Ren et al., "ZeRO-Offload: Democratizing billion-scale model training," in 2021 USENIX Annual Technical Conference (USENIX ATC 2021), 2021, pp. 551-564. [Online]. Available: https://www.usenix.org/conference/atc21/presentation/ren-jie
[20] S. Smith et al., "Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model," 2022. [Online]. Available: https://arxiv.org/abs/2201.11990
[21] H. Touvron et al., "LLaMA: Open and efficient foundation language models," 2023.
ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

DOSA: One-Loop DSE for DNN Accelerators Using Differentiable Models

Charles Hong∗, Qijing Huang†, Grace Dinh∗, Yakun Sophia Shao∗
∗University of California, Berkeley. charleshong@berkeley.edu, dinh@berkeley.edu, ysshao@berkeley.edu
†NVIDIA. jennyhuang@nvidia.com

Abstract: The process of hardware design space exploration requires both hardware parameters and mappings from the algorithm onto the target hardware to be discovered and optimized. Previous work has largely approached this simultaneous optimization problem by separately and independently exploring the hardware design space and the mapspace, both individually large and highly nonconvex spaces. The resulting combinatorial explosion has created significant difficulties for optimizers. We introduce DOSA, which consists of differentiable latency and energy models, as well as a gradient descent-based optimization technique, to simultaneously explore both spaces and identify high-performing design points. Experimental results demonstrate that DOSA outperforms random search and Bayesian optimization by 2.80× and 12.59×, respectively, in improving DNN model energy-delay product, given a similar number of samples. In particular, we demonstrate DOSA's modularity and flexibility via transfer to a real DNN accelerator setting, where we achieve a 2.07× improvement in energy-delay product by augmenting our analytical model with a learned model.

I. INTRODUCTION

To develop efficient and high-performance DNN accelerators in a fast and cost-effective manner, automated design space exploration (DSE) has emerged as a powerful technique. The hardware DSE flow [15], [20], [26] involves optimizing over two search spaces: the hardware design space, which describes hardware design parameters such as interconnect topology and buffer and systolic array sizes, and the mapspace, which describes how applications are executed on the target hardware and encompasses decisions such as loop tile dimensions, dataflow, and spatio-temporal mapping.

Both the hardware design and mapping spaces are vast, high-dimensional, and comprised of both categorical and discrete variables. Furthermore, evaluating the performance of a hardware configuration and a mapping can be computationally expensive. The size of the combined optimization space and the cost of evaluating points in it pose formidable challenges to DSE algorithms.

Much prior work [6], [17], [23], [26], [30], [33] has approached this problem using hardware-first search. These methods directly search over the space of possible hardware configurations. The performance of each hardware configuration is calculated by first constraining the mapspace to mappings that are compatible with the hardware configuration, then optimizing over the constrained (highly discontinuous) mapspace. In most cases, the mapspace optimization is done iteratively, rendering this process a two-loop approach iterating over both the hardware space and the mapspace (Fig. 1). As a result, these approaches must contend with a combinatorial explosion of possible configurations.

Fig. 1. Hardware-first, two-loop (left) and mapping-first, one-loop (right) DSE approaches.

Alternatively, mapping-first approaches, as proposed in [10], [31], optimize primarily over the mapspace.
For each mapping, optimizing over the hardware design space is a straightforward process consisting of finding the minimal hardware configuration capable of supporting the mapping. As a result, the loop for hardware search is eliminated, allowing the entire DSE process to be encapsulated in a single loop. Furthermore, the lack of hardware resource constraints also significantly simplifies the mapspace search problem.

Despite these advantages, mapping-first approaches must still contend with the size of the mapspace and the nonconvexity of performance over this space. Prior works have either directly applied black-box optimization methods [10], [25], which rely on a large number of samples, or pruned the search space using architecture-specific heuristics [31], leaving a large set of candidate design points for the architect to select from.

Rather than optimize a black box, we can leverage the domain knowledge provided by white-box performance models like Timeloop [20], [28], which can provide precise feedback quickly enough to fully automate the design space exploration process. This paper follows this approach, using performance models as an optimization target for mapping-first search. Specifically:
• We build closed-form, differentiable, and interpretable performance models for latency and energy on DNN accelerators. Our models are as precise as program-based analytical models, while also being amenable to white-box optimization using gradient descent.
• We introduce a DNN model to predict the variation between analytical-model and real hardware accelerator performance, augmenting our interpretable analytical models for real hardware DSE.
• We then introduce DOSA, a mapping-first, one-loop DSE flow that uses gradient descent to find the most efficient hardware parameters and mappings for target multi-layer DNNs; to the best of our knowledge, this is the first work to perform mapping-first DSE for multi-layer neural nets. DOSA converges at least 40% faster than state-of-the-art DSE approaches.
• We benchmark our results on the Gemmini accelerator, showing a 2.07× EDP improvement over hand-designed configurations.

TABLE I
STATE-OF-THE-ART ACCELERATOR DSE METHODS.
Two-loop searchers (mapspace search / hardware search):
  Spotlight [23]: BB-BO / BB-BO
  VAESA [6]: ILP [7] / VAE+BB-BO/GD
  FAST [33]: BB-LCS [12] + ILP / BB-LCS
  HASCO [30]: BB-RL / BB-BO
  NAAS [17]: BB-ES / BB-ES
  MAGNet [26]: Heuristics / BB-BO
One-loop searchers (single combined search):
  DiGamma [10]: BB-GA
  Interstellar [31]: Heuristics
  Our work, DOSA: GD

II. BACKGROUND

Hardware DSE typically includes two key optimizations: the mapping search and the hardware search. To address the mapping complexity of DNNs, many DNN compilers [1]-[3], [13], [19], [21], [22] and accelerator-aware mapping techniques [5], [7], [8], [16], [20], [31] have been developed. In addition, there has been extensive research in the area of hardware parameter search [14], [32].

A. Optimization Techniques

In recent years, there has been a growing body of research focused on tackling the compounding search space of mapping and hardware designs with the goal of achieving higher hardware efficiency and lower development costs.

Optimization techniques used in this search process can be broadly categorized into three types: heuristics, black-box optimization (BB), and white-box optimization.
Heuristics involve using domain-specific knowledge and experience to guide the search process and reduce the size of the design space. In contrast, BB relies on sampling and machine learning techniques to leverage characteristics of the problem derived from sampled data in order to find the optimal solution. In white-box optimization, the relationship between the optimization variables and the objectives is known and captured in mathematical models. Numerical optimization techniques like linear programming (LP) and mixed-integer programming (MIP) can be used if the relationship can be expressed in specific frameworks. Gradient descent (GD) techniques can be applied if the relationship can be expressed as a differentiable expression (illustrated in the sketch at the end of this section). Compared to black-box optimization, white-box (WB) optimization is generally more efficient, as it can exploit the known objective model to guide the optimization process, resulting in faster convergence. However, it requires the objective model to be known and accurately specified.

B. Co-exploration Frameworks

As shown in Table I, most prior work [6], [17], [23], [26], [30], [31], [33] treats the mapping and hardware co-search as a two-loop process and applies a combination of various optimization techniques to address each search space independently. While independently applying optimization techniques to the mapspace and hardware space can be effective, two-loop searchers can be susceptible to combinatorial explosion, as the vast search space multiplies the number of potential options for mappings and hardware parameters together.

To reduce the size of the compounding search space, one-loop searchers such as DiGamma [10] and Interstellar [31] have been proposed. Single-loop search tackles the co-search problem with a mapping-first approach that infers the minimal hardware requirement from hardware-agnostic, high-performance mappings found in a single-loop mapping search. In such approaches, the hardware DSE space is similar in size to the mapping space. However, DiGamma employs BB-GA, which treats mapping performance as a black box and needs to evaluate many unique hardware and mapping configurations iteratively to achieve a good mapping and hardware design. Interstellar, on the other hand, only explores a limited space of pre-selected mappings, and as a result only a limited space of hardware designs is evaluated.

Unlike previous one-loop approaches that rely on black-box optimizations or heuristics, DOSA takes a novel approach by formulating the analytical performance and energy model in [20] as a differentiable white-box model. DOSA uses gradient descent to optimize the mapping variables in the direction of steepest descent of the EDP objective function on the mathematical model. This allows DOSA to explore a comprehensive set of mappings and efficiently generate high-quality hardware and mapping configurations without the need to sample extensively from simulators.
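As an illustration of the white-box gradient-descent setting contrasted above, a minimal PyTorch sketch minimizing a known, differentiable stand-in objective; this is generic autograd usage, not any of the cited frameworks:

```python
# White-box GD: the objective's closed form is known, so exact gradients
# guide the search instead of black-box sampling. Toy objective only.
import torch

x = torch.tensor([4.0, 9.0], requires_grad=True)  # two continuous variables
opt = torch.optim.SGD([x], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    cost = (x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2  # known differentiable model
    cost.backward()  # exact gradients from the white-box expression
    opt.step()

print(x.detach())  # converges near the optimum (2, 3)
```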
III. DOSA OVERVIEW

This paper presents DOSA, a one-loop, differentiable-model-based DSE framework to optimize the mappings and hardware simultaneously for target DNN models. DOSA captures key relations between DNN mapping factors and performance objectives in a differentiable analytical model. In addition, DOSA introduces a data-driven DNN model to capture the performance variations between the analytical model and real hardware. By applying white-box optimization to the model and calculating the hardware parameters using a minimal parameterization, DOSA achieves high-performance accelerator design and mapping while significantly reducing the time and costs associated with DNN accelerator DSE.

Fig. 2. An architecture diagram of DOSA (per-layer mappings feed access counts into latency and energy predictors, which combine into an EDP predictor; gradients update the mappings, and the minimal hardware configuration is derived from them).

Fig. 3. Mapping to hardware parameters conversion in DOSA. Example layer (N=1, R=1, S=1, P=56, Q=56, C=64, K=64):
  // DRAM (Weights: 4096, Inputs: 200704, Outputs: 200704)
  for p3 in [0:56): for q3 in [0:4):
    // Scratchpad (Weights: 4096, Inputs: 896)
    spatial_for k2 in [0:64):
      // Accumulator (Outputs: 896)
      spatial_for c1 in [0:64):
        // Registers (Weights: 4096)
        for q0 in [0:14):
  Derived hardware (per-layer min-HW, combined via max): PEs: 64x64; Accumulator: 896 words × 4 B/word ≈ 4 KB; Scratchpad: (4096 + 896) words × 1 B/word ≈ 5 KB.

Toolflow. Figure 2 shows how DOSA simultaneously optimizes mappings and hardware for a given workload consisting of a set of layers. DOSA iteratively updates mappings using gradient descent, finding the minimal hardware requirements at each step (a code sketch of this loop follows the steps below). Specifically:
1) Initialize the search. We first select random valid hardware parameters and use CoSA [7], a one-shot, optimization-based mapper, to map our set of target DNN layers onto them.
2) Find minimal hardware parameters. We compute the hardware resource requirements of each layer-wise mapping and set the hardware parameters to support all mappings.
3) Compute the EDP cost from the current mappings and hardware parameters. We first compute the number of accesses made by each mapping to each memory level in the accelerator using the differentiable model in DOSA. These access counts are combined with our current set of hardware parameters to generate roofline-based latency predictions and event-based energy predictions for each layer's mapping, from which we derive a single EDP prediction.
4) Update all mappings in parallel using gradient descent.
5) Repeat from Step 2.
The components that make up this flow are detailed in the following sections.
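The following sketch paraphrases Steps 1-5 in code. All three helpers are toy stand-ins we invented for illustration; they are not DOSA's released implementation:

```python
# One-loop, mapping-first co-search sketch. Toy stand-ins throughout.
import torch

def cosa_init(layer):                 # Step 1: stand-in for a CoSA-style mapping
    return torch.rand(layer["dims"], requires_grad=True)

def min_hw_config(mappings):          # Step 2: parameter-wise max across layers
    return torch.stack([m.detach().abs() for m in mappings]).max(dim=0).values

def predict_edp(mappings, hw):        # Step 3: toy differentiable EDP proxy
    latency = sum(m.abs().sum() for m in mappings)
    energy = hw.sum() * sum((m ** 2).sum() for m in mappings)
    return latency * energy           # hw enters as a constant each step

layers = [{"dims": 4}, {"dims": 4}]
mappings = [cosa_init(l) for l in layers]
opt = torch.optim.Adam(mappings, lr=0.1)
for _ in range(100):                  # Steps 2-5: one loop, no inner HW search
    loss = predict_edp(mappings, min_hw_config(mappings))
    opt.zero_grad()
    loss.backward()
    opt.step()
```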
IV. DOSA DIFFERENTIABLE MODEL

Given the absence of a differentiable, analytical model for DNN accelerators in the current literature, we present our approach for constructing such a model that achieves accuracy on par with Timeloop in our problem space. To account for performance variations in real hardware that are difficult to capture and express in analytical models, we additionally train a differentiable DNN model to further improve the accuracy of the performance model.

We target the open-source DNN accelerator Gemmini [4], whose most notable architectural components are 1) a systolic array of processing elements (PEs), 2) accumulator SRAM, 3) scratchpad SRAM, and 4) DRAM. Specifically, we target the weight-stationary (WS) configuration of Gemmini.

We evaluate Gemmini at two levels of fidelity: Timeloop and RTL. We use Gemmini-TL to refer to the Timeloop definition of an accelerator analogous to Gemmini, evaluated with an architectural model, and Gemmini-RTL to refer to the RTL implementation of Gemmini available at https://github.com/ucb-bar/gemmini, evaluated using cycle-accurate RTL simulation.

A. Computing Hardware Resource Requirements

As depicted in Figure 3, the capacity requirements at each level are first computed. Then, we take a parameter-wise max to generate a design that will support all current mappings. The exact formulas are not enumerated here, but their accuracy relative to Timeloop is demonstrated in Section IV-E.

1) PE Capacity Requirements: Gemmini supports only square arrays of processing elements. In its WS (weight-stationary) configuration, it can parallelize the input-channel and output-channel dimensions, each along one side of the array. Hence, we need to configure a square PE array that is large enough to accommodate the square of the larger of these two spatial factors.

2) Buffer Capacity Requirements: The buffer capacities required at a given level for the weight (W), input (I), and output (O) tensors are computed from the spatial and temporal tiling factors at and inner to that level. The total buffer capacity requirement at a given memory level is the sum of the sizes of each tensor stored at that level.

B. Traffic Estimation

To capture the performance of the accelerator, we utilize differentiable but non-convex functions to model the writes, updates, and reads to each buffer level. These values are computed per mapping.

C. Latency Modeling

We calculate the latency cycles required for compute by dividing the total number of multiply-accumulate (MAC) operations in a layer by the product of all spatial factors in a mapping (i.e., the number of parallel processing elements utilized). To compute memory access latency, we divide the total number of memory accesses by the memory bandwidth. We calculate the memory latency for each memory level i utilized in Gemmini, including accumulator SRAM, scratchpad SRAM, and DRAM. We take the maximum latency among all memory levels and the compute as the final latency, since performance is limited either by memory or by compute. The latency formulations are provided below and transcribed in the code sketch that follows:

Compute Latency = (# of MACs in Layer) / Π(Spatial Factors)
Accesses(i) = Reads(i) + Updates(i) + Writes(i)
Mem Latency(i) = Accesses(i) / Bandwidth(i)
Mem Latency = max_i(Mem Latency(i))
Latency = max(Compute Latency, Mem Latency)
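The formulations above translate directly into code. The sketch below is our transcription; the memory levels, access counts, and bandwidths in the example are placeholders:

```python
# Direct transcription of the roofline latency formulas above.

def layer_latency(macs, spatial_factors, accesses, bandwidth):
    """accesses/bandwidth: dicts keyed by memory level (e.g., 'accumulator',
    'scratchpad', 'dram'); accesses[i] = reads + updates + writes at level i."""
    compute = macs
    for f in spatial_factors:
        compute /= f                                  # MACs / prod(spatial)
    mem = max(accesses[i] / bandwidth[i] for i in accesses)
    return max(compute, mem)                          # memory- or compute-bound

# Example with made-up counts: a 64x64 PE array and three memory levels.
lat = layer_latency(
    macs=1.2e9, spatial_factors=[64, 64],
    accesses={"accumulator": 5e6, "scratchpad": 8e6, "dram": 2e6},
    bandwidth={"accumulator": 256, "scratchpad": 128, "dram": 32},
)
print(lat)  # compute-bound in this example
```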
D. Energy Modeling

Energy is modeled via data collected for a 40nm process using Accelergy [28] and CACTI [18]. In our model, compute, register access, and DRAM access energy are constant per word, whereas SRAM access energy per word scales with the number of SRAM rows and columns.

Energy(i) = Accesses(i) × EPA(i)
Energy = MACs × EPA_PE + Σ_{i∈M} Energy(i)

E. Modeling Gemmini-TL

To demonstrate that our differentiable performance model does not compromise accuracy relative to other analytical models, we conducted experiments to show that DOSA has accuracy equivalent to Timeloop [20] for an accelerator analogous to Gemmini. We compare 100 random Gemmini configurations, 73 matrix multiplication and convolutional layers, and 10 random mappings per layer. Figure 4 demonstrates that the latency, energy, and EDP results from our differentiable model correlate almost perfectly with the results from Timeloop.

Fig. 4. Correlation of the DOSA differentiable model with Gemmini-TL (EDP correlation, r² = 1.0).

F. Modeling Gemmini-RTL

In general, analytical models do not completely capture hardware performance [27], [29]. Variations caused by specific implementation details and complex hardware-software interactions may be unknown to the designer or difficult to capture mathematically. One potential solution is to augment analytical models with learned surrogate models. Since many learned models, such as deep neural networks or polynomial regression models, are differentiable, DOSA is particularly well suited to work with such models.

In this case, we train a deep neural network to predict the difference between our analytical model's latency predictions for a layer and the real latency of Gemmini-RTL, evaluated using FireSim [11]. The model's inputs include the layer's dimensions, a mapping, and a hardware configuration. The inputs are also augmented with the roofline latencies computed by the analytical model. The model's architecture is similar to that of the model used in Mind Mappings [5]. It contains 7 hidden fully-connected layers and a total of 5,737 parameters.

Fig. 5. Accuracy of Gemmini-RTL latency modeling with and without DNN augmentation (analytical model only: r² = -0.28; analytical+DNN: r² = 0.55).

V. DOSA OPTIMIZATION

Constructing a differentiable performance model allows DOSA to optimize hundreds of parameters (tens per layer, times tens of layers) at once using gradient descent (GD). Differentiability is implemented using PyTorch automatic differentiation. GD start points are generated via a random hardware configuration plus CoSA [7] mappings. The GD loss term is the total performance metric, for example, energy-delay product (EDP). We compute the EDP of a full model by summing each mapping's latencies and energies, then multiplying the resulting sums.

Loop Ordering. Notably, the loop-ordering term of the mapping is not differentiated in our formulation. The loop ordering of each layer is shuffled every time mappings are rounded to the nearest valid mapping, and the loop ordering with the best differentiable-model prediction is selected. We select between 3 loop orderings per layer, per level: weight-stationary, output-stationary, and input-stationary. As noted by works such as AIRCHITECT [24], it is possible to accurately analyze problem dimensions and buffer sizing to select the optimal dataflow. Others have noted that dataflow and loop ordering are much less impactful than tiling parameters in the mapping problem [9], [31]. This makes the hardware-mapping co-design problem more amenable to gradient descent. Another way to optimize non-differentiable terms is to add a neural network-based surrogate model that can propagate a gradient to these terms, as shown later in this work.

Start Point Rejection. In subsequent iterations of start-point generation, a start point is rejected, and a new hardware configuration is selected, if its differentiable-model-predicted performance is more than 10× that of the best start point seen thus far.

Rounding. Since gradient descent may result in non-integer tiling factors, before any mapping is evaluated it is rounded to the nearest valid mapping by rounding each tiling factor to the nearest divisor of its corresponding problem dimension, subject to the constraint that the rounding process does not cause the product of tiling factors for that dimension to exceed the total problem size. This process iterates from the innermost to the outermost memory level (sketched below).
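A sketch of the divisor rounding just described, under a simplified reading of the product constraint (one problem dimension, innermost level first):

```python
# Snap continuous tiling factors to divisors of the problem dimension,
# keeping the running product bounded by the dimension size.

def round_tiling(factors, dim_size):
    """factors: per-memory-level factors for one dimension, innermost first."""
    divisors = [d for d in range(1, dim_size + 1) if dim_size % d == 0]
    rounded, product = [], 1
    for f in factors:
        best = min(divisors, key=lambda d: abs(d - f))  # nearest divisor
        while product * best > dim_size:                # enforce the bound
            best = max(d for d in divisors if d < best)
        rounded.append(best)
        product *= best
    return rounded

print(round_tiling([3.7, 2.2, 6.9], dim_size=56))  # -> [4, 2, 7]
```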
Preventing Exploration of Invalid Mappings. We do not include tiling factors at the outermost (DRAM) level as optimization targets; instead, we infer them by dividing the total problem size in each dimension by the product of the rest of the tiling factors for that dimension. In order to prevent exploration outside the space of valid mappings, we add a loss term for DRAM tiling factors less than 1.

VI. EVALUATION

A. Experimental Setup

We compare the performance of DSE algorithms on a variety of target DNN models that handle a diverse set of tasks, such as natural language processing, image classification, object detection, and image segmentation. The hardware parameters we select using the capacity-requirement calculations in Section IV are PE dimensions, accumulator SRAM sizing, and scratchpad SRAM sizing. PE dimensions come from spatial tiling factors, which can be used directly as they are always positive integers. The PE array size is capped at 128x128. SRAM sizes are rounded up to increments of 1 KB. For these experiments, the specific descent algorithm DOSA uses is Adam, an optimizer similar to gradient descent with momentum. Rounding happens every 500 steps, and GD is run for 1,490 steps on each start point, unless otherwise noted.

We set up CoSA with the scratchpad equally partitioned between inputs and weights. Our Bayesian optimization-based hardware-mapping optimizer baseline is similar to Spotlight [23]. This is a two-loop method which trains a Gaussian process model with 100 hardware designs and 100 mappings per layer per hardware design, and uses this model to select the hardware design and mappings with the best predicted performance from 10,000 candidates per problem.

B. Hardware-Mapping Co-Search Performance

Our evaluation finds that DOSA is able to identify significantly more performant co-design points than either random search or Bayesian optimization with a similar number of samples. BB-BO uses Timeloop simulation as a black-box optimization metric for Gemmini-TL. The random-search- and DOSA-generated co-design points are also evaluated under this setup. After around 10,000 model evaluations, the geometric mean of EDP improvements for DOSA is 2.80× versus random search and 12.59× versus BO. Evaluations done using Timeloop are considered equivalent to evaluations done using DOSA's differentiable model.

Fig. 6. DOSA EDP optimization of Gemmini-TL on 4 distinct workloads (BERT, ResNet-50, RetinaNet, and U-Net co-search), versus baselines. Each line represents the mean (across 5 runs) best point found after x model evaluations. The shaded regions represent a 95% confidence interval across the 5 runs.
C. Gemmini-RTL Optimization with DOSA

In this section, we assess the efficacy of our one-loop, differentiable-model-based gradient descent approach for real hardware design. We also enhance the analytical model with a DNN model to narrow the gap between analytical-model predictions and actual hardware performance.

After modifying the DOSA latency predictors to include the learned latency model, we run gradient descent and generate a predicted optimal set of mappings and buffer sizes for a 16x16-PE Gemmini-RTL, fixing PE dimensions and adjusting only buffer sizing and mappings. We compare the performance of DOSA-generated mappings to the mappings generated by Gemmini-RTL's default heuristic-based mapper and the default scratchpad and accumulator sizings of 128 KB and 32 KB, respectively. We run DOSA in two settings. First, we utilize the original formulation from Section IV to generate scratchpad and accumulator sizings, along with corresponding mappings, for each workload. Second, we replace the original analytical-model-based latency predictor with the DNN-augmented version. We initialize gradient descent with Gemmini-RTL's default buffer sizings plus CoSA mappings. Over the four test workloads, which are not included in the training data for the latency predictor, the analytical-only version of DOSA achieves a 1.35× EDP improvement, and the Gemmini-RTL-trained version of DOSA yields a 2.07× improvement, as shown in Figure 7.

Fig. 7. DOSA augmented with a learned Gemmini-RTL performance model ("DOSA Analytical+DNN") finds more performant hardware-mapping co-design points than Gemmini's default hardware and mapper or DOSA with the analytical model only ("DOSA Analytical"). (Workloads: BERT, ResNet-50, U-Net, RetinaNet, and their geomean; y-axis: normalized energy-delay product.)

VII. CONCLUSION

In this paper, we presented DOSA, a model-based approach to mapping-first DSE. By constructing a differentiable analytical performance model for a DNN accelerator, we can use gradient descent to perform an efficient one-loop co-search of both the hardware and mapping spaces. This enables us to perform DSE targeting multi-layer neural net workloads, attaining an EDP 2.80× better than random search and 12.59× better than Bayesian optimization, while using a similar number of samples.

DOSA demonstrates that interpretable, designer-trusted architectural modeling and ML-based optimization methods are not necessarily mutually exclusive, and in fact can be combined to improve the accuracy of performance models and the convergence of DSE. The modular construction of our performance model enables DOSA to be more easily extended to different performance objectives than existing performance modeling and optimization methodologies. We demonstrate this principle by pairing our analytical latency model with one trained experimentally on RTL simulation and event-based energy analysis of Gemmini, improving EDP by 2.07× over the default Gemmini configuration.

This modular approach to building and combining performance models and using different sources of performance data for DSE suggests an avenue of attack for optimizing objectives that are expensive to compute (e.g., fine-grained performance simulations), where prior black-box approaches may face challenges. With this work, we move one step closer to bridging the gap between architectural models and real silicon.

ACKNOWLEDGEMENTS

This work was supported in part by the UC Berkeley SLICE Lab industrial sponsors. We thank Hasan Genc and Divija Hasteer for their help with Gemmini, and Xiangyu Xu and Jonathan Wang for their help generating baseline data.
REFERENCES
[1] M. Abadi et al., "TensorFlow: A system for large-scale machine learning," in USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
[2] T. Chen et al., "MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems," arXiv preprint arXiv:1512.01274, 2015.
[3] T. Chen et al., "TVM: An automated end-to-end optimizing compiler for deep learning," in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018, pp. 578-594.
[4] H. Genc et al., "Gemmini: Enabling systematic deep-learning architecture evaluation via full-stack integration," in 2021 58th ACM/IEEE Design Automation Conference (DAC), 2021, pp. 769-774.
[5] K. Hegde, P.-A. Tsai, S. Huang, V. Chandra, A. Parashar, and C. W. Fletcher, "Mind Mappings: Enabling efficient algorithm-accelerator mapping space search," in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2021.
[6] Q. Huang, C. Hong, J. Wawrzynek, M. Subedar, and Y. S. Shao, "Learning a continuous and reconstructible latent space for hardware accelerator design," in Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), 2022.
[7] Q. Huang et al., "CoSA: Scheduling by constrained optimization for spatial accelerators," in Proceedings of the International Symposium on Computer Architecture (ISCA), 2021.
[8] S.-C. Kao and T. Krishna, "GAMMA: Automating the HW mapping of DNN models on accelerators via genetic algorithm," in Proceedings of the International Conference on Computer-Aided Design (ICCAD), 2020.
[9] S.-C. Kao, A. Parashar, P.-A. Tsai, and T. Krishna, "Demystifying map space exploration for NPUs," in 2022 IEEE International Symposium on Workload Characterization (IISWC), 2022, pp. 269-281.
[10] S.-C. Kao, M. Pellauer, A. Parashar, and T. Krishna, "DiGamma: Domain-aware genetic algorithm for HW-mapping co-optimization for DNN accelerators," in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2022, pp. 232-237.
[11] S. Karandikar et al., "FireSim: FPGA-accelerated cycle-exact scale-out system simulation in the public cloud," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), 2018, pp. 29-42.
[12] J. Karro, G. Kochanski, and D. Golovin, "Black box optimization via a Bayesian-optimized genetic algorithm," Proc. OPTML, 2017.
[13] F. Kjolstad, S. Kamil, S. Chou, D. Lugato, and S. Amarasinghe, "The tensor algebra compiler," in Proceedings of the International Conference on Object Oriented Programming Systems Languages and Applications (OOPSLA), 2017.
[14] A. Kumar, A. Yazdanbakhsh, M. Hashemi, K. Swersky, and S. Levine, "Data-driven offline optimization for architecting hardware accelerators," in Workshop on ML for Systems at the Conference on Neural Information Processing Systems (NeurIPS), 2021.
[15] H. Kwon et al., "MAESTRO: A data-centric approach to understand reuse, performance, and hardware cost of DNN mappings," in Proceedings of the International Symposium on Microarchitecture (MICRO), 2020.
[16] R. Li, Y. Xu, A. Sukumaran-Rajam, A. Rountev, and P. Sadayappan, "Analytical characterization and design space exploration for optimization of CNNs," in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2021.
[17] Y. Lin, M. Yang, and S. Han, "NAAS: Neural accelerator architecture search," in Design Automation Conference (DAC), 2021.
[18] N. Muralimanohar, R. Balasubramonian, and N. Jouppi, "CACTI 6.0: A tool to model large caches," HP Laboratories, 2009.
[19] NVIDIA, "TensorRT," 2018. [Online]. Available: https://developer.nvidia.com/tensorrt
[20] A. Parashar et al., "Timeloop: A systematic approach to DNN accelerator evaluation," in Proceedings of the International Symposium on Performance Analysis of Systems and Software (ISPASS), 2019.
[21] A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.
[22] A. Sabne, "XLA: Compiling machine learning for peak performance," 2020.
[23] C. Sakhuja, Z. Shi, and C. Lin, "Learning a continuous and reconstructible latent space for hardware accelerator design," in International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2023.
[24] A. Samajdar, J. M. Joseph, M. Denton, and T. Krishna, "AIRCHITECT: Learning custom architecture design and mapping space," 2023.
[25] Z. Shi, C. Sakhuja, M. Hashemi, K. Swersky, and C. Lin, "Using Bayesian optimization for hardware/software co-design of neural accelerators," in Workshop on ML for Systems at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
[26] R. Venkatesan et al., "MAGNet: A modular accelerator generator for neural networks," in Proceedings of the International Conference on Computer-Aided Design (ICCAD), 2019.
[27] I. Wang, P. Chakraborty, Z. Y. Xue, and Y. F. Lin, "Evaluation of gem5 for performance modeling of ARM Cortex-R based embedded SoCs," Microprocessors and Microsystems, vol. 93, p. 104599, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0141933122001442
[28] Y. N. Wu, J. S. Emer, and V. Sze, "Accelergy: An architecture-level energy estimation methodology for accelerator designs," in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2019, pp. 1-8.
[29] S. L. Xi, H. Jacobson, P. Bose, G.-Y. Wei, and D. Brooks, "Quantifying sources of error in McPAT and potential impacts on architectural studies," in 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 2015, pp. 577-589.
[30] Q. Xiao, S. Zheng, B. Wu, P. Xu, X. Qian, and Y. Liang, "HASCO: Towards agile hardware and software co-design for tensor computation," in Proceedings of the International Symposium on Computer Architecture (ISCA), 2021.
[31] X. Yang et al., "Interstellar: Using Halide's scheduling language to analyze DNN accelerators," in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020.
[32] A. Yazdanbakhsh et al., "Apollo: Transferable architecture exploration," arXiv preprint arXiv:2102.01723, 2021.
[33] D. Zhang et al., "A full-stack search technique for domain optimized deep learning accelerators," in Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '22). New York, NY, USA: Association for Computing Machinery, 2022, pp. 27-42. [Online]. Available: https://doi.org/10.1145/3503222.3507767
ML for Computer Architecture and Systems (MLArchSys), ISCA, 2023

Lightweight ML-based Runtime Prefetcher Selection on Many-core Platforms

Erika S. Alcorta∗‡, Neeraja J. Yadwadkar∗†, Andreas Gerstlauer∗
∗The University of Texas at Austin. esalcort@utexas.edu, neeraja@austin.utexas.edu, gerstl@utexas.edu
‡Ampere Computing. †VMWare Research.

Abstract: Modern computer designs support composite prefetching, where multiple individual prefetcher components are used to target different memory access patterns. However, multiple prefetchers competing for resources can drastically hurt performance, especially in many-core systems where cache and other resources are shared and very limited. Prior work has proposed mitigating this issue by selectively enabling and disabling prefetcher components during runtime. Traditional approaches proposed heuristics that are hard to scale with increasing core and prefetcher component counts. More recently, deep reinforcement learning was proposed. However, it is too expensive to deploy in real-world many-core systems. In this work, we propose a new phase-based methodology for training a lightweight supervised learning model to manage composite prefetchers at runtime. Our approach improves the performance of a state-of-the-art many-core system by up to 25% and by 2.7% on average over its default prefetcher configuration.

I. INTRODUCTION

Hardware data prefetching can reduce memory latency and significantly improve the performance of many applications, provided it accurately and promptly detects their memory access patterns. However, individual prefetchers typically target specific or limited sets of patterns [12]. To address this limitation, modern processors combine multiple prefetcher components, thus covering a wider range of access patterns than monolithic prefetchers [12]. Increasing the number of prefetches in the system can lead to higher contention and pollution of shared resources like memory bandwidth and cache space [9], [14]. Furthermore, in multi-core systems, enabling prefetching can sometimes hurt performance depending on the workload [10]. Consequently, modern processors offer users the ability to adjust prefetcher components through registers [7], [14], but selecting when to enable or disable prefetcher components for any program application is a challenging task.

The variety of dynamic workload behaviors in program applications is very large, and the best prefetcher selection may change depending on the workload behavior. For example, Fig. 1 shows the execution time of 10 programs from the SPEC CPU Int Rate 2017 multi-programmed benchmark suite [18] running on a many-core hardware platform with three different prefetcher configurations: ON, OFF, and Def. ON enables all prefetcher components; OFF disables all prefetcher components; and Def sets the default configuration, which enables one prefetcher. The figure depicts that the best selection is different for each program.

Fig. 1. Speedup (higher is better) of SPEC CPU Int Rate 2017 benchmarks with all prefetchers disabled (OFF) and all prefetchers enabled (ON) compared to the default config (Def). The best configuration depends on the workload.
While this example only compares three configurations, modern systems offer more options, increasing the complexity of runtime decisions that map workload behaviors to prefetcher selections.

Previous research has explored various techniques for tuning prefetcher components at runtime to maximize performance, a task commonly known as runtime adaptive prefetching. Some studies use heuristics and explore all or a subset of configurations during program execution to make decisions [11], [14], [15]. However, exploring configurations during runtime misses performance opportunities and does not scale with increasing configurations and core counts. More recent works have used machine learning (ML) models to train a policy offline and evaluate it online [7], [9]. However, they do not provide sufficient proof that their models are generalized enough to handle unseen workloads, and their proposed models are too expensive to implement on a real-world platform. Furthermore, none of these runtime adaptive prefetching studies have investigated many-core platforms, which present unique challenges that do not manifest at lower core counts.

In this work, we propose a runtime prefetcher selection approach that uses a low-overhead machine learning model to enable or disable prefetcher components based on their expected performance improvement on a state-of-the-art many-core platform. We collect hardware counter data to monitor the system workload and propose a new methodology that uses phase classification [1] and supervised learning to correlate workload phases with the best selection of prefetcher components. We demonstrate the effectiveness of our approach by deploying a software-based version on a state-of-the-art cloud-scale hardware platform. Our approach can also be implemented in hardware on future processor designs.

We summarize the contributions of this paper as follows:
1) We propose phase classification to group similar workload behaviors and find the best prefetcher selection for each phase using a supervised learning formulation.
2) We implement a decision tree model that is lightweight, requiring only 42 bytes of storage, yet accurate enough to improve the execution time of cloud workloads running on a 160-core AmpereOne, a state-of-the-art many-core platform.
3) We demonstrate our model's ability to generalize and improve the performance of workloads that were not seen during training. Our evaluation includes data collected from diverse multi-programmed and multi-threaded workloads. Our results show that our model can improve the performance of new workloads by up to 25% over the platform's default prefetcher and by 2.7% on average.

II. RELATED WORK

Prior work has proposed numerous approaches to reduce the contention generated by prefetchers in multi-core systems. Some work is concerned with extending the design of prefetchers [3]-[6], [13], [16], [19], while others have proposed prefetcher-aware cache insertion and eviction policies to manage cache contention [8], [19]. While these solutions focus on tuning an individual prefetcher, our approach is concerned with managing the components of composite prefetchers.

Various studies in composite prefetching management propose heuristics to select prefetchers at runtime [11], [14], [15]. These approaches study different metrics to rank prefetcher configurations based on performance [11], [15] or other heuristics [14]. The ranking is obtained during execution time by performing an exhaustive search that executes every prefetcher configuration for one sample.
The best-ranked configuration is selected for a pre-determined period of time. This process is repeated after either a fixed time window [14] or a phase change, defined by a fixed percentage change in system performance [11] or annotated in code [15]. However, exhaustively searching multiple configurations during runtime is not scalable as the number of prefetchers and applications increases. Additionally, the time spent searching necessarily misses optimization opportunities. Lastly, ranking prefetcher configurations based on the performance of a single sample fails to acknowledge short-term performance variations in workloads [1], which may lead to selecting the wrong configuration.

More recent work has introduced ML-based composite prefetcher management approaches. These models eliminate the need to search the configuration space exhaustively by learning to generalize from fewer samples. In [7], the authors proposed formulating the problem with contextual bandits. They train one model per prefetcher component while the other prefetchers are always on. However, they do not evaluate the coordination of prefetchers, since the models are not enabled simultaneously. Additionally, they found that they never need to disable some prefetchers in their quad-core system. This is not the case in many-core systems, where it is sometimes beneficial to disable all prefetchers, as was shown in Fig. 1. In [9], the authors propose using deep reinforcement learning (RL) to coordinate multiple prefetchers. However, deploying deep RL models on real-world systems is very expensive in terms of training, power, storage, and latency costs. In contrast, we propose a supervised learning model with minimal costs that can be either implemented in existing runtime management systems or easily deployed in hardware.

Moreover, these studies [7], [9] only considered multi-programmed workloads and did not investigate whether their models can improve the performance of unseen (i.e., not used for training) program applications. Our work demonstrates that our proposed lightweight runtime prefetcher selection model can generalize its predictions to unseen and multi-threaded workloads.

III. PREFETCHER SELECTION MODEL DESIGN

The task of selecting a prefetcher configuration during runtime with a model is represented in Fig. 2. The model aims to map a vector of hardware counter values into a prefetcher selection decision. We collect hardware counters by accessing the performance monitoring units (PMU) of the system and set the prefetcher decision through a model-specific register (MSR). This section outlines our proposed method for designing and training such a model. We start by introducing the problem formulation, followed by an explanation of our approach, which involves both offline analysis and online implementation.

Fig. 2. Prefetcher selection overview (hardware counters from the PMU feed the model, which writes on/off decisions to the L2 prefetcher register via an MSR).

A. Problem Formulation

The goal of a prefetcher selection policy is to minimize the execution time of a workload, which we define as G. The execution of a workload is represented by a trace of hardware counters, U ∈ R^{T×C}, where T is the number of samples and C is the number of collected hardware counters. An observation of U at time t is represented as U_t. The hardware counters are transformed into features X_t = Ω(U_t), X_t ∈ R^M, where M is the number of features (see Table I and the sketch below). For example, this transformation Ω includes calculating the IPC from the instructions and cpu-cycles hardware counters. We use ρ_t to represent the IPC of a sample at time t, ρ_t ∈ X_t.

TABLE I
LISTS OF COLLECTED HARDWARE COUNTERS AND FEATURES.
Hardware counters (U)        | Features (X = Ω(U))
Instructions                 | Instructions per cycle (IPC)
Memory accesses              | Memory accesses per kilo instructions
Branch misses                | Branch misses per kilo instructions
Cache misses                 | Cache misses per kilo instructions
CPU cycles                   | Cache misses to memory accesses ratio
L2 data cache refills        | L2 data cache refills to cache miss ratio
L2 instruction cache refills | L2 instruction cache refills to branch misses ratio
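A sketch of the transformation Ω implied by Table I; the counter names are illustrative dictionary keys, not the platform's actual PMU event names:

```python
# Compute the seven features of Table I from raw counter deltas.

def omega(u: dict) -> dict:
    ki = u["instructions"] / 1e3  # kilo-instructions
    return {
        "ipc": u["instructions"] / u["cpu_cycles"],
        "mem_per_ki": u["mem_accesses"] / ki,
        "br_miss_per_ki": u["branch_misses"] / ki,
        "cache_miss_per_ki": u["cache_misses"] / ki,
        "cache_miss_to_mem": u["cache_misses"] / u["mem_accesses"],
        "l2d_refill_to_miss": u["l2d_refills"] / u["cache_misses"],
        "l2i_refill_to_br_miss": u["l2i_refills"] / u["branch_misses"],
    }
```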
We partition the goal of minimizing the execution time into smaller goals that maximize the IPC of each sample, ρ_t, based on the observation that the average IPC is inversely proportional to the execution time.

At each time step t, a machine learning model, f, predicts an output, y_{t+1}, based on the features X_t, with the goal of maximizing ρ_{t+1}. The output is a one-hot encoded vector, y_t ∈ {0,1}^N, where N is the number of prefetchers, and each element in the vector indicates whether the corresponding prefetcher should be enabled or disabled.

B. Data Analysis and Model Training

After partitioning our goal of minimizing a workload's execution time into smaller goals that maximize the IPC of each sample of the workload, we need to define a ground truth in order to train a supervised learning model. We propose a method that analyzes data and generates labels to train a runtime prefetcher selection model in an offline fashion. Our method is depicted in Fig. 3 and comprises the five stages detailed below; a code sketch of the offline pipeline follows the list.

Fig. 3. Proposed analysis to generate our runtime prefetcher selection model (1: data collection → 2: clustering → 3: phase classification → 4: data set generation <X, y> → 5: model training).

1) Data Collection: We periodically collected hardware counter data from different workloads to later train our model. For each workload, we collected one trace of hardware counters per prefetcher configuration.

2) Clustering: In order to compare the samples of different prefetcher configurations, we propose clustering similar PMU behaviors together to find phases within the workloads. Our methodology involves training a clustering model with data from only one prefetcher configuration. We chose OFF as our baseline, since it shows workload behaviors without the effects of prefetching. We scaled all features to a range between 0 and 1 using a min-max scaler and clustered all the workload traces of the baseline configuration using k-means. This produces a table of cluster centers, which is then used for phase classification.

3) Phase Classification: Once the cluster centers have been generated using data from the baseline configuration, we use them to classify the phases of the data samples in all traces. Next, we group all samples in the same phase and prefetcher configuration and calculate the average IPC per phase. This allows us to compare the performance of different prefetcher configurations across workload phases.

4) Training Set Generation: We use the phase classification labels to determine the best prefetcher configuration for each sample, which we define as the configuration that yields the highest average IPC for the corresponding phase. We consider this definition our ground truth. Associating each sample and its phase classification with the best prefetcher configuration generates a supervised training set that assigns each sample's features X_t to the ground-truth prefetcher selection, y_t.

5) Model Training: We use our generated data set to train a decision tree model. We found that it only needs four input features instead of seven while maintaining high prediction accuracy. This reduces the number of hardware counters that we need to collect during runtime.
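The offline pipeline (stages 2-5) can be sketched with scikit-learn as below; the array shapes, cluster count, and the use of the baseline trace alone for training are our simplifications of the described method:

```python
# Offline label generation and training sketch; random data stands in for
# real traces, and labels are config indices rather than the paper's
# per-prefetcher one-hot vector.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = [rng.random((1000, 7)) for _ in range(12)]  # one trace per config; 0 = OFF
ipc = [x[:, 0] for x in X]                      # assume column 0 holds IPC

scaler = MinMaxScaler().fit(X[0])               # stage 2: scale, then cluster
kmeans = KMeans(n_clusters=8, n_init=10).fit(scaler.transform(X[0]))

phases = [kmeans.predict(scaler.transform(x)) for x in X]  # stage 3
avg_ipc = np.array([[ipc[c][phases[c] == p].mean() for p in range(8)]
                    for c in range(12)])        # mean IPC per (config, phase);
                                                # guard against empty phases
                                                # with real traces

best_cfg = avg_ipc.argmax(axis=0)               # stage 4: ground truth per phase
labels = best_cfg[phases[0]]                    # label each baseline sample

tree = DecisionTreeClassifier(max_depth=4).fit(X[0], labels)  # stage 5
```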
We found that it only needs four input features instead of seven while maintaining high prediction accuracy. This reduces the number of hardware counters that we need to collect during runtime.

C. Runtime Implementation

We implemented our prefetcher selection model as a program with a thread that is invoked every 100 ms. The thread accesses hardware counter values using perf's system call. Then, it transforms the counters into features and performs inference on the decision tree. Finally, it writes the decision tree output to the corresponding fields in the prefetcher MSR.
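A rough Python analogue of this loop is sketched below. The production version is a native thread using the perf system call directly, so shelling out to `perf stat` is a simplification; the paper does not list which four features were selected, so the four computed here are illustrative; and the register write is stubbed out because the prefetcher MSR layout is platform-specific.

```python
# Illustrative 100 ms prefetcher-selection loop (not the actual implementation).
import subprocess

EVENTS = ["instructions", "cycles", "branch-misses", "cache-misses"]

def sample_counters(interval_s=0.1):
    # -a: system-wide, -x,: CSV output; counts events while `sleep` runs.
    res = subprocess.run(["perf", "stat", "-a", "-x", ",",
                          "-e", ",".join(EVENTS), "sleep", str(interval_s)],
                         capture_output=True, text=True)
    counts = {}
    for line in res.stderr.strip().splitlines():
        parts = line.split(",")          # CSV: value,unit,event,...
        if len(parts) >= 3:
            try:
                counts[parts[2]] = float(parts[0])
            except ValueError:
                pass                     # e.g. "<not counted>"
    return counts

def to_features(c):
    kilo_inst = c["instructions"] / 1000.0
    return [c["instructions"] / c["cycles"],                 # IPC
            c["cache-misses"] / kilo_inst,                   # cache misses / kinst
            c["branch-misses"] / kilo_inst,                  # branch misses / kinst
            c["cache-misses"] / max(c["branch-misses"], 1)]  # illustrative ratio

def write_prefetcher_config(selection: int):
    pass  # platform-specific MSR write, omitted here

def selection_loop(model):              # model: the trained decision tree
    while True:
        feats = to_features(sample_counters())
        write_prefetcher_config(int(model.predict([feats])[0]))
```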
IV. EXPERIMENTAL RESULTS

We collected data from one multi-programmed benchmark suite, SPEC CPU Int Rate 2017 [18], and two multi-threaded Java benchmark suites, DaCapo [2] and Renaissance [17], to evaluate our approach. We use SPEC CPU workloads for training and validation and DaCapo and Renaissance for testing. All workloads run on AmpereOne, a cloud-scale many-core platform with 160 ARMv8.6+ ISA cores, 2 MB of L2 cache per core, 64 MB of system-level cache, and 256 GB of DDR5-4800 memory running Fedora Linux 36. The platform has 12 different prefetcher configurations, which can be tuned with a hardware register. For each prefetcher configuration, we collected one trace of hardware counters per workload, resulting in a total of 120 traces (12 prefetcher options × 10 workloads). Each trace consisted of C = 7 hardware counters collected periodically every 100 ms with Linux's perf tool. The hardware counters were transformed into M = 7 features. See Table I for the lists of hardware counters and features.

Fig. 4. Performance improvement (execution time reduction) of different decision tree model depths (1, 2, and 4) over the default prefetcher on SPEC CPU benchmarks.

Fig. 5. Performance improvement (execution time reduction) of different decision tree model depths (1, 2, and 4) over the default prefetcher on SPEC CPU benchmarks compiled with gcc-12.

Fig. 4 shows the speedup of all SPEC CPU benchmarks when prefetcher selection is enabled, exploring the decision tree depth hyperparameter with depths of 1, 2, and 4. The results are normalized to the system's default prefetcher. The geomean is shown on the right side of the plot. On average, enabling system-wide runtime adaptive prefetching improves the performance of SPEC workloads by 1.9%, and by up to 5.5% in the best scenario.

We want to measure the ability of the model to improve performance even with system changes. For this test, we evaluated our models on the same programs but with different binaries. Specifically, we recompiled the SPEC CPU benchmarks with a different compiler, gcc-12, which introduces several new code optimizations compared to previous versions, such as improved vectorization and structure splitting. So although the same work is completed, the data access patterns may vary widely, as in the case of 505.mcf. Then, we enabled prefetcher selection with the same models that were previously trained on SPEC programs compiled with gcc-10. The results are shown in Fig. 5. The best-performing decision tree has a depth of 4. We observe a similar performance improvement trend between the gcc-10 and gcc-12 experiments and demonstrate that the model still improves performance even when presented with different binary files.

We further test the performance of our model by presenting it with completely new workloads (not used for training). We ran workloads from the DaCapo and Renaissance suites. Note that in addition to being new workloads, they are multi-threaded instead of multi-programmed, written in a different language (Java), and, compared to SPEC CPU, they spend more time in operating system code, network stack, and synchronization (locking and snooping). We tested our best-performing decision tree with depth 4 on each suite and show our results in Fig. 6 and Fig. 7. For most of the workloads, dynamic prefetcher selection reduces the execution time, with the best scenario being 25%. However, as opposed to the SPEC CPU results, some programs lose performance. Nonetheless, the geomean performance improvements for the DaCapo and Renaissance suites are 1.7% and 3%, respectively. The improved performance of all these unseen workloads together is 2.7%.

Fig. 6. Performance improvement (execution time reduction) of a runtime prefetcher selection tree of depth 4 trained on SPEC CPU over the default prefetcher on DaCapo benchmarks.

Fig. 7. Performance improvement (execution time reduction) of a runtime prefetcher selection tree of depth 4 trained on SPEC CPU over the default prefetcher on Renaissance benchmarks.

A major benefit of our proposed model, as opposed to prior work, is the lightweight implementation. The decision tree has a maximum depth of 4. It requires storing 15 nodes with two parameters each: the feature ID (in our case, 2 bits for four features) and the compare value (we use 16 bits, but this can be reduced to 8 bits). Additionally, the eight leaf nodes require storing the prefetcher selection when true or false (4 bits in our case). The total size of our model is only 42 bytes, which makes it easy to fit on any embedded firmware or hardware deployment.
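As a back-of-envelope check of the 42-byte figure above (a full binary tree of depth 4 has 15 internal nodes, and the 8 deepest internal nodes each store a selection for their true and false outcomes):

```python
# Storage arithmetic for the depth-4 decision tree described above.
internal_nodes = 15          # full binary tree of depth 4
feature_id_bits = 2          # four features
threshold_bits = 16          # compare value
node_bits = internal_nodes * (feature_id_bits + threshold_bits)  # 270 bits

leaf_parents = 8             # deepest internal nodes
selection_bits = 4           # 12 configurations fit in 4 bits
leaf_bits = leaf_parents * 2 * selection_bits                    # 64 bits (true/false)

print((node_bits + leaf_bits) / 8)   # 334 bits = 41.75 -> ~42 bytes
```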
V. CONCLUSION AND FUTURE WORK

We proposed a lightweight model for runtime prefetcher selection for many-core platforms. It can improve the performance of unseen workloads by up to 25%, and by 2.7% on average, over the default prefetcher.

These early results suggest that runtime prefetcher selection can be formulated as a workload-agnostic offline supervised learning problem; however, further investigation is required to determine why it performed poorly in a few benchmarks. The investigation should determine whether the problem is training coverage, i.e., the input features are in a different distribution from the training set, or the problem is workload specific, i.e., for the same set of input features, the best prefetcher selection is different depending on the running program. Our proposed approach estimates the best prefetcher selection for all the cores in the system. Future work includes investigating lightweight runtime prefetcher selection that is more practical for per-core decisions.

ACKNOWLEDGEMENTS

We thank Mahesh Madhav and Scott Tetrick, who played a vital role in the success of this research project. This work was supported in part by Ampere Computing and NSF grant CCF-1763848.

REFERENCES

[1] E. S. Alcorta Lozano and A. Gerstlauer, "Learning-based Phase-aware Multi-core CPU Workload Forecasting," ACM Transactions on Design Automation of Electronic Systems, vol. 28, no. 2, pp. 23:1–23:27, Dec. 2022. [Online]. Available: https://doi.org/10.1145/3564929
[2] S. M. Blackburn, R. Garner, C. Hoffman, A. M. Khan, K. S. McKinley, R. Bentzur, A. Diwan, D. Feinberg, D. Frampton, S. Z. Guyer, M. Hirzel, A. Hosking, M. Jump, H. Lee, J. E. B. Moss, A. Phansalkar, D. Stefanović, T. VanDrunen, D. von Dincklage, and B. Wiedermann, "The DaCapo benchmarks: Java benchmarking development and analysis," in Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, Oct. 2006, pp. 169–190.
[3] D. Deb, J. Jose, and M. Palesi, "COPE: Reducing Cache Pollution and Network Contention by Inter-tile Coordinated Prefetching in NoC-based MPSoCs," ACM Transactions on Design Automation of Electronic Systems, vol. 26, no. 3, pp. 17:1–17:31, Dec. 2021. [Online]. Available: https://doi.org/10.1145/3428149
[4] E. Ebrahimi, O. Mutlu, C. J. Lee, and Y. N. Patt, "Coordinated control of multiple prefetchers in multi-core systems," in IEEE/ACM International Symposium on Microarchitecture, Dec. 2009, pp. 316–326.
[5] M. Hashemi, K. Swersky, J. Smith, G. Ayers, H. Litz, J. Chang, C. Kozyrakis, and P. Ranganathan, "Learning memory access patterns," in Proceedings of the International Conference on Machine Learning, vol. 80, Jul. 2018, pp. 1919–1928. [Online]. Available: https://proceedings.mlr.press/v80/hashemi18a.html
[6] W. Heirman, K. D. Bois, Y. Vandriessche, S. Eyerman, and I. Hur, "Near-side prefetch throttling: adaptive prefetching for high-performance many-core processors," in Proceedings of the International Conference on Parallel Architectures and Compilation Techniques, Nov. 2018, pp. 1–11. [Online]. Available: https://doi.org/10.1145/3243176.3243181
[7] J. Hiebel, L. E. Brown, and Z. Wang, "Machine Learning for Fine-Grained Hardware Prefetcher Control," in Proceedings of the International Conference on Parallel Processing, Aug. 2019, pp. 1–9. [Online]. Available: https://doi.org/10.1145/3337821.3337854
[8] A. Jain and C. Lin, "Rethinking Belady's Algorithm to Accommodate Prefetching," in ACM/IEEE International Symposium on Computer Architecture, Jun. 2018, pp. 110–123.
[9] M. Jalili and M. Erez, "Managing Prefetchers With Deep Reinforcement Learning," IEEE Computer Architecture Letters, vol. 21, no. 2, pp. 105–108, Jul. 2022.
[10] H. Kang and J. L. Wong, "To hardware prefetch or not to prefetch? a virtualized environment study and core binding approach," in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, Mar. 2013, pp. 357–368. [Online]. Available: https://doi.org/10.1145/2451116.2451155
[11] M. Khan, M. A. Laurenzano, J. Mars, E. Hagersten, and D. Black-Schaffer, "AREP: Adaptive Resource Efficient Prefetching for Maximizing Multicore Performance," in International Conference on Parallel Architecture and Compilation, Oct. 2015, pp. 367–378.
[12] S. Kondguli and M. Huang, "Division of Labor: A More Effective Approach to Prefetching," in ACM/IEEE International Symposium on Computer Architecture, Jun. 2018, pp. 83–95.
[13] N. C. Nachiappan, A. K. Mishra, M. Kandemir, A. Sivasubramaniam, O. Mutlu, and C. R. Das, "Application-aware prefetch prioritization in on-chip networks," in International Conference on Parallel Architectures and Compilation Techniques, Sep. 2012, pp. 441–442.
[14] C. Navarro, J. Feliu, S. Petit, M. E. Gómez, and J. Sahuquillo, "Bandwidth-Aware Dynamic Prefetch Configuration for IBM POWER8," IEEE Transactions on Parallel and Distributed Systems, vol. 31, no. 8, pp. 1970–1982, Aug. 2020.
[15] C. Ortega, L. Alvarez, M. Casas, R. Bertran, A. Buyuktosunoglu, A. E. Eichenberger, P. Bose, and M. Moretó, "Intelligent Adaptation of Hardware Knobs for Improving Performance and Power Consumption," IEEE Transactions on Computers, vol. 70, no. 1, pp. 1–16, Jan. 2021.
[16] B. Panda, "SPAC: A Synergistic Prefetcher Aggressiveness Controller for Multi-Core Systems," IEEE Transactions on Computers, vol. 65, no. 12, pp. 3740–3753, Dec. 2016.
[17] A. Prokopec, A. Rosà, D. Leopoldseder, G. Duboscq, P. Tůma, M. Studener, L. Bulej, Y. Zheng, A. Villazón, D. Simon, T. Würthinger, and W. Binder, "Renaissance: benchmarking suite for parallel applications on the JVM," in Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, Jun. 2019, pp. 31–47. [Online]. Available: https://doi.org/10.1145/3314221.3314637
[18] "SPEC CPU®2017," https://www.spec.org/cpu2017/index.html.
[19] S. Srinath, O. Mutlu, H. Kim, and Y. N. Patt, "Feedback Directed Prefetching: Improving the Performance and Bandwidth-Efficiency of Hardware Prefetchers," in International Symposium on High Performance Computer Architecture, Feb. 2007, pp. 63–74.
 |
LkXKbvcK_c | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Pre-training Segmentation Models for Histopathology

Payden McBee1 pm2kb@virginia.edu
1 Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA
Nazanin Moradinasab1 nm4wu@virginia.edu
Sana Syed2 ss8xj@virginia.edu
2 Department of Pediatrics, University of Virginia, Charlottesville, VA
Donald E. Brown1 deb@virginia.edu

Editors: Under Review for MIDL 2023

Abstract
In limited data settings, transfer learning has proven useful in initializing model parameters. In this work, we compare random initialization, pre-training on ImageNet, and pre-training on histopathology datasets for 2 model architectures across 4 segmentation histopathology datasets. We show that pre-training on histopathology datasets does not always significantly improve performance relative to ImageNet pre-trained weights for both model architectures. We conclude that unless larger labeled datasets or semi-supervised techniques are leveraged, ImageNet pre-trained weights should be used in initializing segmentation models for histopathology.
Keywords: Transfer learning, histopathology, segmentation.

1. Introduction and Related Works
Transfer learning is a technique where a model developed for a specific task can be reused as the initial model for a second task with limited labeled data. A common transfer learning approach for medical images is to start with the standard network architectures, e.g., VGG (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016) pre-trained on large-scale natural images such as ImageNet (Deng et al., 2009) and PASCAL VOC (Everingham et al., 2010), and then fine-tune them on medical images. The effectiveness of pre-trained deep convolutional neural networks (CNNs) with sufficient fine-tuning was investigated on four different medical imaging applications in Tajbakhsh et al. (2016). This study demonstrated that, in most cases, fine-tuning a pre-trained model achieved better performance and robustness in comparison to models trained from scratch. Similarly, Devan et al. (2019) demonstrated that transfer learning with ImageNet can significantly enhance model performance in detecting herpesvirus capsids in microscopy images, particularly when labeled data is limited. Conze et al. (2020) utilized a VGG-11 encoder pre-trained on ImageNet for the shoulder muscle MRI segmentation task. These results indicate that a CNN pre-trained on ImageNet learns features that are applicable to both natural and medical images. However, the gap in features between medical and natural images has motivated pre-training on medical datasets. Ray et al. (2022) demonstrated an increase in performance and faster convergence for CNNs pre-trained on histopathology datasets relative to a model pre-trained on ImageNet. To the best of the authors' knowledge, the efficacy of utilizing a model pre-trained on natural images compared to medical-image pre-trained models for nuclei segmentation tasks has not been investigated.
Our contribution is to systematically compare segmentation performance using different pre-trained weights across 4 datasets derived from whole-slide images: eosinophilic esophagitis (EoE) with 2,056 images, Crohn's disease (Crohns) with 800 images, Colorectal Nuclear Segmentation and Phenotypes (CoNSeP) (Graham et al., 2019) with 660 images, and the PanCancer Histology Dataset for Nuclei Instance Segmentation and Classification (PanNuke) (Gamper et al., 2019) with 7,901 images. We seek to answer the following: Does pre-training on histopathology datasets improve segmentation performance relative to encoders pre-trained on ImageNet?

2. Methods
For our analysis, we use HoVer-Net (Graham et al., 2019) and U-Net++ (Zhou et al., 2018) models. HoVer-Net has three separate task-specific decoders, which are used for nuclei detection, separation, and classification, respectively. U-Net++ has a single decoder that provides pixel classification. The Preact-ResNet50 is utilized as the encoder for both models. For HoVer-Net, we follow the exact hyper-parameters and training strategies presented in (Graham et al., 2019). For U-Net++, we train each model for 200 epochs and select the model that minimizes the validation binary cross-entropy (EoE and Crohns) or cross-entropy (CoNSeP and PanNuke) loss. We train and test each of the models on each of the 4 datasets given encoders with various pre-trained weights. We use the MoNuSAC ResNet50 encoder weights from (Graham et al., 2019), and the other weights are obtained by initializing a model with ImageNet and training it on a given histopathology dataset.

3. Experiments and Results
Table 1 shows the average performance of the U-Net++ and HoVer-Net models over 3 runs across the various pre-trained weights for EoE, Crohns, PanNuke, and CoNSeP. We put the maximum performance for each test set and model across the pre-trained weights in bold, and add a star if the optimal performance is statistically significant. Notably, the models pre-trained on histopathology and the models pre-trained on ImageNet do not have differences that are statistically significant, except for HoVer-Net pre-trained on MoNuSAC for PanNuke, where p = 0.052499 from a Welch's t-test comparing it with the ImageNet pre-trained model performance. This indicates that pre-training on these histopathology datasets does not increase segmentation performance relative to ImageNet weights. The randomly initialized weights are lower for all datasets except U-Net++ for EoE, suggesting that some kind of pre-training is useful. Furthermore, the number of epochs trained when using a model initialized with ImageNet weights is comparable to models pre-trained on histopathology, being significantly larger only for U-Net++ on PanNuke. Thus, there is no set of consistently optimal pre-trained weights, and the ImageNet weights provide the same or better performance than weights from a model pre-trained on multiple histopathology datasets.
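As a note on the statistics above: Welch's t-test is the variant of the two-sample t-test that does not assume equal variances across the two sets of runs. A minimal SciPy sketch follows; the per-run Dice arrays are placeholders, not the paper's data.

```python
# Welch's t-test between two sets of per-run Dice scores (illustrative values).
from scipy.stats import ttest_ind

imagenet_dice = [0.583, 0.586, 0.587]   # placeholder runs
monusac_dice = [0.596, 0.601, 0.609]    # placeholder runs

# equal_var=False selects Welch's variant of the two-sample t-test
t_stat, p_value = ttest_ind(monusac_dice, imagenet_dice, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.6f}")
```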
Also, the training time for models with an ImageNet pre-trained encoder is comparable to that of models pre-trained with histopathology.

Table 1: Model Performance

Model | Pre-trained weights | Crohns Dice | Crohns epochs | EoE Dice | EoE epochs
U-Net++ | Random | 0.55±0.033 | 84 | 0.62±0.009 | 83
U-Net++ | ImageNet | 0.572±0.006 | 31 | 0.615±0.02 | 60
U-Net++ | MoNuSAC | 0.554±0.009 | 109 | 0.612±0.015 | 103
U-Net++ | CoNSeP | 0.565±0.019 | 30 | 0.618±0.018 | 63
U-Net++ | PanNuke | 0.554±0.031 | 11 | 0.599±0.001 | 65
U-Net++ | EoE | 0.557±0.026 | 21 | – | –
U-Net++ | Crohns | – | – | 0.606±0.016 | 89
HoVer-Net | Random | 0.389±0.192 | 74 | 0.572±0.007 | 80
HoVer-Net | ImageNet | 0.609±0.012 | 90 | 0.621±0.004 | 93
HoVer-Net | MoNuSAC | 0.6±0.011 | 83 | 0.624±0.002 | 97

Model | Pre-trained weights | PanNuke Dice | PanNuke epochs | CoNSeP Dice | CoNSeP epochs
U-Net++ | Random | 0.552±0.014 | 73 | 0.65±0.011 | 129
U-Net++ | ImageNet | 0.571±0.013 | 127 | 0.669±0.008 | 82
U-Net++ | MoNuSAC | 0.54±0.012 | 69 | 0.667±0.009 | 109
U-Net++ | CoNSeP | 0.57±0.02 | 50 | – | –
U-Net++ | PanNuke | – | – | 0.678±0.012 | 62
U-Net++ | EoE | 0.593±0.033 | 114 | 0.656±0.039 | 83
U-Net++ | Crohns | 0.577±0.017 | 92 | 0.664±0.022 | 81
HoVer-Net | Random | 0.555±0.007 | 78 | 0.403±0.048 | 67
HoVer-Net | ImageNet | 0.585±0.003 | 94 | 0.67±0.007 | 97
HoVer-Net | MoNuSAC | 0.602*±0.008 | 98 | 0.679±0.016 | 83
HoVer-Net | CoNSeP | 0.589±0.003 | 95 | – | –
HoVer-Net | PanNuke | – | – | 0.644±0.05 | 78
(* statistically significant best result)

4. Conclusion
We showed that training a model with ImageNet pre-trained weights did not have significantly different performance than pre-training on multiple histopathology datasets for 2 state-of-the-art medical segmentation models, U-Net++ and HoVer-Net. This is likely due in part to the relatively small size of the datasets used in pre-training. Small datasets do not allow the model to learn a diversity of features, even when they come from the target domain. Furthermore, the number of training epochs needed to minimize the validation loss did not increase for the models pre-trained with ImageNet relative to those trained on histopathology. We conclude that, unless an abundant amount of histopathology data is available, pre-training on relatively small histopathology datasets is not likely to increase performance or decrease training time relative to an ImageNet baseline.

Acknowledgments
This was supported by the National Center for Advancing Translational Science of the NIH Award UL1TR003015/KL2TR003016 and NIH NIDDK K23 Award 5K23DK117061-03.
References
Pierre-Henri Conze, Sylvain Brochard, Valérie Burdin, Frances T Sheehan, and Christelle Pons. Healthy versus pathological learning transferability in shoulder muscle MRI segmentation using deep convolutional encoder-decoders. Computerized Medical Imaging and Graphics, 83:101733, 2020.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
K Shaga Devan, Paul Walther, Jens von Einem, Timo Ropinski, Hans A Kestler, and Clarissa Read. Detection of herpesvirus capsids in transmission electron microscopy images using transfer learning. Histochemistry and Cell Biology, 151:101–114, 2019.
Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88:303–338, 2010.
Jevgenij Gamper, Navid Alemi Koohbanani, Ksenija Benes, Ali Khuram, and Nasir Rajpoot. PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification. In European Congress on Digital Pathology, pages 11–19. Springer, 2019.
Simon Graham, Quoc Dang Vu, Shan E Ahmed Raza, Ayesha Azam, Yee Wah Tsang, Jin Tae Kwak, and Nasir Rajpoot. HoVer-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Medical Image Analysis, 58:101563, 2019.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
Indranil Ray, Geetank Raipuria, and Nitin Singhal. Rethinking ImageNet pre-training for computational histopathology. In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pages 3059–3062, 2022. doi: 10.1109/EMBC48229.2022.9871687.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Nima Tajbakhsh, Jae Y Shin, Suryakanth R Gurudu, R Todd Hurst, Christopher B Kendall, Michael B Gotway, and Jianming Liang. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Transactions on Medical Imaging, 35(5):1299–1312, 2016.
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. UNet++: A nested U-Net architecture for medical image segmentation, 2018. URL https://arxiv.org/abs/1807.10165.
 |
3dRs49a1Xmt | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

DD-CISENet: Dual-Domain Cross-Iteration Squeeze and Excitation Network for Accelerated MRI Reconstruction

Xiongchao Chen1,2 xiongchao.chen@yale.edu
Zhigang Peng1 zhigang.peng@siemens-healthineers.com
Gerardo Hermosillo Valadez1 gerardo.hermosillovaladez@siemens-healthineers.com
1 Siemens Healthineers, Malvern, PA 19355, USA
2 Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA

Editors: Under Review for MIDL 2023

Abstract
Magnetic resonance imaging (MRI) is widely employed for diagnostic tests in neurology. However, the utility of MRI is largely limited by its long acquisition time. Acquiring fewer k-space data in a sparse manner is a potential solution to reducing the acquisition time, but it can lead to severe aliasing reconstruction artifacts. In this paper, we present a novel Dual-Domain Cross-Iteration Squeeze and Excitation Network (DD-CISENet) for accelerated sparse MRI reconstruction. The information of k-spaces and MRI images can be iteratively fused and maintained using the Cross-Iteration Residual connection (CIR) structures. This study included 720 multi-coil brain MRI cases adopted from the open-source fastMRI dataset (Zbontar et al., 2018). Results showed that the average reconstruction error by DD-CISENet was 2.28 ± 0.57%, which outperformed existing deep learning methods including image-domain prediction (6.03 ± 1.31%, p < 0.001), k-space synthesis (6.12 ± 1.66%, p < 0.001), and dual-domain feature fusion approaches (4.05 ± 0.88%, p < 0.001).
Keywords: Deep learning, MRI reconstruction, dual-domain, multi-coil parallel imaging

1. Introduction
Magnetic resonance imaging (MRI) is an essential clinical diagnostic tool in neurology. However, the long scanning time of MRI can induce many problems, including patient discomfort, high exam cost, and motion artifacts. One potential approach to accelerated MRI scanning is downsampling the k-space measurements. However, images reconstructed from downsampled k-space data display severe aliasing artifacts.
Deep learning has shown great potential in the accelerated sparse reconstruction of MRI. Existing deep learning approaches can generally be classified into three categories. The first category applies sparsely reconstructed MRI images as input to neural networks to predict synthetic fully reconstructed images. The second category utilizes the sparse k-space as network input to generate a synthetic full-view k-space. The third category combines the features of k-space and images in a dual-domain manner to restore the full-view k-space (Eo et al., 2018). However, cross-iteration features were typically ignored in previous dual-domain methods. In this study, we present a novel Dual-Domain Cross-Iteration Squeeze-Excitation Network (DD-CISENet) for the accelerated sparse reconstruction of brain MRI. The incorporated Cross-Iteration Residual (CIR) connections enable data fusion across iterations to enhance the reconstruction accuracy.
Figure 1: The architecture of DD-CISENet. I-Net and K-Net modules are end-to-end connected for dual-domain feature fusion. Cross-Iteration Residual (CIR) connections enable the retention of image and k-space features across iterations. (Legend: IFT = inverse Fourier transform; FT = Fourier transform; DC = data consistency; + = element-wise addition; RSS = root sum-of-squares; the diagram marks CIR connections in the image domain and in k-space, plus input connections for data consistency.)

2. Methods
The diagram of DD-CISENet is presented in Fig. 1. The sparse k-space data K_S is first input to the I-Net_1 module after reconstruction using the inverse Fourier transform (IFT), generating the output I_1 = H_{I1}(F^{-1}(K_S)), where H_{I1} denotes a dual Squeeze-Excitation Network (SENet) (Chen et al., 2021) in I-Net_1 and F^{-1} is the IFT operator.
Then, I_1 is input to the K-Net_1 module after forward projection using the Fourier transform (FT), generating the output K_1. The IFT of K_1 is then input to I-Net_2 of the 2nd iteration. Meanwhile, I_1 is also added to I-Net_2 through a CIR connection to produce the output I_2 = H_{I2}(F^{-1}(K_1) + I_1). Thus, the image-domain features of the 1st iteration are retained and transmitted to the next iteration to better incorporate the image features. Similarly, K_1 is added to K-Net_2 to retain the k-space features. The output k-space of the i-th (i ≥ 2) iteration can be formulated as:

K_i = D(H_{Ki}(D(F(I_i) + K_{i-1}, K_S)), K_S),   (1)

where D is a data consistency module, H_{Ki} is the SENet in K-Net_i, and F is the FT operator. Then, the predicted K_N is reconstructed into the final MRI image R_out as the output.

Figure 2: Visualizations of the reconstructed MRI images using predicted k-space (ground truth, zero-filling, UNet, ResNet, ResNet-Image, DD-ResNet, DD-SENet, DD-SENet (2 Iter), and DD-CISENet (2 Iter)). Difference maps are placed at the bottom for comparison.
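To make the recursion in Eq. (1) concrete, below is a minimal NumPy sketch of one cross-iteration update. It assumes a boolean sampling mask marking the acquired k-space locations, and H_I and H_K are stand-ins for the trained SENet modules; this is an illustration of the update rule, not the paper's implementation.

```python
# One DD-CISENet-style iteration (illustrative).
import numpy as np

def data_consistency(k_pred, k_sparse, mask):
    # D(.): keep the acquired measurements, use predictions elsewhere
    return np.where(mask, k_sparse, k_pred)

def cisenet_iteration(I_prev, K_prev, k_sparse, mask, H_I, H_K):
    # image branch with the CIR connection from the previous iteration
    I_i = H_I(np.fft.ifft2(K_prev) + I_prev)
    # k-space branch: FT of the image plus the CIR k-space residual,
    # with data consistency applied before and after the network
    k_in = data_consistency(np.fft.fft2(I_i) + K_prev, k_sparse, mask)
    K_i = data_consistency(H_K(k_in), k_sparse, mask)
    return I_i, K_i
```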
Thus, theproposed DD-CISENet demonstrated state-of-the-art performance in MRI sparse recon-struction, superior to existing image-domain, k-space, and dual-domain methods.Table 1: Quantification of the generated k-space data and reconstructed images.MethodsGenerated K-space Data Reconstructed MRI ImagesNMSE% SSIM NMSE% SSIM P-value†Zero-Filling 19.27±6.05 0 .075±0.001 13.65±4.21 0 .984±0.005 –UNet 10.32±2.25 0 .132±0.027 6.97±1.41 0 .991±0.003 <0.001ResNet 8.06±2.14 0 .151±0.026 6.12±1.66 0 .991±0.002 <0.001ResNet-Image – – 6.03±1.31 0 .990±0.001 0 .202DD-ResNet 7.50±2.25 0 .183±0.036 4.05±0.88 0 .992±0.001 <0.001DD-SENet 7.08±2.61 0 .186±0.021 3.52±0.88 0 .995±0.001 <0.001DD-SENet (2 Iter) 6.85±2.26 0 .207±0.034 2.49±0.66 0 .997±0.001 <0.001DD-CISENet (2 Iter) 6.54±2.56 0 .221±0.034 2.28±0.57 0 .998±0.001 <0.001†P-value of paired t-test on NMSE of Images between the current and previous group.AcknowledgmentsThis work was funded by Siemens Medical Solutions USA, Inc.ReferencesXiongchao Chen, Bo Zhou, Luyao Shi, Hui Liu, Yulei Pang, Rui Wang, Edward J Miller,Albert J Sinusas, and Chi Liu. Ct-free attenuation correction for dedicated cardiacspect using a 3d dual squeeze-and-excitation residual dense network. Journal of NuclearCardiology , pages 1–16, 2021.Taejoon Eo, Yohan Jun, Taeseong Kim, Jinseong Jang, Ho-Joon Lee, and Dosik Hwang.Kiki-net: cross-domain convolutional neural networks for reconstructing undersampledmagnetic resonance images. Magnetic resonance in medicine , 80(5):2188–2201, 2018.Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew JMuckley, Aaron Defazio, Ruben Stern, Patricia Johnson, and Mary Bruno. fastmri: Anopen dataset and benchmarks for accelerated mri. arXiv preprint arXiv:1811.08839 , 2018.3 |
tQvYo-DMrO | Medical Imaging with Deep Learning 1–4, 2023

Improving Zero-Shot Detection of Low Prevalence Chest Pathologies using Domain Pre-trained Language Models

Aakash Mishra*1 amishra@college.harvard.edu
Rajat Mittal*1 rajatmittal@college.harvard.edu
Christy Jestin1 christyjestin@college.harvard.edu
Kostas Tingos1 kostastingos@college.harvard.edu
Pranav Rajpurkar1 pranav_rajpurkar@hms.harvard.edu
1 Harvard University
* Contributed equally

Abstract
Recent advances in zero-shot learning have enabled the use of paired image-text data to replace structured labels, removing the need for expert-annotated datasets. Models such as the CLIP-based CheXzero utilize these advancements in the domain of chest X-ray interpretation. We hypothesize that domain pre-trained models such as CXR-BERT, BlueBERT, and ClinicalBERT offer the potential to improve the performance of CLIP-like models with specific domain knowledge by replacing the BERT weights, at the cost of breaking the original model's alignment. We evaluate the performance of zero-shot classification models with domain-specific pre-training for detecting low-prevalence pathologies. Even though replacing the weights of the original CLIP-BERT degrades model performance on commonly found pathologies, we show that pre-trained text towers perform exceptionally better on low-prevalence diseases. This motivates future ensemble models with a combination of differently trained language models for maximal performance.
Keywords: Zero-Shot, Fine-Tuning, Pre-Training, Multi-Modal, Self-Supervised Learning

1. Introduction
Contrastive learning methods have helped create deep learning models with robust zero-shot performance in the field of medical imaging. Innovations in deep learning have enabled the automation of tasks such as medical image interpretation (Litjens et al., 2017; Rajpurkar et al., 2017). Many of these methods, however, rely on the existence of a large dataset of annotated examples. These annotations often take a significant amount of expert time to assign, and the resulting approaches are limited to predicting only diseases that are explicitly annotated (Smit et al., 2020). Recently, the emergence of contrastive learning approaches for multimodal zero-shot learning has allowed paired image-text data to replace structured labels and still achieve competitive performance on many downstream tasks (Radford et al., 2021). Some models have even achieved expert-level, zero-shot chest X-ray pathology classification performance (Tiu et al., 2022).
While many models such as CheXzero are initialized with weights derived from natural image-caption training, there are a number of open-source language models that specialize in specific medical domains, such as CXR-BERT (Boecking et al., 2022), BlueBERT (Peng et al., 2019), and ClinicalBERT (Alsentzer et al., 2019). In particular, domain pre-trained models see a larger medical corpus than is available in chest X-ray reports and, as a result, form richer embeddings for low-prevalence pathologies (pathologies mentioned infrequently in the image-report dataset). The trade-off is that using a pre-trained text tower breaks the alignment between the vision and language modalities. We hypothesize that these domain pre-trained models, having seen a larger medical corpus, offer the potential to improve the performance of CLIP-like models on low-prevalence pathologies.
In this work, we evaluate the performance of zero-shot classification models with domain-specific pre-training and find that they perform especially well for detecting low-prevalence pathologies.

2. Methods
We employ contrastive learning with image-text pairs to achieve zero-shot multi-label classification, utilizing two embedding models: a vision embedder and a text embedder. The embedding models are trained via a contrastive loss, and the similarity between their outputs is assessed. Instead of initializing both towers with CLIP weights like the baseline model, CheXzero (Tiu et al., 2022), we keep the existing vision tower from the generalized pre-trained model and replace the CLIP text tower with a domain-specific language model. We employ three domain pre-trained language models: CXR-BERT, BlueBERT, and ClinicalBERT, all of which were pre-trained on PubMed and MIMIC-III (Peng et al., 2019; Alsentzer et al., 2019; Boecking et al., 2022). CXR-BERT was further pre-trained on MIMIC-CXR to specialize in the chest X-ray domain. Our text stack includes a text projection to complete our pre-trained models and produce a final embedding size of 128, since the embedding sizes of the three language models were not originally standardized.
We then trained each model on the contrastive alignment task for 5 epochs using MIMIC-CXR data and proceeded with a zero-shot evaluation on the chest X-ray pathology classification task on the VinDr-CXR dataset, using negative (e.g., "no pneumonia") and positive prompts (e.g., "pneumonia") to calculate similarities between the image and the prompts as logits for the probability of occurrence. We follow the same zero-shot inference strategy as Tiu et al. (2022). Furthermore, we trained a single CheXzero baseline model from CLIP weights to provide a better comparison for our single-model experiments, since the published model is an ensemble. All models were trained for 5 epochs with a batch size of 32, a learning rate of 5e-6, and an SGD optimizer with a momentum of 0.9.

3. Results
Both CLIP-vision + domain pre-trained language models (ClinicalBERT and CXR-BERT) exceed the AUC performance of the baseline in every category of low-prevalence pathology from the VinDr-CXR dataset, which contains labels for many rarely mentioned diseases, including lung tumor, aortic enlargement, enlarged pulmonary artery, and clavicle fracture, all of which occur as strings fewer than 100 times in the more than 300 thousand report impressions in MIMIC-CXR. The opposite trend holds for high-prevalence diseases, where the baseline outperforms the majority of mixed CLIP-vision + domain pre-trained text models. As reflected in Table 1, we see that for rare diseases, where the prevalence is below 1% and 100 total mentions, the best model is the CXR-BERT hybrid, with an improvement of 0.15 over the CheXzero baseline. The second-best model is ClinicalBERT, with an AUC improvement of 0.08 on average across the low-prevalence disease mentions. Both ClinicalBERT and CXR-BERT outperform our baseline in each category.
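For reference, the prompt-based zero-shot scoring described in Section 2 can be sketched as follows; `encode_image` and `encode_text` are placeholder callables standing in for the trained towers, not CheXzero's actual API.

```python
# Two-way softmax over positive/negative prompt similarities (illustrative).
import numpy as np

def zero_shot_probability(encode_image, encode_text, image, pathology):
    pos = encode_text(pathology)              # e.g. "pneumonia"
    neg = encode_text(f"no {pathology}")      # e.g. "no pneumonia"
    img = encode_image(image)

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # the softmax of the two similarities gives the probability of occurrence
    e_pos, e_neg = np.exp(cos(img, pos)), np.exp(cos(img, neg))
    return e_pos / (e_pos + e_neg)
```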
For high-prevalence pathologies, we see that the CheXzero baseline tends to outperform the hybrid CXR-BERT AUC by an average of 0.05 and the ClinicalBERT AUC performance by 0.12.

Table 1: Pre-trained text tower zero-shot bootstrapped AUC performance on low- and high-prevalence pathologies in the MIMIC-CXR training set, evaluated on the VinDr-CXR dataset. Both ClinicalBERT and CXR-BERT outperform the baseline on the majority of low-prevalence pathologies; the same cannot be said for common pathologies. Occurrences are defined as the number of MIMIC-CXR reports with impressions that contain the name of the pathology. Format: AUC value (95% confidence interval).

Configuration | Lung tumor | Aortic enlargement | Enlarged PA | Clavicle fracture
Occurrences | 1 (≤1%) | 6 (≤1%) | 38 (≤1%) | 69 (≤1%)
Baseline | 0.687 (0.633, 0.725) | 0.580 (0.547, 0.619) | 0.713 (0.557, 0.838) | 0.664 (0.401, 0.848)
BlueBERT | 0.654 (0.605, 0.699) | 0.640 (0.609, 0.670) | 0.667 (0.537, 0.805) | 0.585 (0.264, 0.809)
ClinicalBERT | 0.713 (0.665, 0.753) | 0.596 (0.567, 0.628) | 0.757 (0.580, 0.891) | 0.762 (0.445, 0.978)
CXR-BERT | 0.714 (0.666, 0.751) | 0.760 (0.737, 0.782) | 0.854 (0.777, 0.935) | 0.853 (0.839, 0.872)

Configuration | Pleural effusion | Pneumonia | Atelectasis | Pneumothorax
Occurrences | 78644 (18.6%) | 65204 (15.7%) | 56377 (13.6%) | 45994 (11.1%)
Baseline | 0.864 (0.823, 0.897) | 0.849 (0.823, 0.874) | 0.770 (0.727, 0.814) | 0.811 (0.762, 0.865)
BlueBERT | 0.857 (0.821, 0.882) | 0.540 (0.506, 0.580) | 0.732 (0.680, 0.779) | 0.689 (0.591, 0.783)
ClinicalBERT | 0.773 (0.732, 0.805) | 0.821 (0.799, 0.845) | 0.639 (0.589, 0.699) | 0.590 (0.490, 0.687)
CXR-BERT | 0.895 (0.871, 0.918) | 0.819 (0.801, 0.841) | 0.675 (0.624, 0.714) | 0.683 (0.602, 0.782)

4. Discussion
Domain pre-trained text towers have extremely strong performance on rarely mentioned diseases. We find that pre-trained text towers perform extremely well across categories with little mention in the reports of the MIMIC-CXR alignment training dataset. This motivates the need for pre-trained text towers depending on the use case of a zero-shot classifier. When diseases have little-to-no mention in the alignment training dataset, the baseline is automatically at a disadvantage. At zero-shot classification time, in the ideal scenario, the model will produce a relevant image embedding containing information about the new pathology, and the text tower will embed the new pathology to an aligned encoding. However, if the text tower has rarely or never been exposed to the new pathology, as is the case with CheXzero, which was initialized with CLIP weights, the likelihood of the text embedding being meaningful in reference to the image embedding is small. Meanwhile, pre-trained text towers that have been trained on large datasets like MIMIC-III and PubMed abstracts have encountered impressions of these low-prevalence pathologies in their masked-language-modeling pre-training task. The surprising result is that even after alignment training where the pathology is rarely mentioned, the text embedding model is still able to perform well on these rarer categories.
Our research suggests that a zero-shot classifier designed to classify a broad range of both common and low-prevalence chest X-ray pathologies will likely need some form of a domain pre-trained text tower. One option is to include pre-training in the text encoder in a way that does not hurt performance on common diseases, a task of future research.
The other option is a weighted ensemble that has both a domain pre-trained language model and a general BERT model.

References
Emily Alsentzer, John R. Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. Publicly available clinical BERT embeddings. CoRR, abs/1904.03323, 2019. URL http://arxiv.org/abs/1904.03323.
Benedikt Boecking, Naoto Usuyama, Shruthi Bannur, Daniel C. Castro, Anton Schwaighofer, Stephanie Hyland, Maria Wetscherek, Tristan Naumann, Aditya Nori, Javier Alvarez-Valle, Hoifung Poon, and Ozan Oktay. Making the most of text semantics to improve biomedical vision-language processing, 2022. URL https://arxiv.org/abs/2204.09817.
Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88, 2017. ISSN 1361-8415. doi: https://doi.org/10.1016/j.media.2017.07.005. URL https://www.sciencedirect.com/science/article/pii/S1361841517301135.
Yifan Peng, Shankai Yan, and Zhiyong Lu. Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. CoRR, abs/1906.05474, 2019. URL http://arxiv.org/abs/1906.05474.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020, 2021. URL https://arxiv.org/abs/2103.00020.
Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning, 2017. URL https://arxiv.org/abs/1711.05225.
Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Y. Ng, and Matthew P. Lungren. CheXbert: Combining automatic labelers and expert annotations for accurate radiology report labeling using BERT. CoRR, abs/2004.09167, 2020. URL https://arxiv.org/abs/2004.09167.
Ekin Tiu, Ellie Talius, Pujan Patel, Curtis Langlotz, Andrew Ng, and Pranav Rajpurkar. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nature Biomedical Engineering, pages 1–8, 09 2022. doi: 10.1038/s41551-022-00936-9.
 |
_VKOZO2LHF0 | Medical Imaging with Deep Learning – Accepted for publication at MIDL 2023 Short Paper – MIDL 2023

Uncertainty-based Quality Controlled T1 Mapping and ECV Analysis using Bayesian Vision Transformer

Tewodros Weldebirhan Arega1 tewdrosw[at]gmail.com
1 ImViA Laboratory, Université de Bourgogne, Dijon, France
Stéphanie Bricq1
François Legrand1
Alexis Jacquier2
2 Aix-Marseille Univ, CNRS, CRMBM, 13005 Marseille, France
Alain Lalande1,3
3 Medical Imaging department, University Hospital of Dijon, Dijon, France
Fabrice Meriaudeau1

Abstract
Cardiac MR segmentation using deep learning has advanced significantly. However, inaccurate segmentation results can lead to flawed clinical decisions in downstream tasks. Hence, it is essential to identify failed segmentations through quality control (QC) methods before proceeding with further analysis. This study proposes a fully automatic uncertainty-based QC framework for T1 mapping and extracellular volume (ECV) analysis, consisting of three parts. Firstly, a Bayesian Swin transformer-based U-Net was employed to segment cardiac structures from a native and post-contrast T1 mapping dataset (n = 295). Secondly, our novel uncertainty-based QC, which utilizes image-level uncertainty features, was used to determine the quality of the segmentation outputs. It achieved a mean area under the ROC curve (AUC) of 0.927 on binary classification and a mean absolute error (MAE) of 0.021 on Dice score regression. The proposed QC significantly outperformed other state-of-the-art uncertainty-based QC methods, especially in predicting segmentation quality from poor-performing models, highlighting its robustness in detecting failed segmentations. Finally, T1 mapping and ECV values were automatically computed after the inaccurate segmentation results were rejected by the QC method, characterizing myocardial tissues of healthy and cardiac pathological cases. The myocardial T1 and ECV values computed from automatic and manual segmentations show an excellent agreement, yielding Pearson coefficients of 0.990 and 0.975, respectively. The study results indicate that these automatically computed values can accurately characterize myocardial tissues.
Keywords: Cardiac MRI Segmentation, Native T1 mapping, Uncertainty, Quality control

1. Introduction
Cardiac native T1 mapping and extracellular volume (ECV) can be used to quantify diffuse myocardial fibrosis and characterize myocardial tissues in patients with cardiovascular diseases (CVDs). To analyze these values, regions of interest (ROIs) are typically manually drawn on T1 images of the left ventricle's blood pool, septum, and free wall. However, manual segmentation can be tedious and subject to variability among different observers.

(Source code will be available: https://github.com/tewodrosweldebirhan/uncertainty-quality-control-T1-mapping)

Figure 1: Proposed method for automatic quality controlled cardiac MR image segmentation. DiceWithinSamples: Dice agreement within MC-Dropout samples; HDWithinSamples: HD agreement within MC-Dropout samples.

Deep learning-based segmentation methods have been proposed to automate this process, but they can produce inaccurate results, leading to incorrect clinical decisions. As a solution, automatic quality control (QC) methods for segmentation have been proposed.
Some studies have tried to predict segmentation quality using hand-crafted features of the images and segmentation maps (Zhang et al., 2006; Kohlberger et al., 2012), while others proposed a quality control method that employs a CNN-based QC to determine the quality of the segmentation output by using the image with its corresponding segmentation and uncertainty maps as input to the classifier (Robinson et al., 2018; Chen et al., 2020; Williams et al., 2021). However, directly using the image, segmentation, and uncertainty maps does not correlate well with segmentation quality, as shown in this paper.

2. Method and Experiments
The proposed pipeline consists of three parts. The first one involves segmenting the left ventricular and right ventricular blood pools and the left ventricular myocardium from native and post-contrast T1 mapping images using a Bayesian Swin transformer-based U-Net. In the second part, to detect poorly segmented images from the model, we propose an automated QC method that utilizes image-level uncertainty metrics generated by the Bayesian model to estimate the quality of the segmentation result. The final part is focused on the automatic analysis of native myocardial T1 mapping and ECV values of the images that were categorized as good quality by the proposed QC (Arega et al., 2023).
A Swin transformer-based U-Net with dropout is trained to segment the heart structures from native and post-contrast T1 mapping CMR images. The model is sampled N times during testing to obtain N Monte-Carlo segmentation samples, as shown in Fig. 1. Uncertainty metrics, such as sample variance and predictive entropy, are derived from these Monte-Carlo samples. In addition, image-level metrics like Dice agreement within MC samples (DiceWithinSamples) and HD agreement within MC samples (HDWithinSamples) are computed. DiceWithinSamples is the average Dice score between the mean predicted segmentation and the individual N MC prediction samples.
For the QC method, we proposed a simple uncertainty-based QC that leverages image-level uncertainty metrics such as DiceWithinSamples, HDWithinSamples, mean sample variance, and mean predictive entropy to predict the quality of the segmentation, as shown in Fig. 1. These image-level uncertainty features are fed to a random forest (RF) classifier/regressor, which is trained to classify the quality of the segmentation result or to directly regress the Dice score.
The proposed QC method is compared to different state-of-the-art QC methods, which are based on various inputs: the segmentation map only (Seg QC) (Chen et al., 2020), the image-segmentation pair (Image-Seg QC) (Robinson et al., 2018; Huang et al., 2016), and the segmentation-uncertainty map pair (Seg-Uncert QC) (Williams et al., 2021), whereas the Image-Seg-Uncert QC method (Devries and Taylor, 2018; Chen et al., 2020) uses all three inputs together to determine the quality of the segmentation result. As can be seen from Table 1, the proposed QC method achieved the best results in Dice regression in terms of MAE and Pearson correlation coefficient (P-CC) compared to the other QC methods. From the results, we showed that training a classifier on simple uncertainty-derived inputs can determine segmentation quality better than approaches that directly use the image, segmentation, and uncertainty maps.
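For illustration, a minimal sketch of the image-level uncertainty features and the regression step described above follows. It assumes `mc_samples` is a list of N binary segmentation masks drawn with MC dropout; names are illustrative, and the HD and entropy features are omitted for brevity.

```python
# DiceWithinSamples feature + random-forest Dice regression (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dice(a, b, eps=1e-8):
    return (2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps)

def image_level_features(mc_samples):
    mean_seg = np.mean(mc_samples, axis=0) >= 0.5        # mean MC prediction
    # average Dice between the mean prediction and each MC sample
    dice_within = np.mean([dice(mean_seg, s) for s in mc_samples])
    sample_var = np.mean(np.var(mc_samples, axis=0))     # mean per-pixel variance
    return [dice_within, sample_var]

# X: one feature row per case; y: ground-truth Dice of the mean prediction
# X = np.array([image_level_features(s) for s in all_cases_mc_samples])
# qc = RandomForestRegressor(n_estimators=100).fit(X, y)
# predicted_dice = qc.predict(X_new)
```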
The proposed method is also computationally more efficient.

Table 1: Dice score regression results of different QC methods in terms of mean absolute error (MAE) and Pearson correlation coefficient (P-CC) between the predicted Dice and the ground-truth Dice. Bold results are the best.

Model | Dataset type | QC method | MAE | P-CC
Swin-based U-Net | Native T1 | Seg | 0.02006 (0.02684) | 0.70
Swin-based U-Net | Native T1 | Seg-Uncert | 0.01959 (0.02815) | 0.67
Swin-based U-Net | Native T1 | Image-Seg | 0.01980 (0.02726) | 0.72
Swin-based U-Net | Native T1 | Image-Seg-Uncert | 0.01953 (0.02421) | 0.74
Swin-based U-Net | Native T1 | Proposed | 0.01731 (0.02277) | 0.82
Swin-based U-Net | Post-contrast T1 | Seg | 0.02854 (0.04890) | 0.67
Swin-based U-Net | Post-contrast T1 | Seg-Uncert | 0.02790 (0.04729) | 0.69
Swin-based U-Net | Post-contrast T1 | Image-Seg | 0.03001 (0.04708) | 0.67
Swin-based U-Net | Post-contrast T1 | Image-Seg-Uncert | 0.02940 (0.05158) | 0.64
Swin-based U-Net | Post-contrast T1 | Proposed | 0.02634 (0.03698) | 0.82

3. Conclusion
In summary, the proposed fully automatic uncertainty-based quality control framework for T1 mapping and ECV analysis has the potential to improve the accuracy and reliability of cardiac MR segmentation and, subsequently, the clinical decision-making process. The robustness of the method in detecting failed segmentations and the excellent agreement between automatic and manual segmentations suggest that it can be a valuable tool in characterizing myocardial tissues of healthy and cardiac pathological cases.

4. Disclaimer
This paper is a shortened version of the work published in (Arega et al., 2023).

Acknowledgments
This work was supported by the French National Research Agency (ANR), with reference ANR-19-CE45-0001-01-ACCECIT.

References
Tewodros Weldebirhan Arega, Stéphanie Bricq, François Legrand, Alexis Jacquier, Alain Lalande, and Fabrice Mériaudeau. Automatic uncertainty-based quality controlled T1 mapping and ECV analysis from native and post-contrast cardiac T1 mapping images using Bayesian vision transformer. Medical Image Analysis, 86:102773, 2023.
Xinyuan Chen, Kuo Men, Bo Chen, Yu Tang, Tao Zhang, Shulian Wang, Yexiong Li, and Jianrong Dai. CNN-based quality assurance for automatic segmentation of breast cancer in radiotherapy. Frontiers in Oncology, 10, 2020.
Terrance Devries and Graham W. Taylor. Leveraging uncertainty estimates for predicting segmentation quality. ArXiv, abs/1807.00502, 2018.
Chao Huang, Q. Wu, and Fanman Meng. QualityNet: Segmentation quality evaluation with deep convolutional networks. 2016 Visual Communications and Image Processing (VCIP), pages 1–4, 2016.
Timo Kohlberger, Vivek Kumar Singh, Christopher V. Alvino, Claus Bahlmann, and Leo J. Grady. Evaluating segmentation error without ground truth. Medical image computing and computer-assisted intervention: MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention, 15 Pt 1:528–36, 2012.
Robert Robinson, Ozan Oktay, Wenjia Bai, Vanya V Valindria, Mihir M Sanghvi, Nay Aung, José Miguel Paiva, Filip Zemrak, Kenneth Fung, Elena Lukaschuk, et al. Subject-level prediction of segmentation failure using real-time convolutional neural nets. 2018.
Elena Williams, Sebastian Niehaus, Janis Reinelt, Alberto Merola, Paul Glad Mihai, Ingo Roeder, Nico Scherf, and Maria del C Valdés Hernández. Quality control for more reliable integration of deep learning-based image segmentation into medical workflows. arXiv preprint arXiv:2112.03277, 2021.
Hui Zhang, Sharath R. Cholleti, Sally A. Goldman, and Jason E. Fritts. Meta-evaluation of image segmentation using machine learning. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 1:1138–1145, 2006.
 |
Z376GMHarUB | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Automatic renal perfusion estimation on postoperative PCASL MRI based on deep learning image analysis and segmentation

Anne Oyarzun-Domeño1,2 anne.oyarzun@unavarra.es
1 Electrical Electronics and Communications Engineering, Public University of Navarre, 31006, Pamplona, Spain.
2 IdiSNA, Health Research Institute of Navarra, 31008, Spain.
Izaskun Cia1 izaskun.cia@unavarra.es
Rebeca Echeverria-Chasco2,3 recheverriac@unav.es
3 Department of Radiology, Clínica Universidad de Navarra, 31008, Pamplona, Spain.
María A. Fernández-Seara2,3 mfseara@unav.es
Paloma L. Martin-Moreno2,4 plmartin@unav.es
4 Department of Nephrology, Clínica Universidad de Navarra, 31008, Pamplona, Spain.
Nuria Garcia-Fernandez2,4 nrgarcia@unav.es
Gorka Bastarrika2,3 bastarrika@unav.es
Javier Navallas1,2 javier.navallas@unavarra.es
Arantxa Villanueva1,2,5 avilla@unavarra.es
5 Institute of Smart Cities (ISC), Health Research Institute of Navarra, 31006, Pamplona, Spain.

Editors: Under Review for MIDL 2023

Abstract
Non-invasive arterial spin labeling is a magnetic resonance imaging technique that can be used for kidney transplant evaluation and perfusion estimation. This work proposes an automatic workflow for renal segmentation and perfusion estimation based on a deep learning approach and image analysis, for the postoperative evaluation of the allograft. Our method outperforms state-of-the-art results in terms of multiclass segmentation on low spatial resolution and low signal-to-noise-ratio data. Similarity coefficients above 90% are achieved for kidney, cortex, and medulla segmentation results, and perfusion values within the acceptable ranges are obtained.
Keywords: Renal perfusion, PCASL MRI, segmentation.

1. Introduction
Renal transplant is the treatment of choice in patients suffering from chronic kidney disease, characterized by a progressive and irreversible loss of kidney function (Jiang and Lerman, 2019). Renal blood flow (RBF) has great value for clinicians, as it enables the identification of perfusion impairment as an emerging biomarker of transplanted renal dysfunction. The overall RBF is determined by the vasoconstriction of the renal arterial tree and changes in the intrarenal vascular resistance (Field et al., 2010). Pseudo-continuous arterial spin labeling (PCASL) is a non-invasive magnetic resonance imaging (MRI) technique that allows the characterization of RBF using magnetically labeled arterial blood water spins as an endogenous tracer, with a combination of a train of radiofrequency pulses and slice-selective gradients (Nery et al., 2018). It is considered an appropriate imaging technique for patients with renal dysfunction, for whom the administration of contrast agents could be contraindicated (Odudu et al., 2018). RBF estimation derived from PCASL entails extra segmentation work, which is tedious and prone to error. To date, applications of machine and deep learning models in renal MRI are scarce compared to those found for computerized tomography (CT) (Zhang et al., 2020) and high spatial resolution MRI (Klepaczko et al., 2021). We propose a fully automated pipeline that first performs a preliminary whole-kidney segmentation with Mask R-CNN, secondly classifies the pixels within the kidney region into cortex and medulla classes, and finally estimates RBF for quantitative assessment (see Figure 1).
2. Experimental setup and results

Figure 1: Automatic renal perfusion estimation pipeline.

The dataset used consists of PCASL and T1-weighted (T1-w) images from 16 transplanted patients, with an acquisition matrix of 96 x 96 and 3 slices. Each dataset contains an M0 reference image, 25 control-label PCASL pairs, and 14 T1-w images. Binary masks encompassing the kidney, the cortex, and the medulla are used in the training and testing steps. Data augmentation and intensity normalization are used in the training process.

Segmentation of the kidney and renal compartments

We implement Mask R-CNN for the segmentation of the kidney on PCASL images. It is a two-stage convolutional neural network (CNN), with a Feature Pyramid Network (FPN) and a ResNet101 backbone, that generates bounding boxes and segmentation masks for each instance of an object in the image (He et al., 2018). The model is trained for 150 epochs, using a supervised gradient descent optimizer, a learning rate of 10^-4, and weights pre-trained on MS COCO (Abdulla, 2017). We use Python 3.8 and TensorFlow on a GPU NVIDIA GeForce RTX 3090. Training takes approximately 120 min. We compare the performance of Mask R-CNN against the U-Net (Ronneberger et al., 2015) and the Supervised Descent Method (SDM) (Xiong and De la Torre, 2013). The proposed method achieves a Dice similarity coefficient (DSC) score (mean ± standard deviation, SD) of 93.90 ± 2.00%, whereas the U-Net and SDM achieve DSC values of 87.87 ± 1.30% and 84.40 ± 11.89%, respectively.

The proposed multiclass segmentation method is based on temporal-information thresholding of the T1-w images. Due to the lack of labeled cortex and medulla tissues to train a network, we use simple image processing tools for tissue differentiation. Based on ground truth (GT) cortex and medulla annotations, we construct time-intensity curves for each tissue along the inversion times (TIs) and analyse the temporal distribution of the null points for each pixel. We note that cortical tissue attains its null point before the medulla does. Pixels within the kidney masks resulting from Mask R-CNN are classified as cortex if their null point is found at 5 <= k <= 8 TIs and as medulla if found at 10 <= k <= 13. Unclassified pixels are designated the uncertain class. In a second stage, pixels are reclassified according to GT T1 values. Instance segmentation is evaluated using a set of standard metrics: DSC, precision (PC), recall (RC), and F-measure (FM) with a beta value of 2. In order to counteract the class imbalance between cortex and medulla, metrics are weighted according to the number of pixels in each class. Thus, the attained RC is 89.66 ± 9.99%, PC is 91.85 ± 4.89%, FM is 89.47 ± 10.61%, and DSC is 89.70 ± 10.23%.

RBF estimation

Mean cortical and medullary signals are calculated over the respective tissues of the subtracted control-label pairs. RBF maps are computed using the single compartment model (Nery et al., 2020). Pairwise comparisons between predicted and GT perfusion values show a positive association: as the GT perfusion value increases, so does the predicted value. The obtained cortical perfusion values for the proposed and GT segmentations are 153 ± 87 mL/min/100 g and 162 ± 70 mL/min/100 g, respectively, and the medullary perfusion values are 69 ± 74 mL/min/100 g and 67 ± 62 mL/min/100 g, respectively. The cortical and medullary perfusion value discrepancies are 6.78% and 18.31%, respectively.
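As an illustration of the null-point classification rule above, here is a minimal Python sketch; it assumes magnitude T1-w images stacked along the inversion-time axis and 0-indexed TI positions, and it is not the authors' implementation.

    import numpy as np

    def classify_kidney_pixels(ti_stack, kidney_mask):
        # ti_stack: (n_TI, H, W) magnitude images across inversion times;
        # kidney_mask: boolean (H, W) mask predicted by Mask R-CNN.
        # Returns a label map: 0 background, 1 cortex, 2 medulla, 3 uncertain.
        null_idx = np.argmin(np.abs(ti_stack), axis=0)  # TI index of each pixel's null point
        labels = np.zeros(kidney_mask.shape, dtype=np.uint8)
        cortex = kidney_mask & (null_idx >= 5) & (null_idx <= 8)
        medulla = kidney_mask & (null_idx >= 10) & (null_idx <= 13)
        labels[cortex] = 1
        labels[medulla] = 2
        labels[kidney_mask & ~cortex & ~medulla] = 3  # uncertain class
        return labels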
3. Discussion

The proposed approach leads to a reliable renal perfusion estimation. The segmentation based on Mask R-CNN presents outstanding results, obtaining averaged DSC values above 93% and outperforming the current state of the art. The segmentation performance highly depends on the intensity range of the images; even though intensity rescaling is applied, the heterogeneity of image intensities should be considered when testing on new data. As expected, the results obtained for the cortex are better than those for the medulla compartment. The segmentation of medullary tissue shows higher dissimilarities between manually drawn labels and automatically obtained ones. This discrepancy is mainly caused by the mislabeling of the medulla region, which tends to be less precise than the segmentation of the cortex due to the low differentiation of interfaces and partial volume effects. The proposed method also generates an uncertain-class mask in areas where the differentiation between cortex and medulla pixels is not clear, which could be processed in further steps to complete the cortical and medullary masks and, in turn, refine the estimation of perfusion values. Regarding the estimation of renal perfusion, our work demonstrates that multiclass segmentations do have an effect on cortical and medullary RBF estimation.

Acknowledgments

Project PC181-182 RM-RENAL, supported by the Department of University, Innovation and Digital Transformation (Government of Navarra). The author would also like to acknowledge the Department of University, Innovation and Digital Transformation for the predoctoral grant number 0011-0537-2021-000050.

References

Waleed Abdulla. Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow. https://github.com/matterport/Mask_RCNN, 2017.

Michael J. Field, David C. Harris, and Carol A. Pollock. Glomerular filtration and acute kidney injury. In Michael J. Field, David C. Harris, and Carol A. Pollock, editors, The Renal System (Second Edition), pages 57–67. Churchill Livingstone, January 2010. ISBN 978-0-7020-3371-1. doi: 10.1016/B978-0-7020-3371-1.00005-1.

Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN, January 2018.

Kai Jiang and Lilach O. Lerman. Prediction of chronic kidney disease progression by magnetic resonance imaging: Where are we? American Journal of Nephrology, 49(2):111–113, 2019. ISSN 0250-8095. doi: 10.1159/000496160.

Artur Klepaczko, Eli Eikefjord, and Arvid Lundervold. Healthy kidney segmentation in the DCE-MR images using a convolutional neural network and temporal signal characteristics. Sensors (Basel, Switzerland), 21(20):6714, October 2021. ISSN 1424-8220. doi: 10.3390/s21206714.

Fabio Nery, Isky Gordon, and David L. Thomas. Non-invasive renal perfusion imaging using arterial spin labeling MRI: Challenges and opportunities. Diagnostics, 8(1):2, March 2018. ISSN 2075-4418. doi: 10.3390/diagnostics8010002.

Fabio Nery, Charlotte E. Buchanan, Anita A. Harteveld, Aghogho Odudu, Octavia Bane, Eleanor F. Cox, Katja Derlin, H. Michael Gach, Xavier Golay, Marcel Gutberlet, Christoffer Laustsen, Alexandra Ljimani, Ananth J. Madhuranthakam, Ivan Pedrosa, Pottumarthi V. Prasad, Philip M. Robson, Kanishka Sharma, Steven Sourbron, Manuel Taso, David L. Thomas, Danny J. J. Wang, Jeff L. Zhang, David C. Alsop, Sean B. Fain, Susan T. Francis, and María A. Fernández-Seara. Consensus-based technical recommendations for clinical translation of renal ASL MRI. Magma (New York, N.Y.), 33(1):141–161, February 2020. ISSN 1352-8661.
doi: 10.1007/s10334-019-00800-z.

Aghogho Odudu, Fabio Nery, Anita A. Harteveld, Roger G. Evans, Douglas Pendse, Charlotte E. Buchanan, Susan T. Francis, and María A. Fernández-Seara. Arterial spin labelling MRI to measure renal perfusion: A systematic review and statement paper. Nephrology, Dialysis, Transplantation, 33(suppl 2):ii15–ii21, September 2018. ISSN 1460-2385. doi: 10.1093/ndt/gfy180.

Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation, May 2015.

Xuehan Xiong and Fernando De la Torre. Supervised descent method and its applications to face alignment. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 532–539, Portland, OR, USA, June 2013. IEEE. ISBN 978-0-7695-4989-7. doi: 10.1109/CVPR.2013.75.

Yao Zhang, Yixin Wang, Feng Hou, Jiawei Yang, Guangwei Xiong, Jiang Tian, and Cheng Zhong. Cascaded volumetric convolutional network for kidney tumor segmentation from CT volumes, May 2020. |
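For completeness, a sketch of single-compartment RBF quantification in the form commonly used for pCASL (cf. the consensus recommendations of Nery et al., 2020) follows. The default parameter values (blood T1, labeling duration, post-labeling delay, labeling efficiency, partition coefficient) are illustrative assumptions, not the values used in the paper.

    import numpy as np

    def rbf_single_compartment(delta_m, m0, t1_blood=1.65, label_dur=1.5,
                               pld=1.2, alpha=0.85, lam=0.9):
        # Voxel-wise RBF in mL/100 g/min from the mean control-label
        # difference delta_m and the M0 reference image
        # (times in seconds, partition coefficient lam in mL/g).
        num = 6000.0 * lam * delta_m * np.exp(pld / t1_blood)
        den = 2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-label_dur / t1_blood))
        return num / np.maximum(den, 1e-12)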
dPWotG03R-h | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Investigate Sex Dimorphism of Cerebral Myelination Across Lifespan by Leveraging Conditional Variational Autoencoder

Jinghang Li1 jil202@pitt.edu
Linghai Wang1 L.Wang@pitt.edu
Chang-le Chen1 chc348@pitt.edu
Tamer Ibrahim1,2 tibrahim@pitt.edu
Howard Aizenstein1,2 aizen@pitt.edu
Minjie Wu1,2 miw75@pitt.edu

1 Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
2 Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA

Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, J. Li, L. Wang, C.-L. Chen, T. Ibrahim, H. Aizenstein & M. Wu

Abstract
In this work we investigated potential sex differences in white matter aging using a conditional variational autoencoder (cVAE) on myelin content MR images. The cVAE model was trained along with a supervised brain age prediction model, which learns the representation of the myelination aging process within a single end-to-end model architecture. The training was conducted on a normal aging dataset (CamCAN) that included 708 individual MR images. Our brief exploration revealed that women might have slightly less white matter myelination than men do at an older age. Additionally, our brain age prediction model suggested different aging regressions for men and women.

Keywords: Conditional VAE, Brain age prediction, Sex differences, Myelin map

1. Introduction

Many efforts have been made in leveraging generative models such as variational autoencoders (VAEs) in neuroimaging analysis (Zhao et al., 2019). These models are capable of learning the complex distributions of heterogeneous neuroimages in an unsupervised manner. In this work, we investigated the white matter aging differences between men and women by using a conditional variational autoencoder (cVAE) on myelin content MR images, the so-called T1-w/T2-w ratio (Li et al., 2021). In contrast to the traditional VAE model, we incorporated a supervised brain age prediction model in the overall training process to constrain the conditional distribution in the latent space. This unique design can not only provide a brain age prediction model but also learn the representation of the myelination aging process within a single end-to-end model architecture.

2. Method and Data

We used the publicly available CamCAN dataset, which includes 708 (359 female, 349 male) individual T1-w and T2-w MR structural scans acquired at 3 Tesla magnetic field strength. Both T1-w and T2-w MR images are of 1-mm isotropic resolution. T1-w/T2-w ratio images were acquired following the pipeline established in (Li et al., 2021). We then built a 3D cVAE model that encoded input images of size 160x160x160. In our generative model p(x_n, z_n, c_n), we have X = {x^(1), x^(2), ..., x^(n)}, C = {c^(1), c^(2), ..., c^(n)}, and Z = {z^(1), z^(2), ..., z^(n)}, where X are the training images, C are the subjects' chronological ages, and Z are the encoded latent space vectors.

Figure 1: Generated ratio maps from separately trained male and female cVAE models conditioned on age. We observed significant ventricle enlargement across both sexes as well as a distinctive decrease in myelin content with aging. Additionally, females might have slightly worse white matter health than males do at an older age. The regions with less myelin content are circled in the figure.
With the additional brain age predictor, we then have the following loss function:

    L = -D_KL( q(z|x, c) || p(z|c) ) + E_{q(z|x,c)}[ log p(x|z, c) ] + L_MSE        (1)

where L_MSE = Σ_n (ŷ_n − y_n)², with y_n being the chronological age and ŷ_n being the predicted age. Unlike a vanilla VAE, in the cVAE model the latent representation vector z is sampled from a conditioned distribution p(z|c), where c is the one-hot encoded vector for age. To better disentangle the latent vector on both age and sex, we trained one model on male images only and one on female images only. Leveraging the reparameterization trick (Zhao et al., 2019), the cVAE model was trained with a learning rate of 0.0003 for 200 epochs. The model was implemented in PyTorch, and training was carried out on an NVIDIA A100 40GB at the University of Pittsburgh Center for Research Computing.

3. Results and Discussion

In this study, we explored the sex differences between men and women on white matter myelination maps using a normal aging dataset. Figure 1 shows the aging patterns as well as the sex differences in white matter integrity generated by the model after training. Distinctively, ventricle size increases drastically with age. Moreover, we observed a decrease in the myelin sheath with age across both sexes; in particular, women might have slightly less white matter myelination than men do at an older age. Figure 2 shows the brain age prediction results obtained by running the trained male model on both male and female ratio maps. The two fitted regression models suggest different aging patterns for men and women. Specifically, the results indicate that women's ratio images appear older than men's at an older age. Further region-of-interest (ROI) based analyses should be considered to reveal the specific underlying aging distinctions in terms of sex dimorphism. This study provides a brief investigation of white matter myelination across the lifespan and between sexes. The results suggest that generative models such as the conditional VAE can also serve as normative models to quantify the spatial-temporal patterns of brain aging on myelination.

Figure 2: Fitted linear regression models on predicted brain age using both male and female ratio images. The inference was done with the trained male age prediction model. The fitted regression models suggest that female myelin content appears to be older than that of males.

Acknowledgments

This research was supported in part by the University of Pittsburgh Center for Research Computing. Specifically, this work used the GPU cluster supported by NIH award number R01-AG063525.

References

Cathy Meng Fei Li, Powell P. W. Chu, Peter Shih-Ping Hung, David Mikulis, and Mojgan Hodaie. Standardizing T1-w/T2-w ratio images in trigeminal neuralgia to estimate the degree of demyelination in vivo. NeuroImage: Clinical, 32:102798, 2021.

Qingyu Zhao, Ehsan Adeli, Nicolas Honnorat, Tuo Leng, and Kilian M. Pohl. Variational autoencoder for regression: Application to brain aging analysis. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Part II, pages 823–831. Springer, 2019.
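One way to implement the objective of Eq. (1) in PyTorch, rearranged into a minimization form (reconstruction term, KL divergence between the encoder posterior q(z|x, c) and the age-conditioned prior p(z|c), and the brain-age MSE), is sketched below. The module outputs (mu_q, logvar_q, mu_p, logvar_p) are hypothetical names, and a Gaussian decoder is assumed so that the reconstruction term reduces to a squared error.

    import torch
    import torch.nn.functional as F

    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, the reparameterization trick
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def cvae_loss(x_recon, x, mu_q, logvar_q, mu_p, logvar_p, age_pred, age):
        # -E_q[log p(x|z,c)] up to constants, for a Gaussian decoder
        recon = F.mse_loss(x_recon, x, reduction="sum")
        # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians
        kl = 0.5 * torch.sum(logvar_p - logvar_q
                             + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                             - 1.0)
        # L_MSE = sum_n (yhat_n - y_n)^2
        age_mse = F.mse_loss(age_pred, age, reduction="sum")
        return recon + kl + age_mse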
xBfUkTq17h7 | Medical Imaging with Deep Learning – 2023 Short Paper – MIDL 2023 submission

End-to-End Spermatozoid Detection in Cytology WSI for Forensic Pathology Workflow

Rutger H. J. Fick* rfick@tribun.health
Mélanie Lubrano* mlubrano@tribun.health
Damien Quignon† damien.quignon@gendarmerie.interieur.gouv.fr
Didier Empis† didier.empis@gendarmerie.interieur.gouv.fr
Fabrice Kabile† fabrice.kabile@gendarmerie.interieur.gouv.fr
Saima Ben Hadj* sbenhadj@tribun.health

* Tribun Health, Paris, France
† Institut de Recherche Criminelle, Gendarmerie Nationale, France

Editors: Accepted for publication at MIDL 2023
©2023 CC-BY 4.0, R. H. Fick, M. Lubrano, D. Quignon, D. Empis, F. Kabile & S. Ben Hadj

Abstract
This study aimed to improve the sensitivity and throughput of spermatozoid screening for identifying rape suspects through DNA profiling, based on microscope cytology Whole Slide Imaging (WSI). To this end, we implemented a WSI-based deep-learning algorithm consisting of a detector/classification ensemble, achieving a mean 3-fold cross-validation F1 score of 0.87 [0.87–0.88] on a dataset of 188 retrospective single-center cytology WSI. Applied to a test set annotated only with slide labels (positive, negative, and doubtful), we show that our ensemble model is capable of screening slide-label groups with excellent sensitivity, even finding missed spermatozoids in negative-labeled slides. We hope our approach will be of value for routine forensic spermatozoid screening.

Keywords: Spermatozoid screening, Whole Slide Images (WSI), Deep learning, Forensic laboratories, Object detection

1. Introduction

Spermatozoid detection via cytological slide examination is a standard procedure in forensic laboratories, aiming to identify sexual assault suspects through DNA analysis of sperm retrieved from victims. The conventional microscope-based evaluation of slides is labor-intensive and susceptible to false-negative labeling, particularly when only a single spermatozoid is present. In order to enhance screening sensitivity and efficiency, we have integrated a deep learning-driven digital method for detecting spermatozoids on Whole Slide Images (WSI) of cytology specimens.

Numerous guidelines outline the ideal methods for collecting potentially sperm-containing samples (Suttipasit, 2019), as well as various techniques to obtain forensic evidence for the presence of spermatozoids (Stefanidou et al., 2010). In the context of microscope-based spermatozoid detection, the scarcity of spermatozoids and the diverse origins of sample collection (e.g., vagina, anus, hair, clothing) lead to a wide range of debris present in WSI, complicating the accurate detection of spermatozoids (see, for example, Figure 1). Given that the primary objective of screening is to identify even a single spermatozoid within the WSI, our deep learning-based approach demonstrates its ability to locate rare and previously overlooked spermatozoids, thereby proving valuable for incorporation into forensic workflows.

Figure 1: Left: Example spot macro-images. Right: Example sperm objects showing debris.

2. Materials and Methods

In this section we describe the dataset creation in Section 2.1 and the training approach for our end-to-end spermatozoid detection algorithm in Section 2.2.

2.1. Data

Our dataset consists of 188 retrospective single-center cytology WSI – 50 for training and 138 for testing – containing HE-stained samples recovered from a representative variety of sources.
We used a two-step approach for annotating spermatozoids: first, a "weak" model trained on manual annotations detected spermatozoid candidates; then these candidates were validated through a 2+1 expert consensus, resulting in 6425 annotated spermatozoids and 12464 (hard) negatives. The test set is split between 45 positive, 42 doubtful, and 51 negative slide labels. Note that a doubtful classification just means that the reader suspects some objects in the WSI are spermatozoids but is not sure, and leaves the final decision of whether to perform DNA profiling to a secondary reader. The average spermatozoid content of negative, doubtful, and positive WSI is expected to be none, low and high, respectively. In Figure 1 we show an illustration of some representative cytology spots and annotated spermatozoids.

2.2. Model Training

We trained a detector/classification ensemble model for object detection, combining a Yolo-RD6 detector (Wang et al., 2022) and an EfficientNet-B7 classifier (Tan and Le, 2019). The detector model was trained by randomly sampling positive and negative examples 45% of the time and random locations within the foreground 10% of the time. To create a strong ensemble, the detector network was then used to run inference on all the training slides, and all false-positive detections were used as negatives for training the classifier model (Piansaddhayanaon et al., 2023).

3. Results

The ensemble model demonstrates a mean 3-fold cross-validation F1 score of 0.87 [0.87–0.88].

Training   Detector   Classifier   Ensemble
1          0.80       0.84         0.87
2          0.81       0.85         0.88
3          0.79       0.84         0.87

Figure 2: Left: F1 scores of the three cross-validation trainings (table above). Right: Test set evaluation, showing per slide-label group how many spermatozoids were detected, on a logarithmic scale.

When applied to the test set, which only contains slide labels, our model successfully identified previously undetected spermatozoids in 5 out of 51 negative slides, confirmed spermatozoid presence in 32 out of 42 doubtful slides, and verified the presence of spermatozoids in 44 out of 45 positive slides. Figure 2 illustrates these results, emphasizing that the detection of spermatozoids in negative slides typically pertains to rare, overlooked objects within an entire WSI. An expert cytologist subsequently confirmed these findings. In terms of practical time considerations, the digital workflow substantially reduced the processing time for a typical caseload of 50 cases, decreasing it from a week to approximately 2 hours of computation time, in addition to the verification of high-scoring objects. This reduction highlights the efficiency of our proposed method.

4. Discussion and Conclusion

Our deep learning-based, end-to-end workflow for object detection shows acceptable performance on the spermatozoid screening task. Our next step is to evaluate routine integration of our algorithm in the standard forensic pathology workflow.

References

Chawan Piansaddhayanaon, Sakun Santisukwongchote, Shanop Shuangshoti, Qingyi Tao, Sira Sriswasdi, and Ekapol Chuangsuwanich. ReCasNet: Improving consistency within the two-stage mitosis detection framework. Artificial Intelligence in Medicine, 135:102462, 2023.

M. Stefanidou, G. Alevisopoulos, and C. Spiliopoulou. Fundamental issues in forensic semen detection. West Indian Medical Journal, 59(3), 2010.

Papanu Suttipasit. Forensic spermatozoa detection. The American Journal of Forensic Medicine and Pathology, 40(4):304–311, 2019.

Mingxing Tan and Quoc Le.
EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pages 6105–6114. PMLR, 2019.

Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696, 2022.
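The hard-negative mining and ensemble scoring described in Section 2.2 can be sketched as follows; the IoU threshold and the score-averaging weight are illustrative choices rather than the paper's settings.

    import numpy as np

    def iou(a, b):
        # Intersection-over-union of two boxes given as (x0, y0, x1, y1)
        x0, y0 = max(a[0], b[0]), max(a[1], b[1])
        x1, y1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def mine_hard_negatives(detections, gt_boxes, iou_thr=0.3):
        # Detector proposals with no ground-truth overlap become
        # negatives for training the patch classifier
        return [box for box, _ in detections
                if all(iou(box, g) < iou_thr for g in gt_boxes)]

    def ensemble_score(det_score, cls_score, w=0.5):
        # Combine the detector confidence with the classifier's
        # rescoring of the corresponding crop
        return w * det_score + (1.0 - w) * cls_score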
hhCvgZQQbQ5 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Segmentation of seventy-one anatomical structures necessary for the evaluation of guideline-conform clinical target volumes in head and neck cancers

Alexandra Walter1,2,3 alexandra.walter@kit.edu
Goran Stanic1,2,4 goran.stanic@dkfz-heidelberg.de
Philipp Hoegen2,5,6,7 Philipp.Hoegen@med.uni-heidelberg.de
Sebastian Adeberg2,5,6,7,8,9 Sebastian.Adeberg@med.uni-heidelberg.de
Oliver Jäkel1,2,9 o.jaekel@dkfz-heidelberg.de
Martin Frank3 martin.frank@kit.edu
Kristina Giske1,2 k.giske@dkfz-heidelberg.de

1 Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, Germany
2 Heidelberg Institute of Radiation Oncology (HIRO) & National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
3 Karlsruhe Institute of Technology, Steinbuch Center for Computing, Hermann-von-Helmholtz-Platz 1, Eggenstein-Leopoldshafen, Germany
4 Department of Physics and Astronomy, University of Heidelberg, Heidelberg, Germany
5 Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
6 Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
7 National Center for Tumor Diseases (NCT), Heidelberg, Germany
8 German Cancer Consortium (DKTK), Partner Site Heidelberg, Heidelberg, Germany
9 Heidelberg Ion-Beam Therapy Center (HIT), Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany

Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, A. Walter, G. Stanic, P. Hoegen, S. Adeberg, O. Jäkel, M. Frank & K. Giske

Abstract
Expert guidelines standardize the delineation of the clinical target volume by giving neighboring anatomical structures as boundaries. In this research, we have automated the segmentation of the seventy-one anatomical structures most mentioned in the expert guidelines. For most structures there are no segmentation accuracies found in the literature; for others, the DICE improves in our research. The sDICE with a 2 mm tolerance shows clinical acceptance for most structures. Overall, we set benchmarks for several structures of the head and neck anatomy. With these segmentations, the guideline-conformance of clinical target volumes can be measured.

Keywords: Expert guidelines, automatic segmentation, anatomical structures, multi-label segmentation, target volume delineation, head and neck cancer.

1. Introduction

In radiation therapy the precise delineation of the target volume is crucial for tumor control. Especially in the head and neck area, in which anatomical structures closely neighbor each other, there is a fine line between tumor control and the sparing of healthy tissues. Consensus expert guidelines are accepted to standardize the segmentation of target volumes (Grégoire et al., 2018; Gregoire et al., 2014). The clinical target volume (CTV) comprises the tissues infiltrated by microscopic tumor cells, and in the expert guidelines its outline is described by bordering anatomical structures.

Manual delineations are time-consuming and show large inter- and intra-observer variability (van der Veen et al., 2019).
While deep learning methods show good results on the segmentation of organs at risk (OARs) (Nikolov et al., 2018), despite years of effort the progress of CTV segmentation is heavily affected by the inconsistent manual delineations (Cardenas et al., 2018; Strijbis et al., 2022).

Our previous work has shown that the rules and boundaries given by the expert guidelines are not learned by a neural network during supervised training. Our new approach to the automatic delineation of CTVs is the direct automated application of the rules. For that, all necessary anatomical structures need to be segmented; then the relevant boundaries can be extracted and combined based on the rules given by the expert guidelines to construct a guideline-conform CTV.

Earlier studies on the segmentation of anatomical structures have only published results on a small subset of the necessary structures. We have investigated the precision to which all necessary anatomical structures can be automatically segmented.

2. Methods

For the study, the seventy-one most important anatomical structures mentioned in the expert guidelines were manually delineated by six trained observers following a standard operating procedure. Delineations were made for 104 calibrated planning CT scans aggregated from four different study cohorts. All patients gave informed consent. The study was approved by the ethics committee XXX. Three different nnU-Net models were trained with default parameters on disjunct subsets of the structures (Isensee et al., 2021). Structures that were not present in the manual labels were neglected in the analysis.

3. Results

Table 1 shows the DICE and sDICE (Nikolov et al., 2018) with a 2 mm tolerance for the ten most mentioned structures in both guidelines. The DICE results are comparable to or better than the ones in the literature for the sternocleidomastoid muscle (0.73 ± 0.02 in Weber et al. (2021)), the pharyngeal constrictor muscles and the carotid arteries (0.71 ± 0.10 and 0.68 ± 0.11 in Van Dijk et al. (2020)), and the pharynx (69.3 ± 6.3 in Ibragimov and Xing (2017)). Only for the vertebra C1 are better results found (approximately 0.96 in Wasserthal et al. (2022)), which are not reproducible when their method is applied to our own data (0.83 ± 0.03). To our knowledge, for the DICE of all other structures, as well as the sDICE of all structures, there is no segmentation accuracy published. The DICE and sDICE are comparable to the inter-observer variability calculated for a subset of representative structures delineated by two of our trained observers.

Table 1: Comparison of predicted and manual labels of ten anatomical structures. Mean ± standard deviation of the DICE and sDICE with a 2 mm tolerance.

Structure                        DICE           sDICE
pharynx                          0.82 ± 0.08    0.89 ± 0.07
sternal manubrium                0.91 ± 0.06    0.93 ± 0.08
hyoid bone                       0.82 ± 0.07    0.95 ± 0.06
vertebra C1                      0.86 ± 0.04    0.92 ± 0.04
thyroid cartilage                0.84 ± 0.08    0.96 ± 0.04
cricoid cartilage                0.67 ± 0.16    0.80 ± 0.14
sternocleidomastoid muscle       0.81 ± 0.12    0.88 ± 0.12
scalenus muscle                  0.72 ± 0.10    0.84 ± 0.10
thyro-hyoid muscle               0.51 ± 0.19    0.84 ± 0.17
pharyngeal constrictor muscles   0.60 ± 0.14    0.81 ± 0.10
common carotid artery            0.79 ± 0.11    0.92 ± 0.08
internal carotid artery          0.57 ± 0.19    0.77 ± 0.19

The DICE and sDICE results for different groups of anatomical structures are given in Figure 1. Groups of structures with better contrast (i.e., air, bones) show better results
than the other groups. The largest variation in both coefficients is shown in the largest and most diverse group, the muscles, as well as in the smallest group, summarizing the results of only two cartilages. Since the DICE is by design more sensitive to deviations in small structures, the lowest DICE scores in each group are scored by narrow structures. Except for the right internal carotid artery, the sDICE of those structures does not show any saliencies. Because of inconsistent labeling, the tonsils were excluded from the analysis.

Figure 1: Comparison of predicted and manual labels between groups of anatomical structures with DICE and sDICE with 2 mm tolerance. Quantities per group were Air (6), Bones (11), Cartilages (2), Glands (3), Muscles (26), Vessels (11).

4. Discussion

The question of whether the segmentations are accurate enough to evaluate the guideline-conformance of a CTV is answered by the predominantly large values of the sDICE with 2 mm tolerance. In radiation therapy a deviation of 2 mm or less is considered acceptable for clinical use. One limitation is the inconsistent delineation of the tonsils and, thus, their exclusion from further use.

Overall, this research establishes the first-ever segmentation results for several structures of the head and neck anatomy, improves the segmentation results of other structures, and collects all the results in one paper. These results will be a benchmark for subsequent research on automatic segmentation in the head and neck area.

We can now automatically measure the degree of guideline-conformance of each single CTV delineation by analyzing the overlap between a CTV and the anatomical structures that should be excluded from or included in the CTV with respect to the expert guidelines. Identifying critical cases will support clinicians in delivering more consistent CTV delineations for radiotherapy planning.

Acknowledgments

We thank Susanne Labudek, Ishan Echampati, Dorothee Kahn, Ilsa Beig, Woojin Choi and Samira Hiller for manually segmenting the anatomical structures on the planning CT scans and discussing the neck node levels. Further, we thank the Helmholtz Information & Data Science School for Health for funding AW.

References

Carlos E. Cardenas, Brian M. Anderson, Michalis Aristophanous, Jinzhong Yang, Dong Joo Rhee, Rachel E. McCarroll, Abdallah S. R. Mohamed, Mona Kamal, Baher A. Elgohari, Hesham M. Elhalawani, et al. Auto-delineation of oropharyngeal clinical target volumes using 3D convolutional neural networks. Physics in Medicine & Biology, 63(21):215026, 2018.

Vincent Gregoire, Kian Ang, Wilfried Budach, Cai Grau, Marc Hamoir, Johannes A. Langendijk, Anne Lee, Quynh-Thu Le, Philippe Maingon, Chris Nutting, et al. Delineation of the neck node levels for head and neck tumors: a 2013 update. DAHANCA, EORTC, HKNPCSG, NCIC CTG, NCRI, RTOG, TROG consensus guidelines. Radiotherapy and Oncology, 110(1):172–181, 2014.

Vincent Grégoire, Mererid Evans, Quynh-Thu Le, Jean Bourhis, Volker Budach, Amy Chen, Abraham Eisbruch, Mei Feng, Jordi Giralt, Tejpal Gupta, et al. Delineation of the primary tumour clinical target volumes (CTV-P) in laryngeal, hypopharyngeal, oropharyngeal and oral cavity squamous cell carcinoma: AIRO, CACA, DAHANCA, EORTC, GEORCC, GORTEC, HKNPCSG, HNCIG, IAG-KHT, LPRHHT, NCIC CTG, NCRI, NRG Oncology, PHNS, SBRT, SOMERA, SRO, SSHNO, TROG consensus guidelines.
Radiotherapy and Oncology, 126(1):3–24, 2018.

Bulat Ibragimov and Lei Xing. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Medical Physics, 44(2):547–557, 2017.

Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen, and Klaus H. Maier-Hein. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021.

Stanislav Nikolov, Sam Blackwell, Alexei Zverovitch, Ruheena Mendes, Michelle Livne, Jeffrey De Fauw, Yojan Patel, Clemens Meyer, Harry Askham, Bernardino Romera-Paredes, et al. Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy. arXiv preprint arXiv:1809.04430, 2018.

Victor I. J. Strijbis, Max Dahele, Oliver J. Gurney-Champion, Gerrit J. Blom, Marije R. Vergeer, Berend J. Slotman, and Wilko F. A. R. Verbakel. Deep learning for automated elective lymph node level segmentation for head and neck cancer radiotherapy. Cancers, 14(22):5501, 2022.

Julie van der Veen, Akos Gulyban, and Sandra Nuyts. Interobserver variability in delineation of target volumes in head and neck cancer. Radiotherapy and Oncology, 137:9–15, 2019.

Lisanne V. Van Dijk, Lisa Van den Bosch, Paul Aljabar, Devis Peressutti, Stefan Both, Roel J. H. M. Steenbakkers, Johannes A. Langendijk, Mark J. Gooding, and Charlotte L. Brouwer. Improving automatic delineation for head and neck organs at risk by deep learning contouring. Radiotherapy and Oncology, 142:115–123, 2020.

Jakob Wasserthal, Manfred Meyer, Hanns-Christian Breit, Joshy Cyriac, Shan Yang, and Martin Segeroth. TotalSegmentator: robust segmentation of 104 anatomical structures in CT images. arXiv preprint arXiv:2208.05868, 2022.

Kenneth A. Weber, Rebecca Abbott, Vivie Bojilov, Andrew C. Smith, Marie Wasielewski, Trevor J. Hastie, Todd B. Parrish, Sean Mackey, and James M. Elliott. Multi-muscle deep learning segmentation to automate the quantification of muscle fat infiltration in cervical spine conditions. Scientific Reports, 11(1):16567, 2021.
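A sketch of how the DICE and the sDICE at a 2 mm tolerance can be computed from binary masks follows, assuming the open-source surface-distance package (github.com/deepmind/surface-distance) and known voxel spacing in millimetres; this is an illustration, not the authors' evaluation code.

    import numpy as np
    from surface_distance import (compute_surface_distances,
                                  compute_surface_dice_at_tolerance)

    def dice(mask_gt, mask_pred):
        # Classic volumetric Dice similarity coefficient
        inter = np.logical_and(mask_gt, mask_pred).sum()
        return 2.0 * inter / (mask_gt.sum() + mask_pred.sum())

    def sdice_2mm(mask_gt, mask_pred, spacing_mm=(1.0, 1.0, 1.0)):
        # Surface Dice: the fraction of both surfaces lying within 2 mm
        # of the other surface
        sd = compute_surface_distances(mask_gt.astype(bool),
                                       mask_pred.astype(bool), spacing_mm)
        return compute_surface_dice_at_tolerance(sd, tolerance_mm=2.0)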
npxsTyiJ37 | Medical Imaging with Deep Learning 2023

On the robustness of regressing tumor percentage as an explainable detector in histopathology whole-slide images

Marina D'Amato marina.damato@radboudumc.nl
Maschenka Balkenhol maschenka.balkenhol@radboudumc.nl
Mart van Rijthoven mart.vanrijthoven@radboudumc.nl
Jeroen van der Laak jeroen.vanderlaak@radboudumc.nl
Francesco Ciompi francesco.ciompi@radboudumc.nl

Department of Pathology, Radboudumc, Nijmegen, Netherlands

Abstract
In recent years, Multiple Instance Learning (MIL) approaches have gained popularity for the task of weakly-supervised tumor detection in whole-slide images (WSIs). However, standard MIL relies on classification methods for tumor detection that require negative controls, i.e., tumor-free cases, which are challenging to obtain in real-world clinical scenarios, especially when considering surgical resection specimens. Inspired by recent work, in this paper we tackle tumor detection via a MIL-like weakly-supervised regression approach to predict the percentage of tumor present in a WSI, a clinically available target that allows us to overcome the need for manual annotations or for tumor-free slides. We characterize the quality of such a target by investigating its robustness in the presence of noise on the regression percentages, and provide explainability through attention maps. We test our approach on breast cancer data from primary tumors and lymph node metastases.

Keywords: Weakly-supervised learning, Explainability, Tumor detection, Histopathology

1. Introduction

Tumor detection in histopathology is a critical task of cancer diagnosis that can be partly automated with computer algorithms in several tissue types. However, training supervised methods that rely on pixel- or patch-level annotations can be challenging due to the time-consuming nature of annotating histopathology images, which requires expertise from pathologists, especially when aiming at generic tumor detectors that can work across multiple types of tissue and cancers. As an alternative, weakly supervised methods via Multiple Instance Learning (MIL) have been proposed for binary classification problems (Ilse et al., 2018; Campanella et al., 2019), using slide-level labels instead of manual annotations. One of the challenges of using binary classification methods is the need for a large dataset with both positive (tumor present) and negative (tumor absent) cases. However, in real-world clinical diagnostics, finding WSIs without tumor can be challenging, as most resected tissue specimens typically contain some degree of tumor.

Inspired by recent work (Lerousseau et al., 2021), where tumor percentages were used as weakly supervised targets to train a segmentation model, we propose a weakly-supervised regression-based approach for estimating the percentage of tumor in a WSI. This approach allows us to formulate a tumor detection pipeline without being hampered by the lack of negative cases. This task is also particularly relevant for clinicians, as percentage estimation is performed by pathologists in cases where molecular pathology is conducted, and is therefore largely clinically available. However, these percentage estimations are often done visually and may be prone to noise. To ensure the robustness of our proposed framework, we conducted a target noise analysis to evaluate its performance under varying noise conditions.

©2023 CC-BY 4.0, M. D'Amato, M. Balkenhol, M. van Rijthoven, J. van der Laak & F. Ciompi
2. Methods

We based our approach on CLAM (Lu et al., 2021), which we extended to a regression setting while keeping its two main components: 1) patch embedding to 1024 features using a pretrained ResNet50, and 2) aggregation of these features through attention pooling.

Figure 1: Example of attention heatmaps for the different models.

Data. We used two datasets: the publicly available Camelyon16 (CAM16) dataset (Litjens et al., 2018), which includes 399 cases of lymph node images, and an internal dataset with 595 cases of triple-negative breast cancer (TNBC) surgical resections. We generated the tissue masks using a tissue-background segmentation algorithm (Bándi et al., 2019). From the non-background regions, we extracted non-overlapping 256x256 patches at a spatial resolution of 0.5 μm, which we embedded and passed to an attention model during training to assign attention scores. The models were trained using the mean squared error (MSE) loss with the Adam optimizer, an L2 weight decay of 1e-5 and a learning rate of 2e-4. We performed 5-fold cross-validation with a stratified train/validation/test split (65/15/20) based on the continuous target, while also rotating the test set to ensure coverage of the entire dataset in the evaluation process. To determine the tumor percentages used as regression targets, different strategies were employed for the two datasets: for CAM16, we used the existing manual tumor annotations, while for TNBC, segmentation maps generated with the HookNet algorithm (van Rijthoven et al., 2021) were utilized.

Regression model and "target amplification trick". The TNBC dataset, which only contains positive cases with tumor percentages ranging from 2% to 67%, showcases the potential of our approach to be trained without the requirement of tumor-free slides. In contrast, CAM16 includes both tumor and normal slides, with percentages ranging from 0% to 70%. However, many CAM16 slides have a small percentage of tumor, with 91 slides having a percentage below 1%. This narrow range of targets posed a challenge for our model to discriminate between tumor-free cases and slides with small lesions. To address this issue, we applied a target amplification "trick" by taking the fifth root of the tumor percentages (Figure 2), effectively boosting lower values and making it easier for the model to discern subtle differences within this narrow range.

Figure 2: Distribution of labels before and after the amplification trick.

We used two types of training approaches: (i) training and testing on each single dataset, to analyze the strengths and limitations of the models; (ii) cross-training on both datasets and testing on each individual one, to examine generalization to datasets with varying characteristics and distributional differences.

On the effect of noisy targets. We also assessed the models' robustness to noisy targets, mimicking the visual estimation error in the clinic, by training models after injecting noise sampled from a uniform distribution, which decreased or increased the tumor percentage by up to 10%, 30% and 50%, respectively. After each training with noisy targets, we evaluated the performance of the models on the test sets, which do not contain any noise.

Table 1: Results obtained on the TNBC dataset.

Training       Experiment         Pearson's r   MAE
TNBC           True percentages   0.97          0.023
TNBC           10% noise          0.94          0.033
TNBC           30% noise          0.89          0.047
TNBC           50% noise          0.76          0.075
TNBC + CAM16   True percentages   0.96          0.025
TNBC + CAM16   10% noise          0.94          0.034
TNBC + CAM16   30% noise          0.86          0.052
TNBC + CAM16   50% noise          0.73          0.079

Figure 3: Scatter plots comparing the true and predicted percentages. Left: single training.
Right: cross-training.

3. Results and conclusion

Figure 4: (a) CAM16 training – evaluation on all test sets. (b) CAM16 training – evaluation on the official test set. (c) TNBC+CAM16 training – evaluation on all test sets. (d) TNBC+CAM16 training – evaluation on the official test set.

Table 1 shows the results obtained on the TNBC dataset using the true percentages and the noisy targets for the two types of training. The scatter plots in Figure 3 visually show the relationship between predicted and actual tumor percentages for the two types of training. The ROC curves in Figure 4 demonstrate the performance of the model on the tumor detection task on CAM16, evaluated on all test sets from cross-validation and on the official test set separately. The predictions on the official test set were generated by different models, ensuring that the test set data were not part of the training. Additionally, we can visualize the attention scores to gain insights into the patches that were crucial for the final prediction. The attention maps in Figure 1 reveal that the amplification trick enhances attention and improves the visualization results, even when noisy training targets are used.

In conclusion, our proposed approach, using CLAM in a regression setting, has shown promising results in addressing the task of tumor detection in WSIs without the need for manual annotations or tumor-free slides. Despite the expected decrease in performance when adding noise, the impact did not severely compromise the overall performance of the models. This indicates that our approach is potentially robust enough to handle the noisy targets that may be encountered in real-world clinical scenarios, and that tumor percentages can be used as a target for future weakly-supervised tumor detection. Furthermore, the cross-training experiments on both datasets demonstrated that combining different tissue types did not substantially impact the performance, showcasing the versatility and adaptability of our approach across different tissue types. In the future, we aim to achieve robust and accurate tumor detection across diverse cancer types by expanding the approach beyond the breast cancer explored in this study.

4. Acknowledgements

This project has received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement No 945358. This Joint Undertaking receives support from the European Union's Horizon 2020 research and innovation program and EFPIA (www.imi.europe.eu).

References

Péter Bándi, Maschenka Balkenhol, Bram van Ginneken, Jeroen van der Laak, and Geert Litjens. Resolution-agnostic tissue segmentation in whole-slide histopathology images with convolutional neural networks. PeerJ, 7:e8242, December 2019.

Gabriele Campanella, Matthew G. Hanna, Luke Geneslaw, Allen Miraflor, Vitor Werneck Krauss Silva, Klaus J. Busam, Edi Brogi, Victor E. Reuter, David S. Klimstra, and Thomas J. Fuchs. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nature Medicine, 25:1301–1309, 2019. ISSN 1546-170X.

Maximilian Ilse, Jakub M. Tomczak, and Max Welling. Attention-based deep multiple instance learning. In
Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 2127–2136, 2018.

Marvin Lerousseau, Marion Classe, Enzo Battistella, Théo Estienne, Théophraste Henry, Amaury Leroy, Roger Sun, Maria Vakalopoulou, Jean-Yves Scoazec, Eric Deutsch, and Nikos Paragios. Weakly supervised pan-cancer segmentation tool. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, pages 248–256. Springer International Publishing, 2021.

Geert Litjens, Peter Bandi, Babak Ehteshami Bejnordi, Oscar Geessink, Maschenka Balkenhol, Peter Bult, Altuna Halilovic, Meyke Hermsen, Rob van de Loo, Rob Vogels, Quirine F. Manson, Nikolas Stathonikos, Alexi Baidoshvili, Paul van Diest, Carla Wauters, Marcory van Dijk, and Jeroen van der Laak. 1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset. GigaScience, 7(6), May 2018.

Ming Y. Lu, Drew F. K. Williamson, Tiffany Y. Chen, Richard J. Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering, 5(6):555–570, March 2021.

Mart van Rijthoven, Maschenka Balkenhol, Karina Silina, Jeroen van der Laak, and Francesco Ciompi. HookNet: Multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images. Medical Image Analysis, 68:101890, February 2021.
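The target amplification trick and the uniform target noise described in the Methods can both be written in a few lines. The noise is implemented here as a relative perturbation, one plausible reading of "decreased or increased the tumor percentage by 10%, 30% and 50%", and is clipped to the valid range.

    import numpy as np

    def amplify(p):
        # Target amplification trick: fifth root of the tumor fraction p in [0, 1]
        return np.power(np.asarray(p, dtype=float), 1.0 / 5.0)

    def inject_uniform_noise(p, level, seed=0):
        # Perturb each training target by a factor drawn from U(1 - level, 1 + level)
        rng = np.random.default_rng(seed)
        noise = rng.uniform(-level, level, size=np.shape(p))
        return np.clip(np.asarray(p, dtype=float) * (1.0 + noise), 0.0, 1.0)

    noisy_targets = inject_uniform_noise([0.02, 0.35, 0.67], level=0.30)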
e6B-OAcJfuD | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Characterizing Continual Learning Scenarios for Tumor Classification in Histopathology Images

Veena Kaustaban1 veena.kaustaban@roche.com
Qinle Ba1 qinle.ba@roche.com
Ipshita Bhattacharya1* ipshita.sb.bhattacharya@gmail.com
Nahil Sobh1 nahil.sobh@contractors.roche.com
Satarupa Mukherjee1 satarupa.mukherjee@roche.com
Jim Martin1 jim.martin@contractors.roche.com
Mohammad Saleh Miri1 saleh.miri@roche.com
Christoph Guetter1 christoph.guetter@roche.com
Amal Chaturvedi1 amal.chaturvedi@roche.com

1 Roche Sequencing Solutions, Santa Clara, CA
* Work done at Roche

Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, V. Kaustaban et al.

Abstract
Deep-learning models have achieved unprecedented performance in fundamental computational tasks in digital pathology (DP) based analysis, such as image classification, cell detection and tissue segmentation. However, such models are known to suffer from catastrophic forgetting when adapted to unseen data distributions with transfer learning. With an increasing need for deep-learning models to handle ever-changing data distributions, including evolving patient populations and new diagnostic assays, it is crucial to introduce methods for alleviating such model forgetting. To this end, continual learning (CL) models are promising candidates. However, to our best knowledge, there is no systematic study of CL models in DP-specific applications. Here, we propose various CL scenarios in DP settings, where histopathology image data from different sources/distributions arrive sequentially and their knowledge is integrated into a single model without training on all the data from scratch. To benchmark the performance of recently proposed continual learning algorithms in the proposed CL scenarios, we augmented a dataset for colorectal cancer H&E classification to simulate shifts of image appearance and evaluated CL methods on this dataset. Furthermore, we leveraged a breast cancer H&E dataset along with the colorectal cancer dataset to assess continual learning methods for learning from multiple tumor types. We revealed promising results of CL in DP applications, potentially paving the way for the application of these methods in clinical practice.

Keywords: Digital Pathology, Continual Learning, Tumor Tissue Classification

1. Introduction

A general strategy for learning from multiple datasets is to train on all the existing and newly arrived data from scratch, leading to an increasingly high demand for computing resources, time and data storage. On the other hand, updating a model with transfer learning is known to be ineffective due to catastrophic forgetting (Kirkpatrick et al., 2017). In contrast, continual learning algorithms (Kirkpatrick et al., 2017; Chaudhry et al., 2018; Lopez-Paz and Ranzato, 2017; Rebuffi et al., 2017) aim at efficiently and effectively adapting a model
A colorectal cancer (CRC) dataset (Kather et al., 2019) was used togenerate simulated data streams, which comprise whole-slide H&E images (136 patients)with tile-wise class labels for 9 tissue types. We randomly selected 8700 images (7000 train;2700 validation) from each class for training and 7150 images for testing. Dataset 2 -PatchCam. To characterize the effectiveness of CL methods on learning from multiple tis-sue types, PatchCam from (Bejnordi et al., 2017) was used for breast cancer classificationafter stain-normalization (Macenko et al., 2009), which has class labels of “normal” and “tu-mor” from 400 breast cancer H&E whole-slide images. Augmented CRC - SimulatingFigure 2: Online and offline CL performance at each experience against offline baselines.Domain Shifted Data Streams. We generated an augmented CRC dataset to simulatethe commonly observed variations of stain appearance from multiple data sources (Fig. 1Left). Specifically, we first performed stain unmixing of CRC images and then applied aug-mentation in the optical density space before remixing the stain-unmixed intensity images.In addition to the original images (Domain 1), we generated images simulating different2Continual Learning for Tumor Classification in Histopathology Imagesdye concentrations for H&E stain (Domain 2: increased stain intensity), faded stains dueto slide aging (Domain 3: decreased eosin intensity) and changes of reagent manufacturer,scanners or stainers (Domain 4: change in hue; Domain 5: change in hue and saturation).Continual Learning with Augmented CRC. : We identified four CL scenarios suitablefor DP applications based on how new streams of data differ from the previous ones. Specif-ically, data streams contain (1) an equal number of images from all domains (Data-IL), (2)an equal number of images from one domain (Domain-IL), (3) images from new classesfrom all domains (Class-IL), or (4) images from new classes from all domains for each ofthe multi-task heads (Task-IL). Please see (Kaustaban et al., 2022) for more details. Con-tinual Learning Algorithms. : Recently proposed CL methods can be largely classifiedinto three categories: replay, regularization and parameter isolation (Vokinger et al., 2021).•Replay methods select original images (Rebuffi et al., 2017), deep representations(Van de Ven and Tolias, 2019) or model-generated pseudo samples (Shin et al., 2017)via various heuristics, which are stored in memory and replayed at later learningstages to overcome forgetting. iCaRL stores a subset of most representative examplesper class in memory. A-GEM constraints model updates by projecting the estimatedgradient on the direction determined by randomly selected samples from a replaybuffer. CoPE enables rapidly evolving prototypes with balanced memory and a lossfunction that updates the representations or pseudo prototypes.•Regularization methods include a regularization term in the loss function to penalizemodel updates that could lead to large deviation from an existing model to avoidforgetting of learned knowledge. EWC/Online EWC includes a regularization termin the loss function to penalize large changes to network weights that are important forprevious tasks. LwF distills knowledge from a previous model to an updated modeltrained on new data with an additional distillation loss term for replayed data.•Parameter isolation methods assign different model parameters to each task head.Note that parameter isolation only applies to Task-IL.3. 
3. Results

To our surprise, continual learning was most effective for the challenging Class-IL scenario (Figure 1, right). Comparing offline CL methods, iCaRL had the best overall performance, even comparable to upper-bound joint training in the Class-IL scenario (Figure 1, right: top block; Figure 2). For online CL (Figure 1, right: lower blocks), which learned from hundreds of small data batches in a near-continuous stream, we observed that A-GEM outperformed the upper-bound baseline in the Task-IL scenario (Figure 2). However, online methods did not outperform offline ones. We further assessed sequential learning from multiple tumor types in Domain-IL for the CRC and breast cancer datasets and found that (a) adding more examples and (b) a "hard to easy" curriculum generated better results.

In summary, we found that continual learning for DP, despite being challenging, is feasible and promising. CL methods were computationally efficient, taking only about 28% of the runtime of joint training. Though patient data evolve quickly nowadays, the FDA has not approved algorithms based on CL (Vokinger et al., 2021), and extensive research is needed to establish related regulations. Our evaluation approaches and proposed method to generate domain-shifted datasets can potentially serve as a first step towards this goal.

References

Babak Ehteshami Bejnordi, Mitko Veta, Paul Johannes van Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M. van der Laak, Meyke Hermsen, Quirine F. Manson, Maschenka Balkenhol, et al. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA, 318(22):2199–2210, 2017.

Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. arXiv preprint arXiv:1812.00420, 2018.

Jakob Nikolas Kather, Johannes Krisam, Pornpimol Charoentong, Tom Luedde, Esther Herpel, Cleo-Aron Weis, Timo Gaiser, Alexander Marx, Nektarios A. Valous, Dyke Ferber, et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLoS Medicine, 16(1):e1002730, 2019.

Veena Kaustaban, Qinle Ba, Ipshita Bhattacharya, Nahil Sobh, Satarupa Mukherjee, Jim Martin, Mohammad Saleh Miri, Christoph Guetter, and Amal Chaturvedi. Characterizing continual learning scenarios for tumor classification in histopathology images. In Medical Optical Imaging and Virtual Microscopy Image Analysis: First International Workshop, MOVI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings, pages 177–187. Springer, 2022.

James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.

David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 30, 2017.

Marc Macenko, Marc Niethammer, James S. Marron, David Borland, John T. Woosley, Xiaojun Guan, Charles Schmitt, and Nancy E. Thomas. A method for normalizing histology slides for quantitative analysis. In 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pages 1107–1110. IEEE, 2009.

Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2001–2010, 2017.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Advances in Neural Information Processing Systems, 30, 2017.

Gido M. van de Ven and Andreas S. Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019.

Kerstin N. Vokinger, Stefan Feuerriegel, and Aaron S. Kesselheim. Continual learning in medical devices: FDA's action plan and beyond. The Lancet Digital Health, 3(6):e337–e338, 2021.
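The A-GEM update rule summarized in the replay bullet above reduces to a single gradient projection; a sketch over flattened gradient vectors, following Chaudhry et al. (2018), is:

    import torch

    def agem_project(grad, grad_ref):
        # grad: flattened gradient on the current batch; grad_ref: flattened
        # gradient on randomly selected replay samples. If they conflict
        # (negative dot product), project grad so the dot product becomes
        # zero: g_tilde = g - (g . g_ref / g_ref . g_ref) * g_ref
        dot = torch.dot(grad, grad_ref)
        if dot < 0:
            grad = grad - (dot / torch.dot(grad_ref, grad_ref)) * grad_ref
        return grad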
HlkroJOY-J | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission
Local and global feature aggregation for accurate epithelial cell classification using graph attention mechanisms in histopathology images
Ana Leni Frei1 ana.frei@unibe.ch
Amjad Khan1 amjad.khan@unibe.ch
Linda Studer1,2,3 linda.studer@unifr.ch
Philipp Zens1 philipp.zens@unibe.ch
Alessandro Lugli1 alessandro.lugli@unibe.ch
Andreas Fischer2,3 andreas.fischer@hefr.ch
Inti Zlobec1 inti.zlobec@unibe.ch
1 Institute of Tissue Medicine and Pathology, University of Bern, Switzerland
2 Document, Image and Video Analysis Research Group, University of Fribourg, Switzerland
3 iCoSyS, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, A.L. Frei, A. Khan, L. Studer, P. Zens, A. Lugli, A. Fischer & I. Zlobec.
Abstract
In digital pathology, cell-level tissue analyses are widely used to better understand tissue composition and structure. Publicly available datasets and models for cell detection and classification in colorectal cancer exist but lack the differentiation of normal and malignant epithelial cells that is important prior to any downstream cell-based analysis. This classification task is particularly difficult due to the high intra-class variability of neoplastic cells. To tackle this, we present here a new method that uses graph-based node classification to take advantage of both local cell features and global tissue architecture to perform accurate epithelial cell classification. The proposed method demonstrated excellent performance on F1 score (PanNuke: 1.0, TCGA: 0.98) and performed significantly better than conventional computer vision methods (PanNuke: 0.99, TCGA: 0.92).
Keywords: digital pathology, malignant epithelial cells, cell classification, cell-based graphs, node classification, graph attention, graph clustering
1. Introduction
In the field of digital pathology, recent advances in deep learning have led to the development of cell-level tissue analyses from hematoxylin and eosin (H&E) stained slides, opening the possibility to conduct accurate quantitative analyses on whole slide images (WSI) that might not be possible in a clinical scenario with a microscopic assessment of slides. In the era of personalized medicine, understanding the precise composition and the spatial histologic features of tissues is a key point to improve our understanding of tumor behavior (Baxi et al., 2022). Automated cell detection and classification on H&E WSI can be used to explore specific cell-cell interactions and global tissue structure and organization (Ahmedt-Aristizabal et al., 2022; Pati et al., 2022).
Although publicly available models and datasets for cell classification in colorectal cancer exist, these often lack the differentiation between normal and malignant epithelial cells, as they are usually grouped together as a single "epithelial" class (Graham et al., 2019, 2021). For downstream cell-based analyses, however, the distinction between these two epithelial categories is of great importance. This task is particularly challenging because of the high morphological heterogeneity of neoplastic epithelial cells. Nevertheless, when looking at the overall epithelial gland architecture, the differentiation between normal and malignant glands is easier to make.
Taking advantage of this, we propose a new method based on the aggregation of local cell morphology features together with surrounding gland structure information to learn accurate epithelial cell-level classification from H&E images.
2. Material and Methods
A subset of patches from the Lizard dataset (Graham et al., 2021) was selected that solely contain either normal or malignant epithelial cells. Additional patches (1000×1000 px) from scanned colorectal tissue slides from the Institute's cohort and from TCGA were retrieved and annotated for epithelial cells by experts. The TCGA and PanNuke (a subset of Lizard) patches were kept as test data while the remaining patches were used for training and validation. All patches were extracted at 20X magnification (0.5 μm/pixel). The dataset description can be found in Table 1.
First, epithelial cell tiles (128×128 px) were extracted around the cell centroids. ResNet18 (He et al., 2016) and ViT16 (Dosovitskiy et al., 2021) were trained for normal versus malignant cell classification as baselines. ResNet18 showed considerably better performance than ViT16 on TCGA and was thus selected as the node feature extractor when building the graphs (see Table 2).
Epithelial cell graphs were built on the training patches. Nodes were individual epithelial cells. Node features were extracted from the last hidden layer of the previously trained ResNet18 to describe the cell morphology. Edges were built using Delaunay triangulation (Delaunay, 1934) to connect the nodes (epithelial cells). The best performing Graph Neural Network (GNN) architecture was obtained by testing and optimizing the following parameters using a 5-fold cross-validation: the type of message passing (MP) function [GCN, GraphSage, GIN, GATv2] (Kipf and Welling, 2017; Hamilton et al., 2018; Xu et al., 2019; Brody et al., 2022), the number of MP layers [2, 3, 4], and the size of the MP layers [64, 128, 256, 512]. The edge length threshold was also optimized.
Following the graph-based classification, some single misclassified nodes remained (Figure 1, step 4). Thus, a final post-processing (pp) step was applied to smooth the graph predictions. As individual epithelial glands are expected to be composed of either normal or malignant epithelial cells, single glands were isolated into subgraphs using a short edge threshold of 30 px (15 μm). For every single epithelial gland, a median filter was applied to all cells in that gland to get the final cell class. Each step is illustrated in Figure 1; a sketch of the graph construction and post-processing is given after Table 2 below.
Figure 1: Epithelial cell classification pipeline on a normal epithelial tissue example from the TCGA test set. Red dots indicate epithelial cells classified as malignant and yellow dots cells classified as normal. Black lines indicate edges between connected cells when applying graph classification and graph clustering.
Table 1: Overview of the number of patches and cell types in the different subsets.
dataset                         epithelial type   #patches   #cells             average #cells/patch
train data (Lizard + Institute) normal            68         66,034             797
                                malignant         107        119,013            1093
test data (PanNuke + TCGA)      normal            14         9,301              664
                                malignant         17         12,335             726
                                both              5          normal: 3,450      690
                                                             malignant: 5,362   1,072
Table 2: Weighted F1 score.
Model      PanNuke   TCGA
ViT16      99.099    89.469
ResNet18   99.039    91.716
GAT        99.423    95.074
GAT + pp   100.0     97.847
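The following is a minimal sketch of the graph construction and gland-level post-processing described above, under stated assumptions: cell centroids, per-node class predictions, and the edge-length thresholds are taken as given, and the helper names are ours, not the authors' released code.

```python
# Sketch (assumed interfaces): Delaunay graph over cell centroids, pruned by
# edge length, plus gland-wise median smoothing of per-cell predictions.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def build_edges(centroids, edge_len_max):
    """Delaunay triangulation over cell centroids, keeping only short edges."""
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            if np.linalg.norm(centroids[a] - centroids[b]) <= edge_len_max:
                edges.add((a, b))
    return np.array(sorted(edges))

def smooth_by_gland(centroids, preds, gland_edge_max=30):  # 30 px = 15 um
    """Post-processing: isolate glands as connected components of a
    short-edge graph, then give every cell in a gland the gland's
    median (i.e., majority for binary labels) class."""
    edges = build_edges(centroids, gland_edge_max)
    n = len(centroids)
    adj = csr_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])),
                     shape=(n, n))
    _, gland_id = connected_components(adj, directed=False)
    smoothed = preds.copy()
    for g in np.unique(gland_id):
        mask = gland_id == g
        smoothed[mask] = int(np.median(preds[mask]))
    return smoothed
```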
3. Results
Normal cells were frequently misclassified as malignant in denser cell regions (such as epithelial crypt bases) by standard CNN methods. Using graph-based cell classification solved this issue and significantly improved the cell classification (p < 0.05), as can be seen in Table 2. The best-performing method was a GNN composed of 4 GATv2 layers of size 256. The final clustering and filtering further improved the cell classification.
4. Discussion and Conclusion
We show that graphs allow us to effectively capture the structural context around the cell of interest for accurate epithelial cell classification in colorectal H&E images. The proposed graph-based model is highly accurate and can easily be applied in addition to any other model detecting epithelial cells to further differentiate between normal and malignant cells.
Acknowledgments
This work was funded by the Swiss National Science Foundation (CRSII5 193832). Results presented here are based on data provided by the TCGA Research Network.
References
David Ahmedt-Aristizabal, Mohammad Ali Armin, Simon Denman, Clinton Fookes, and Lars Petersson. A survey on graph-based deep learning for computational histopathology. Computerized Medical Imaging and Graphics, 95:102027, 2022. ISSN 0895-6111.
Vipul Baxi, Robin Edwards, Michael Montalto, and Saurabh Saha. Digital pathology and artificial intelligence in translational medicine and clinical practice. Modern Pathology, 35(1):23–32, 2022.
Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks?, 2022.
B. Delaunay. Sur la sphère vide. À la mémoire de Georges Voronoï. Bulletin of the Academy of Sciences of the USSR, 6:793–800, 1934.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
Simon Graham, Quoc Dang Vu, Shan E Ahmed Raza, Ayesha Azam, Yee Wah Tsang, Jin Tae Kwak, and Nasir Rajpoot. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Medical Image Analysis, 58:101563, 2019.
Simon Graham, Mostafa Jahanifar, Ayesha Azam, Mohammed Nimir, Yee Wah Tsang, Katherine Dodd, Emily Hero, Harvir Sahota, Atisha Tank, Ksenija Benes, Noorul Wahab, Fayyaz Minhas, Shan E Ahmed Raza, Hesham El Daly, Kishore Gopalakrishnan, David Snead, and Nasir Rajpoot. Lizard: A large-scale dataset for colonic nuclear instance segmentation and classification. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 684–693, 2021.
William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.
Pushpak Pati, Guillaume Jaume, Antonio Foncubierta-Rodríguez, Florinda Feroce, Anna Maria Anniciello, Giosue Scognamiglio, Nadia Brancati, Maryse Fiche, Estelle Dubruc, Daniel Riccio, Maurizio Di Bonito, Giuseppe De Pietro, Gerardo Botti, Jean-Philippe Thiran, Maria Frucci, Orcun Goksel, and Maria Gabrani. Hierarchical graph representations in digital pathology. Medical Image Analysis, 75:102264, 2022. ISSN 1361-8415.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks?, 2019.
J06Ap1NYWE | Medical Imaging with Deep Learning 2023
Interactive Cell Detection in H&E-stained slides of Diffuse Gastric Cancer
Robin Lomans robin.ooijer@radboudumc.nl
Rachel van der Post chella.vanderpost@radboudumc.nl
Francesco Ciompi francesco.ciompi@radboudumc.nl
Department of Pathology, Radboudumc, Nijmegen, the Netherlands
Abstract
We present an interactive detection model to improve the cell annotation workflow of diffuse gastric cancer. The model relates image and user inputs and is trained to detect three types of cells in diffuse gastric cancer histology. We measure multi-class cell detection performance as per-class F1 score and show that it increases with the number of user input clicks. Moreover, we show that the proposed interactive annotation approach substantially reduces the number of user actions needed for complete image annotation, achieving a 17% reduction for the multi-class case. Future work will implement an iterative approach to filter out recurring false positives for further performance improvement.
Keywords: Interactive detection, signet ring cells, diffuse gastric cancer, interactive annotation.
1. Introduction
Diffuse gastric cancer (DGC) is a type of cancer characterized by diffusely infiltrating signet ring cell (SRC) carcinomas, which are difficult to detect due to their focality and diffuse infiltration between normal mucosal epithelial cells. To improve DGC diagnostics, we propose using deep learning (DL) to aid with the detection of relevant cell types in H&E-stained slides of gastric biopsies and resections.
Training fully-supervised DL methods requires exhaustive cell annotations, which is expensive and places a large burden on expert pathologists. To address this issue, we propose using interactive object detection, a technique that involves both humans and AI in the annotation process. Specifically, we build on the recent work of Lee et al. (2022), who proposed a generic interactive multi-class object detection method called C3Det. The resulting workflow consists of an annotator making a few initial annotations (clicks) in the image, followed by the C3Det model making object proposals based on these user input clicks. Subsequently, the annotator can confirm or deny the proposals and make additional annotations. This process can be repeated iteratively until annotation is completed.
To achieve this workflow, a generic convolutional neural network architecture is developed that takes as input an image and additionally a set of user inputs. In our work, we implemented this framework for the detection of relevant cell types in DGC. To this end, we trained C3Det on a dataset consisting of H&E whole-slide images (WSIs) of DGC patients, evaluated the resulting model, and showed that it substantially decreases the amount of user actions required to achieve exhaustive annotation.
©2023 CC-BY 4.0, R. Lomans, R. van der Post & F. Ciompi.
Figure 1: Example patches from the training set, with reference standard annotations in the color scheme src: red, src_s: green, pc: blue.
2. Methodology
Data. We collected a set of 108 H&E-stained WSIs of 9 different patients, collected between 2020 and 2022 at Radboud University Medical Center, the Netherlands. Slides were scanned at 0.25 micron per pixel spacing. The 9 patients are split across train (4), validation (3), and test (2) sets.
To develop the model we used a fully manual annotation approach where four annotators exhaustively annotated three cell types in the 108 WSIs: classical SRCs (src), small SRCs (src_s), and poorly differentiated cells (pc). Annotations consist of point annotations placed near the center of a cell. Training the C3Det model requires bounding box annotations, however. To convert the point annotations to bounding boxes, we developed and applied a conversion pipeline using NuClick (Alemi Koohbanani et al., 2020). Next, regions of interest were extracted from the WSIs, which were further divided into tiles of 1200×1200 pixels. The resulting dataset contains 629, 21, and 120 patches in the train, validation, and test sets, respectively. Three example patches are shown in Figure 1.
Model development and validation. The trained model uses a Faster-RCNN architecture with a ResNet-50 backbone. We trained for 60 epochs with a starting learning rate of 0.0001 and a linear decay schedule, decreasing the learning rate by a factor of 10 at epochs 15, 30, and 45. To validate the model, we performed inference on the held-out test set after simulating multiple values of user clicks (noc) by sampling annotations from the manual reference standard, and calculated the F1 scores for each value of noc. Furthermore, we investigated the advantage of using the model in an interactive annotation workflow by comparing the required user actions to reach full annotation in the manual and interactive approaches. The interactive approach consists of a single iteration of the interactive workflow: an annotator makes noc initial clicks, the model makes proposals based on these inputs, and the annotator removes the false positives and adds annotations for the false negatives. In this approach, we calculated the number of required user actions as the sum of initial clicks, false positives, and false negatives, and we report the difference between this metric and the number of ground truths for all noc. This difference represents the reduction in user actions to reach full annotation; a sketch of this computation follows below.
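A minimal sketch of the user-action reduction metric just described; the matching of proposals to reference annotations is assumed to happen elsewhere and is summarized here as counts, and the example numbers are purely illustrative, not taken from the paper.

```python
# Sketch of the user-action reduction metric described above.
def user_action_reduction(noc, n_false_pos, n_false_neg, n_ground_truth):
    """Relative reduction in user actions vs. fully manual annotation.

    Interactive actions = initial clicks + FP removals + FN additions.
    Fully manual annotation needs one action per ground-truth cell.
    """
    interactive_actions = noc + n_false_pos + n_false_neg
    return (n_ground_truth - interactive_actions) / n_ground_truth

# Illustrative example: 10 initial clicks, 20 false positives and 30 false
# negatives on an image with 100 annotated cells -> 40% fewer actions.
print(user_action_reduction(10, 20, 30, 100))  # 0.4
```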
Figure 2: (a): Maximum F1 score as a function of number of user input clicks. (b): Reduction of required user actions to achieve full annotation, comparing the interactive to the manual approach. Results are presented for up to 14 initial clicks, as a further increase in the number of clicks leads to an increase in required user actions. (c): Average number of false positives per image.
3. Results
We measured the performance of the model in terms of the F1 score for each class. The maximum F1 score is shown in Figure 2(a) as a function of the number of user input clicks. As expected, F1 scores increase with more user input. However, we observe a substantial difference in performance for poorly differentiated cell detection compared to the other classes, which can be attributed to the unbalanced dataset and the difficulty of detecting this class. The advantage of using our interactive annotation approach is demonstrated in Figure 2(b), where we see a substantial reduction in the number of required user actions to achieve full annotation. The maximum reduction is 17% for the multi-class case, 29% if we combine the src and src_s classes, and 41% if we discard the pc class. Interestingly, the largest reduction is achieved for a low number of initial user inputs. This is because the average number of false positives does not decrease as we increase the number of user inputs, as shown in Figure 2(c). This behavior may also limit the overall model performance as measured by the F1 score.
4. Conclusion
In this work we present a novel approach to improve the annotation workflow for the detection of signet ring cells and poorly differentiated cells in slides of patients with DGC. To our knowledge, this is the first model developed for the interactive detection of these cell types. The model achieves promising performance and is shown to be able to substantially reduce the effort required to annotate signet ring cell carcinomas.
The reduction in user actions shown in Figure 2(b) assumes a single iteration of model inference. In an ideal setting, however, multiple iterations of the model would run, with the annotator only removing false positives (FPs) after each iteration and feeding the new predictions as user inputs to the next iteration. Additionally, a filter could be implemented to remove recurring FPs after the annotator marks them as such. This approach could alleviate the issue of increasing FPs with an increasing number of user inputs illustrated in Figure 2(c), and result in a further reduction of annotation costs.
Acknowledgments
This research was supported by an unrestricted grant of Stichting Hanarth Fonds, The Netherlands. In addition, this project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825292 (ExaMode, http://www.examode.eu/).
References
Navid Alemi Koohbanani, Mostafa Jahanifar, Neda Zamani Tajadin, and Nasir Rajpoot. NuClick: A deep learning framework for interactive segmentation of microscopic images. Medical Image Analysis, 65(101771):101771, October 2020.
Chunggi Lee, Seonwook Park, Heon Song, Jeongun Ryu, Sanghoon Kim, Haejoon Kim, Sérgio Pereira, and Donggeun Yoo. Interactive multi-class tiny-object detection. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14116–14125, 2022. doi: 10.1109/CVPR52688.2022.01374.
JY4oJg6-gc | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission
CSGAN: a consistent structural GAN for AS-OCT image despeckling by image translation
Author name(s) withheld email(s) withheld
Address withheld
Editors: Under Review for MIDL 2023
Abstract
Anterior segment optical coherence tomography (AS-OCT) is a recent imaging technique for visualizing the physiological structure of the anterior segment. The speckle noise inherent in AS-OCT images degrades the visual quality and hampers subsequent medical analysis. Previous work was devoted to removing the speckles and acquiring satisfying images. According to clinical requirements, it might be desirable to maintain locally higher data fidelity instead of enforcing visually appealing but rather wrong image structural features. Catering to this expectation, we propose a Consistent Structural Generative Adversarial Network (CSGAN) to learn the clean style of low-speckle repeated AS-OCT images and simultaneously preserve the tiny but vital structural knowledge across the latent feature, spatial, and frequency domains. Specifically, we design a latent constraint in the generator to capture the inherent content in the feature domain and adopt perceptual similarities to directly preserve structural detail in the spatial dimension. Besides, we introduce a focal frequency scheme that adaptively represents and distinguishes hard frequencies to compensate for the spatial loss and refine the generated image to improve image quality. Finally, the experimental results demonstrate that CSGAN can achieve satisfactory despeckling results while preserving structural details on the AS-Casia dataset.
Keywords: AS-OCT, image despeckling, structural consistency, GAN.
1. Introduction
Anterior segment optical coherence tomography (AS-OCT), a non-invasive imaging technique, is widely utilized in ophthalmology to diagnose anterior segment disorders, such as glaucoma, cataract, and corneal diseases. However, AS-OCT images inevitably suffer from speckle noise, which obscures subtle morphological details and impacts clinical diagnosis. Therefore, commercial AS-OCT scanners generally average multiple B-scans captured repeatedly at the same location to suppress the speckle noise. This method has two main limitations: first, averaging multiple B-scans requires longer scanning times, making patients uncomfortable. Second, involuntary movements or sample motions can lead to motion artifacts or loss of structures in the repeated low-speckle images.
Although the averaged images easily lose structural information, their clean style is desirable. Therefore, the despeckling task can be cast as a translation problem. Specifically, the image-to-image translation scheme attempts to learn the structural knowledge and context of the speckled image and incorporate the style of averaged images into the restored images. Previous work based on generative models, including pix2pix (Isola et al., 2017), can achieve image translation while easily omitting tiny but vital structures. To meet the requirements of clinical practice, we incorporate several strategies into the style learning network to preserve structural details across the feature domain, spatial dimension, and frequency space in AS-OCT image despeckling.
©2023 CC-BY 4.0, Anonymous.
2. Methods
CSGAN architecture: Driven by the fact that conditional adversarial networks have already taken a significant step in image-to-image translation, CSGAN adopts a general generator (G) and discriminator (D) to achieve pixel-level representations. As shown in Figure 1, the G consists of an encoder (EC) and a decoder (DC), while the D directly utilizes the PatchGAN for the classification of image pairs as real or fake based on image patches rather than the whole image; it can be understood as a form of texture/style loss from speckled/repeated images. The EC adopts two downsampling operations consisting of Convolution-BatchNorm-ReLU layers and several residual blocks taking the form of Convolution-BatchNorm and Convolution-BatchNorm-ReLU. Then, we design a latent variable restriction as a regularization term to offset the impact of inaccurate encoding and assist the network in capturing the underlying semantic structural knowledge. Finally, we use fractionally-strided convolution layers to achieve the upsampling operation.
Figure 1: The framework of the proposed CSGAN. (G: generator; D: discriminator; C: concatenate; DFT: discrete Fourier transformation; EC: encoder; DC: decoder; $I_N$: noisy image; $I_R$: repeated image; $I_G$: generated image; $F_R$, $F_G$: repeated/generated image frequency; $z_N$, $z_R$: latent codes of the noisy/repeated images.) The loss terms annotated in the figure are:
$\mathcal{L}_{lpips}(I_R, I_G) = \sum_{i,j} \frac{1}{W_{i,j} H_{i,j}} \sum_{x,y} \left\| I_R^{i,j}(x,y) - I_G^{i,j}(x,y) \right\|_2^2$
$\mathcal{L}_{ffl}(F_R, F_G) = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \left| F_R(u,v) - F_G(u,v) \right|^2$
$\mathcal{L}_{lfc}(z_N, z_R) = \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \left| z_N(x,y) - z_R(x,y) \right|^2$
$\mathcal{L}_{adv} = \log D(I_R, I_N) + \log\left(1 - D(I_G, I_N)\right)$
Objective function: The generative network demonstrates powerful representation ability in low-level image processing tasks but often neglects tiny details that are vital for clinical practice. To balance speckle suppression and structural consistency, we first exploit the latent feature constraint $\mathcal{L}_{lfc}$ to capture structural semantics in the latent feature maps while allowing slight variation to improve the generalization ability of the network. Then, the perceptual loss $\mathcal{L}_{lpips}$ (Zhang et al., 2018) is explicitly adopted to learn perceptual image patch similarity, incorporate structural texture, and minimize the structural gap between the speckled and generated images in the spatial domain. Moreover, speckle noise and structural details, including edge features and texture grain, generally manifest as high-frequency components in the frequency domain. Therefore, restoring intact and precise useful frequency components while suppressing the speckled spectrum is challenging. To solve this bottleneck, we adopt a focal frequency scheme that forces the network to locate the hard frequencies and uses weighted spectrum values to dynamically strengthen the structural knowledge in the frequency domain. Specifically, a focal frequency loss $\mathcal{L}_{ffl}$ (Jiang et al., 2021) is adopted to adaptively focus the model on frequency components that are hard to deal with but can be pivotal for improving quality in the frequency space. The AS-OCT images are transformed into the frequency domain by 2D-DFT and represented using amplitude $u$ and phase $v$ such that each frequency value can be mapped to a Euclidean vector in a two-dimensional space. Finally, like a general GAN, the primary objective of CSGAN also contains an adversarial loss $\mathcal{L}_{adv}$ in the PatchGAN discriminator to distinguish style and structure. Therefore, CSGAN can capture the despeckling style and preserve the tiny but vital structural details by characterizing structural similarity across the spatial domain, latent feature maps, and frequency space.
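A minimal PyTorch sketch of the combined generator objective follows. The loss weights (lambdas), a generator that returns its latent code, a separate latent encoder for the repeated image, and a pretrained perceptual module `perc` (e.g., an LPIPS network) are all assumptions for illustration; the focal frequency term is shown in its unweighted form for brevity.

```python
# Sketch (assumed interfaces) of the CSGAN generator objective:
# adversarial + perceptual (spatial) + frequency + latent terms.
import torch
import torch.nn.functional as F

def generator_loss(G, D, encoder_latent, perc, I_N, I_R,
                   lam_adv=1.0, lam_lpips=1.0, lam_ffl=1.0, lam_lfc=1.0):
    I_G, z_N = G(I_N)                        # generated image and its latent code
    # Adversarial term: fool the PatchGAN conditioned on the noisy input.
    pred_fake = D(torch.cat([I_N, I_G], dim=1))
    l_adv = F.binary_cross_entropy_with_logits(pred_fake,
                                               torch.ones_like(pred_fake))
    # Spatial term: perceptual patch similarity to the repeated image.
    l_lpips = perc(I_G, I_R).mean()
    # Frequency term: distance between 2D-DFT spectra (unweighted FFL variant).
    F_G, F_R = torch.fft.fft2(I_G), torch.fft.fft2(I_R)
    l_ffl = (F_G - F_R).abs().pow(2).mean()
    # Latent term: keep noisy and repeated latent codes close.
    z_R = encoder_latent(I_R)
    l_lfc = F.mse_loss(z_N, z_R)
    return (lam_adv * l_adv + lam_lpips * l_lpips
            + lam_ffl * l_ffl + lam_lfc * l_lfc)
```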
3. Experimental results and analysis
We collected the AS-Casia dataset from the CASIA2 ophthalmology device. The dataset provides 184 noisy and 184 clean images obtained by averaging 16 repeated B-scans at the same position. All images are views of the anterior segment (AS) structure, including the ciliary muscle, iris, and anterior chamber angle. The visual comparison among the noisy images, repeated images, and despeckling results of the proposed CSGAN is shown in Figure 2. We can observe that the repeated images easily get trapped in edge artifacts at the border of the iris (see the orange arrow in the enlarged green region) and loss of detail (see the blue arrow) caused by involuntary movements during acquisition. At the same time, the despeckling result can capture the structural knowledge in the noisy images and learn the style of the repeated images owing to the structure-consistency design in CSGAN and the style understanding ability of the PatchGAN discriminator.
Figure 2: The visual comparison of despeckling results (noisy, repeated, and ours).
References
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nov 2017. doi: 10.1109/cvpr.2017.632. URL http://dx.doi.org/10.1109/cvpr.2017.632.
Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Focal frequency loss for image reconstruction and synthesis, Jan 2021.
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Dec 2018. doi: 10.1109/cvpr.2018.00068. URL http://dx.doi.org/10.1109/cvpr.2018.00068.
OMsh_JAmyfR | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023
Exploring shared memory architectures for end-to-end gigapixel deep learning
Lucas W. Remedios*1 lucas.w.remedios@vanderbilt.edu
1 Vanderbilt University, Nashville, TN, USA
Leon Y. Cai*1 leon.y.cai@vanderbilt.edu
Samuel W. Remedios2,3 samuel.remedios@jhu.edu
2 Johns Hopkins University, Baltimore, MD, USA
3 National Institutes of Health, Bethesda, MD, USA
Karthik Ramadass1 karthik.ramadass@vanderbilt.edu
Aravind Krishnan1 aravind.r.krishnan@vanderbilt.edu
Ruining Deng1 r.deng@vanderbilt.edu
Can Cui1 can.cui.1@vanderbilt.edu
Shunxing Bao1 shunxing.bao@vanderbilt.edu
Lori A. Coburn4,5 lori.coburn@vumc.org
4 Vanderbilt University Medical Center, Nashville, TN, USA
5 Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN, USA
Yuankai Huo1 yuankai.huo@vanderbilt.edu
Bennett A. Landman1 bennett.landman@vanderbilt.edu
Editors: Accepted for publication at MIDL 2023
Abstract
Deep learning has made great strides in medical imaging, enabled by hardware advances in GPUs. One major constraint for the development of new models has been the saturation of GPU memory resources during training. This is especially true in computational pathology, where images regularly contain more than 1 billion pixels. These pathological images are traditionally divided into small patches to enable deep learning due to hardware limitations. In this work, we explore whether the shared GPU/CPU memory architecture on the M1 Ultra systems-on-a-chip (SoCs) recently released by Apple, Inc. may provide a solution. These affordable systems (less than $5000) provide access to 128 GB of unified memory (Mac Studio with M1 Ultra SoC). As a proof of concept for gigapixel deep learning, we identified tissue from background on gigapixel areas from whole slide images (WSIs). The model was a modified U-Net (4492 parameters) leveraging large kernels and high stride. The M1 Ultra SoC was able to train the model directly on gigapixel images (16000×64000 pixels, 1.024 billion pixels) with a batch size of 1, using over 100 GB of unified memory for the process, at an average speed of 1 minute and 21 seconds per batch with TensorFlow 2/Keras. As expected, the model converged with a high Dice score of 0.989 ± 0.005. Training up until this point took 111 hours and 24 minutes over 4940 steps. Other high-RAM GPUs like the NVIDIA A100 (largest commercially accessible at 80 GB, ∼$15000) are not yet widely available (in preview for select regions on Amazon Web Services at $40.96/hour as a group of 8). This study is a promising step towards WSI-wise end-to-end deep learning with prevalent network architectures.
Keywords: Gigapixel, GPU, Large Patch, Computational Pathology, Segmentation
* Contributed equally
©2023 CC-BY 4.0, L.W. Remedios et al.
Figure 1: Usually, small patches (e.g. 256×256 pixels) are used in computational pathology. Enabled by the unified memory architecture, we instead use a gigapixel area (16000×64000 pixels) from preprocessed images, which is larger than a traditional single tissue field of view.
1. Introduction
The deep learning revolution has largely been possible due to GPU acceleration (Dean, 2020). When training on very large images, GPU RAM may not be sufficient for a batch size of 1 (Jain et al., 2020). In this case, model parallelism can be implemented.
However, model parallelism requires multiple available GPUs (Yadan et al., 2013) and introduces communication overhead (Keuper and Pfreundt, 2016).
In computational pathology, gigapixel (1 billion pixels) images are standard (Dimitriou et al., 2019). Instead of training on gigapixel images directly, popular approaches use small patches, for example 256×256 pixels (Chen et al., 2022). Unfortunately, small patches do not provide global contextual information, thus larger patches are still desired (Chen et al., 2022). Previous work has learned from whole slide images (without patching) by training model modules separately (not end-to-end) (Zhang et al., 2022). Recently, the use of multi-scale patches, including larger patches that provide more spatial context (4096×4096 pixels), has been shown to be effective for learning (Chen et al., 2022).
In this work, we perform end-to-end training on gigapixel images (1.024 billion pixels) without distributing the model or data across multiple GPUs. We use a batch size of 1 and a small convolutional neural network that leverages large kernels with high stride. The model is trained to detect tissue from background as a proof of concept. Our training scheme is enabled by a Mac Studio with an M1 Ultra SoC with 128 GB of unified RAM (shared CPU/GPU RAM). A visual depiction can be seen in Figure 1.
2. Methods
The data consists of 342 hematoxylin and eosin (H&E) whole slide images acquired under institutional review board (IRB) approval (Vanderbilt IRBs #191738 and #191777). Briefly, 256 images were used for training, 5 for validation, and 81 for testing. The images were converted to grayscale, color inverted, normalized 0 to 1, and cropped to 16000×64000 pixels. Labels were created by cropping, downsampling by 8, tiling into non-overlapping 1×1 tiles, thresholding tiles with mean intensity over 230 to 0, conversion to grayscale, thresholding non-zero pixels to 255, median blurring, erosion, dilation, hole filling, upsampling to the original resolution, re-thresholding values above 0 to 255, and binarizing.
We used a heavily modified U-Net (Ronneberger et al., 2015) with 7 convolutional layers and 4492 parameters. Skip connections used addition rather than concatenation. Early layers learned a downsampling with few large kernels and high stride (no pooling). All training and inference was performed on a Mac Studio with M1 Ultra SoC and 128 GB of unified memory. The model was trained with the Adam optimizer, a learning rate of 1e-3, a batch size of 1, and binary cross-entropy loss. The model weights corresponding to the lowest validation loss were selected for evaluation. A sketch of such an architecture is shown after Figure 2 below.
3. Results & Discussion
The average time per step (including validation) was 1 minute and 21 seconds. The Apple Activity Monitor application was used to get an accurate read on the unified memory usage for the process. The peak unified memory consumption from the first 8 steps of training was 103.61 GB. The model achieved a Dice score of 0.989 ± 0.005 on the testing set. Figure 2 shows a qualitative assessment of model performance.
Figure 2: Segmentations with a large amount of true positives led to high Dice scores. This proof of concept demonstrates learnability of gigapixel images using Apple silicon.
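As a rough illustration of the Methods above, the following TF2/Keras sketch builds a small U-Net-style network with large, heavily strided kernels and additive skips. The kernel sizes, strides, and channel counts are assumptions (the paper specifies only 7 convolutional layers, 4492 parameters, additive skips, and strided downsampling), so this sketch does not reproduce the exact parameter count.

```python
# Sketch (assumed layer sizes) of a tiny large-kernel, high-stride,
# additive-skip U-Net-style model in TensorFlow 2/Keras.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_gigapixel_unet(input_shape=(16000, 64000, 1)):
    inp = layers.Input(shape=input_shape)
    # Early layers learn the downsampling with few large, high-stride kernels.
    d1 = layers.Conv2D(4, 16, strides=8, padding="same", activation="relu")(inp)
    d2 = layers.Conv2D(8, 8, strides=4, padding="same", activation="relu")(d1)
    b = layers.Conv2D(8, 3, padding="same", activation="relu")(d2)
    # Decoder with additive (not concatenated) skip connections.
    u2 = layers.Conv2DTranspose(4, 8, strides=4, padding="same",
                                activation="relu")(layers.Add()([b, d2]))
    u1 = layers.Conv2DTranspose(4, 16, strides=8, padding="same",
                                activation="relu")(layers.Add()([u2, d1]))
    r = layers.Conv2D(4, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(r)  # tissue vs. background
    return Model(inp, out)

model = tiny_gigapixel_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy")  # trained with batch size 1
```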
In this proof of concept, we have shown that it is possible to perform end-to-end training directly on images of 1.024 billion pixels, with no patching, using $5000 Apple silicon (Mac Studio with M1 Ultra SoC and 128 GB of unified memory). The peak unified memory usage measured for the process was 103.61 GB. During development, this model was unable to run on an NVIDIA RTX A6000 (48 GB, ∼$4500). Other high-RAM GPUs, such as the NVIDIA A100 (80 GB, ∼$15000), are not yet widely available (in preview for select regions on Amazon at $40.96/hour for a group of 8). The shared GPU/CPU RAM architecture on the Apple M1 Ultra SoC opens the field for the design of memory-efficient models for gigapixel images at a reasonable price point.
Acknowledgments
This research was supported by The Leona M. and Harry B. Helmsley Charitable Trust grants G-1903-03793 and G-2103-05128, NSF CAREER 1452485, NSF 2040462, NSF GRFP Grant No. DGE-1746891, NCRR Grant UL1 RR024975-01 (now at NCATS Grant 2 UL1 TR000445-06), the NIDDK, VA grants I01BX004366 and I01CX002171, the VUMC Digestive Disease Research Center supported by NIH grants R01DK135597 and P30DK058404, NIH grant T32GM007347, an NVIDIA hardware grant, and resources of ACCRE at Vanderbilt University.
References
Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16144–16155, 2022.
Jeffrey Dean. 1.1 The deep learning revolution and its implications for computer architecture and chip design. In 2020 IEEE International Solid-State Circuits Conference (ISSCC), pages 8–14. IEEE, 2020.
Neofytos Dimitriou, Ognjen Arandjelović, and Peter D Caie. Deep learning for whole slide image analysis: an overview. Frontiers in Medicine, 6:264, 2019.
Arpan Jain, Ammar Ahmad Awan, Asmaa M Aljuhani, Jahanzeb Maqbool Hashmi, Quentin G Anthony, Hari Subramoni, Dhableswar K Panda, Raghu Machiraju, and Anil Parwani. GEMS: GPU-enabled memory-aware model-parallelism system for distributed DNN training. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15. IEEE, 2020.
Janis Keuper and Franz-Josef Pfreundt. Distributed training of deep neural networks: Theoretical and practical limits of parallel scalability. In 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), pages 19–26. IEEE, 2016.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
Omry Yadan, Keith Adams, Yaniv Taigman, and Marc'Aurelio Ranzato. Multi-GPU training of convnets. arXiv preprint arXiv:1312.5853, 2013.
Jingwei Zhang, Xin Zhang, Ke Ma, Rajarsi Gupta, Joel Saltz, Maria Vakalopoulou, and Dimitris Samaras. Gigapixel whole-slide images classification using locally supervised learning. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2022: 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part II, pages 192–201. Springer, 2022.
PuLDnZVTH_f | Medical Imaging with Deep Learning
Virtual staining overlay enabled combined morphological and spatial transcriptomic analysis of individual malignant B cells and local tumor microenvironments
Zihang Fang1 zfang@pictorlabs.ai
1 Pictor Labs Inc, Los Angeles, CA
Raymond Kozikowski1 raymond.kozikowski@pictorlabs.ai
Kevin de Haan1 kevin@pictorlabs.ai
Serge Alexanian1 s.alexanian@pictorlabs.ai
Michael E. Kallen2 MKallen@som.umaryland.edu
2 Department of Pathology, University of Maryland School of Medicine, Baltimore, MD
Alyssa Rosenbloom3 arosenbloom@nanostring.com
3 Nanostring® Technologies, Seattle, WA
Charlie Glaser3 cglaser@nanostring.com
Mark Conner3 Mconner@nanostring.com
Yan Liang3 yliang@nanostring.com
Kyla Teplitz3 kteplitz@nanostring.com
Joachim Schmid3 jschmid@nanostring.com
Jaemyeong Jung3 jjung@nanostring.com
Yair Rivenson1 rivenson@pictorlabs.ai
©CC-BY 4.0, Z. Fang et al.
Abstract
B-cell lymphomas are complex entities consisting of a component of malignant B-cells admixed in a local tumor microenvironment (TME) inflammatory milieu. Discrete characterization of both compartments can drive deeper understanding of pathophysiology, allowing more accurate diagnoses and prognostic predictions. However, limitations in both pathologist time and input tissue to generate multiple stains can greatly limit accurate identification of the minute, cellular-level regions of interest necessary to achieve the full potential of spatial biology. Here, we present a novel method to perform precise sampling of cells for transcriptomic analysis using virtual staining of autofluorescence images via deep learning algorithms. We validated the performance of the model by having board certified pathologists compare regions of interest (ROIs) identified on chemically stained images against virtually stained images. The results confirmed the usability and accuracy of the workflow and identified distinct transcriptomic profiles across a range of virtually identified ROIs, raising the possibility of our workflow's applicability across a broader range of pathologies and tissue types.
Keywords: Deep learning, Histology, Digital pathology.
1. Introduction
B-cell lymphomas are heterogeneous diseases with respect to gene expression and tumor microenvironment, with some entities containing a minority of malignant B-cells in a largely inflammatory background. These rare malignant cells have unique interactions with the local tumor microenvironments (TME) which may drive clinical behavior and treatment responsiveness. Accurate elucidation of such interactions requires sampling of both malignant cells and background TME for ancillary studies such as transcriptomic analysis, though existing/standard technologies cannot achieve high precision and complete spatial alignment with preservation of architectural and structural context.
Previously (Rivenson et al., 2019), it has been demonstrated that a fully supervised deep learning technique can be used to virtually stain autofluorescence images of unlabelled tissue into various stain combinations. Here we applied the same technique and generated virtually stained H&E images based on the autofluorescence images of the sample tissue slides.
Using this novel computational technique, board certified pathologists reviewed perfectly spatially aligned virtual stains to precisely identify and annotate specific populations at the cellular level; these areas were then seamlessly incorporated into NanoString's GeoMx® Digital Spatial Profiler (Zollinger et al., 2020) platform, allowing analysis at hitherto unrealizable levels of resolution. Analysis of the ROIs revealed distinct transcriptional profiles between areas enriched in Reed-Sternberg cells and the associated inflammatory milieu, demonstrating the viability and utility of virtual H&E staining technology as part of a spatial genetic workflow.
2. Methods
58 unstained sections cut at 4 microns were prepared from a total of 12 formalin-fixed paraffin-embedded (FFPE) tissue blocks. These unstained slides were first scanned into autofluorescence images using four fluorescent filter cubes (BGYR) on the GeoMx® DSP platform. 17 were then deparaffinized and chemically stained with H&E, whereas the other 41 underwent an additional mock GeoMx® CTA assay before chemical H&E staining. The brightfield whole slide images (WSIs) of all 58 chemically stained H&E slides were captured using a slide scanner microscope (AxioScan Z1, Zeiss) at 20x. For each tissue slide, a multi-stage registration was performed to match the brightfield (BF) WSI to its autofluorescence (AF) counterpart at the subpixel level, enabling a supervised training approach. In order to learn an accurate transformation from 4-channel autofluorescence images to their corresponding brightfield H&E stained images, a conditional GAN-based (Goodfellow et al., 2014) model with an L1 loss was utilized, with a U-Net architecture (Ronneberger et al., 2015) for the generator. The training framework is shown in Figure 1.
Figure 1: Training framework of the virtual stain neural network.
3. Results
Once the virtual H&E model was trained and validated, it was used to virtually stain autofluorescence images of unlabeled tissue from Classic Hodgkin Lymphoma (CHL) and normal/reactive lymphoid tissue not previously used for training. This virtual H&E was imported into the GeoMx® DSP platform and aligned such that any ROIs created by pathologists based on the virtual H&E would directly map to matched tissue coordinates. These selected regions were then targeted for spatially precise transcriptomic analysis. In this study, pathologists were able to identify and create ROIs for areas enriched in Reed-Sternberg cells, the malignant B-cells in CHL, and separate TME regions enriched for the inflammatory milieu on the virtual H&E. Additional analysis showed clear differences in transcriptional profiles between areas as a function of the ratio of Reed-Sternberg cells vs the inflammatory milieu in CHL (see Figure 2). The unstained tissue slides which were previously virtually stained were later histochemically stained with H&E for side-by-side comparison by board certified pathologists. This visual assessment further validated our virtual staining model as all selected ROIs from the virtual H&E were correctly identified and confirmed on the histochemical H&E slides.
Figure 2: Volcano plot resulting from the differential expression analysis.
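For concreteness, a minimal sketch of the conditional-GAN-with-L1 training objective described in Methods (pix2pix-style); the L1 weight and the function/module names are assumptions, not the authors' implementation.

```python
# Sketch of the generator objective for AF -> H&E virtual staining.
import torch
import torch.nn.functional as F

def virtual_stain_g_loss(G, D, af_image, he_image, lambda_l1=100.0):
    """Fool D on (autofluorescence, fake H&E) pairs while staying close
    to the registered real H&E in L1."""
    fake_he = G(af_image)                                 # U-Net generator
    pred_fake = D(torch.cat([af_image, fake_he], dim=1))  # conditioned on AF
    adv = F.binary_cross_entropy_with_logits(pred_fake,
                                             torch.ones_like(pred_fake))
    l1 = F.l1_loss(fake_he, he_image)
    return adv + lambda_l1 * l1
```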
4. Conclusion
The ability to produce and review virtually stained H&E slides embedded within a downstream spatial transcriptomics pipeline greatly improves the speed, accuracy, and functionality of this complex workflow. Compared with existing processes, in which downstream analytics such as transcriptomics are deployed without careful architectural and structural context, inserting a virtual H&E annotation step enhances the ability of users to precisely define areas of interest and avoid off-target analytics. This seamless hybrid workflow increases the relevance of output transcriptomics analysis, provides accurate, real-time segmentation of different constituents of a mixed malignant/TME lesion, and preserves additional tissue. Our analysis demonstrates that the virtually stained images are concordant with chemically stained H&E slides and can be used for QC, ROI identification, and other downstream analyses. As more virtual staining models are developed, these novel capabilities will further improve the ability to precisely segment the original scanned image, unlocking increasingly larger amounts of data from diminishing input tissues.
References
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf.
Yair Rivenson, Hongda Wang, Zhensong Wei, Kevin de Haan, Yibo Zhang, Yichen Wu, Harun Günaydın, Jonathan E. Zuckerman, Thomas Chong, Anthony E. Sisk, Lindsey Westbrook, William D. Wallace, and Aydogan Ozcan. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nature Biomedical Engineering, 3:466–477, 2019.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241, Cham, 2015. Springer International Publishing. ISBN 978-3-319-24574-4.
Daniel R. Zollinger, Stan E. Lingle, Kristina Sorg, Joseph M. Beechem, and Christopher R. Merritt. GeoMx™ RNA Assay: High Multiplex, Digital, Spatial Analysis of RNA in FFPE Tissue, pages 331–345. Springer US, New York, NY, 2020. ISBN 978-1-0716-0623-0. doi: 10.1007/978-1-0716-0623-0_21. URL https://doi.org/10.1007/978-1-0716-0623-0_21.
3ndjE9eawkr | Medical Imaging with Deep Learning 2023
Exploring the Optimal Operating MR Contrast for Brain Ventricle Parcellation
Savannah P. Hays1 shays6@jhu.edu
1 Department of Electrical and Computer Engineering, Johns Hopkins University, USA
Lianrui Zuo1,2 lrzuo@jhu.edu
2 Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, USA
Yuli Wang3 ywang687@jhmi.edu
3 Department of Biomedical Engineering, Johns Hopkins School of Medicine, USA
Mark G. Luciano4 markluciano@jhu.edu
4 Department of Neurosurgery, Johns Hopkins School of Medicine, USA
Aaron Carass1 aaron_carass@jhu.edu
Jerry L. Prince1 prince@jhu.edu
©2023 CC-BY 4.0, Savannah P. Hays et al.
Abstract
Recent developments in magnetic resonance (MR) harmonization have facilitated the synthesis of varying MR image contrasts while preserving the underlying anatomical structures. This enables an investigation into the impact of different T1-weighted (T1-w) MR image contrasts on the performance of deep learning-based algorithms, allowing the identification of optimal MR image contrasts for pretrained algorithms. In this study, we employ image harmonization to examine the influence of diverse T1-w MR image contrasts on the state-of-the-art ventricle parcellation algorithm, VParNet. Our results reveal the existence of an optimal operating contrast (OOC) for VParNet ventricle parcellation, identified by synthesizing T1-w MR images with a range of contrasts. The OOC for VParNet does not have the same MR image contrast as any of the training data. Experiments conducted on healthy subjects and post-surgical NPH patients demonstrate that adjusting the MR image contrast to the OOC significantly enhances the performance of a pretrained VParNet, thereby improving its clinical applicability.
Keywords: MR imaging, Harmonization, Ventricles
Figure 1: DSC-based heatmap showing the OOCs A and B of VParNet identified by a grid search of the CALAMITI contrast space. The gray area in the upper-right hand corner is a region of CALAMITI contrast space associated with T2-weighted MRIs and was not explored in our grid search of T1-weighted OOCs for VParNet.
1. Introduction
VParNet (Shao et al., 2018, 2019) is a deep learning-based ventricle segmentation algorithm demonstrating state-of-the-art performance on both healthy and normal pressure hydrocephalus (NPH) subjects. However, the development and evaluation of VParNet has only focused on a healthy (Marcus et al., 2007) and an NPH (Shao et al., 2019) cohort. This limits its application in clinics due to the inherent variability in magnetic resonance (MR) image contrast. As we show in Sec. 3, VParNet can fail on images acquired from cohorts beyond its training scope, highlighting the challenges of domain shift. In this paper, we propose a framework to improve the applicability and accuracy of VParNet using MR harmonization without retraining the parcellation model. Our framework is based on a recently proposed MR harmonization method (Zuo et al., 2021a,b), which synthesizes MR images with different contrasts while preserving the underlying anatomy. Previous work by Hays et al. (2022) has demonstrated the potential use of harmonization to evaluate the impact of T1-w MR image contrast on whole brain segmentation.
Following Hays et al. (2022), we first use image harmonization to quantitatively estimate the operating contrasts of VParNet. Furthermore, we demonstrate that there exists an optimal operating contrast (OOC) for VParNet. After adjusting the image contrast to the identified OOC, we show that the performance of a pretrained VParNet can be further boosted even on the same healthy cohort it was trained on. Experiments on NPH cohorts with diverse MR image contrasts demonstrate the generalizability of VParNet's performance when using the OOC.
2. Methods
Using CALAMITI (Zuo et al., 2021a,b), we are able to generate synthetic MR images with the same underlying anatomy, β, and an arbitrary contrast, θ. Thirty-five T1-w MR images of healthy subjects were first encoded into the CALAMITI contrast and anatomy space. These 35 images (which were not used in training VParNet) were acquired on a 1.5T Siemens scanner and manually delineated (including the ventricles) by experts from Neuromorphometrics Inc. (NMM) (Marcus et al., 2007). We observe that all of these images have MR image contrast within the broad range of CALAMITI's training data. In order to find the OOC for VParNet, we considered the MR image contrasts within the CALAMITI training data, represented by the blue rectangle shown in Figure 1. Dividing this rectangle into a 10×10 grid, we rejected 21 locations with non-T1-w MR image contrasts, leaving 79 candidates for the OOC. We then combined each candidate (target) contrast with the encoded anatomical representations (β-values) of eight NMM subjects to generate synthetic MR images. In total there are 632 synthetic images (8 subjects times 79 candidate contrasts) generated by CALAMITI. These synthetic images and the corresponding ventricle delineations were used to quantitatively evaluate the performance of VParNet on the 79 candidate contrasts. The average Dice similarity coefficient (DSC) of VParNet on the eight sample subjects of each candidate contrast is shown in Figure 1. We identified the OOCs of VParNet (labeled as A and B) by selecting the candidate contrast with the highest mean DSC (a sketch of this search is given below). Interestingly, the OOCs represent different MR image contrasts than the VParNet training data in CALAMITI contrast space.
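A minimal sketch of this grid search follows; `harmonize`, `vparnet`, and `dice` are hypothetical stand-ins for the CALAMITI decoder, the pretrained parcellation network, and the DSC computation, respectively.

```python
# Sketch of the OOC grid search over candidate contrasts (assumed interfaces).
import numpy as np

def find_ooc(anatomies, labels, candidate_thetas, harmonize, vparnet, dice):
    """Return the candidate contrast with the highest mean ventricle DSC."""
    mean_dsc = []
    for theta in candidate_thetas:                # 79 T1-w candidate contrasts
        scores = []
        for beta, gt in zip(anatomies, labels):   # 8 encoded NMM subjects
            synth = harmonize(beta, theta)        # (anatomy, contrast) -> image
            pred = vparnet(synth)                 # pretrained parcellation
            scores.append(dice(pred, gt))
        mean_dsc.append(np.mean(scores))
    return candidate_thetas[int(np.argmax(mean_dsc))]
```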
Figure 2: Example images of PS-NPH subjects. The original PS-NPH images have a surgical artifact that is mislabeled as part of the ventricle system by VParNet, while the images harmonized to an OOC have no such mislabeling.
3. Experiments and Results
We adjust the contrast of the remaining 35 subjects from the NMM dataset that were not included in VParNet training to the two selected OOCs. There was a significant (p < 0.01) improvement between VParNet performance on the original and harmonized images for the 4th ventricle, left lateral ventricle, right lateral ventricle, and whole ventricle. The VParNet DSC for the whole ventricle on the original MR images was 0.8859 ± 0.039. After harmonization, the DSC improved to 0.8932 ± 0.031 and 0.8947 ± 0.032 using OOCs A and B, respectively. This result demonstrates the ability of the OOC to enhance VParNet's performance even on the original training cohort. This boost motivated us to explore the impact of image harmonization on datasets beyond VParNet's training scope. Our post-surgery NPH (PS-NPH) dataset consists of six patients with corresponding manual delineations. These patients underwent brain surgery to remove excess CSF from their ventricles. PS-NPH MR images often contain artifacts hindering the performance of automatic parcellation methods. In many cases, these artifacts led to mislabeling during automated image processing. The T1-w MR images of the six patients were acquired at different clinical centers, distinct from the cohorts where VParNet was trained. We used image harmonization to adjust the contrast of the PS-NPH T1-w MR images to the previously identified OOCs (Targets A and B) and observed improved performance of VParNet. Figure 2 shows the results for two post-surgery NPH patients. VParNet's performance on the original images is considerably impacted by the diverse MR image contrast and surgery artifacts, causing the artifact to be mislabeled as part of the ventricular system. After adjusting the images to the OOCs, VParNet shows improved performance. The 95% HD improves from 39.22 to 7.59 for subject (a) in Figure 2. The improvement for subject (b) is more subtle.
4. Discussion and Conclusion
In this paper, we have demonstrated the improved performance of VParNet for both healthy and NPH patients by adjusting the input image contrast using harmonization, without retraining VParNet. We successfully identified two OOCs of a pretrained VParNet with a grid search in CALAMITI contrast space. After contrast adjustment, we demonstrated improved VParNet performance on data from the same cohort as VParNet training and on data from different cohorts. For the PS-NPH subjects, VParNet showed improved DSC and 95% HD after contrast adjustment; however, mislabeling due to the post-surgery artifacts persists. The OOCs A and B are specifically for the whole ventricle label and may differ if the evaluation focuses on a particular ventricle.
Acknowledgments
This work was supported in part by the NIH / NINDS under grant U01-NS122764 (PI: M.G. Luciano) and in part by the Intramural Research Program of the NIH, National Institute on Aging. Portions of the data used in this study were obtained retrospectively from human subject data made available by Neuromorphometrics Inc. Ethical approval was not required as confirmed by the license attached with this data. The remainder of the data was acquired in line with the principles of the Declaration of Helsinki. Approval was granted by an IRB Committee of the Johns Hopkins School of Medicine with approval ID IRB00305245 (approved January 13, 2022).
References
S. Hays, L. Zuo, A. Carass, and J. L. Prince. Evaluating the impact of MR image contrast on whole brain segmentation. In Proc. of SPIE Vol, volume 12032, pages 120320I–1, 2022.
D. S. Marcus, T. H. Wang, J. Parker, J. G. Csernansky, J. C. Morris, and R. L. Buckner. Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. Journal of Cognitive Neuroscience, 19(9):1498–1507, 2007.
M. Shao, A. Carass, X. Li, B. E. Dewey, A. M. Blitz, J. L. Prince, and L. M. Ellingsen. Multi-atlas segmentation of the hydrocephalus brain using an adaptive ventricle atlas. In Proceedings of SPIE Medical Imaging (SPIE-MI 2018), Houston, TX, February 10–15, 2018, volume 10578, pages 105780F–1–7, 2018.
M. Shao, S. Han, A. Carass, X. Li, A. M. Blitz, J. Shin, J. L. Prince, and L. M. Ellingsen. Brain ventricle parcellation using a deep neural network: Application to patients with ventriculomegaly. NeuroImage: Clinical, 23:101871, 2019.
L. Zuo, B. E. Dewey, A. Carass, Y. Liu, Y. He, P. A. Calabresi, and J. L. Prince. Information-based disentangled representation learning for unsupervised MR harmonization.
In International Conference on Information Processing in Medical Imaging, pages 346-359. Springer, 2021a.
L. Zuo, B. E. Dewey, Y. Liu, Y. He, S. Newsome, E. M. Mowry, S. M. Resnick, J. L. Prince, and A. Carass. Unsupervised MR harmonization by learning disentangled representations using information bottleneck theory. NeuroImage, 243:118569, 2021b.
cAVR6QftKDe | Medical Imaging with Deep Learning 2023

Distributed learning effectiveness in medical image analysis when trained with limited datasets

Raissa Souza1,2 raissa.souzadeandrad@ucalgary.ca
Pauline Mouches1,2 pauline.mouches@ucalgary.ca
Matthias Wilms1,2,3 matthias.wilms@ucalgary.ca
Anup Tuladhar1,2 anup.tuladhar@ucalgary.ca
Sönke Langner4 soenke.langner@med.uni-rostock.de
Nils D. Forkert1,2,3,5 nils.forkert@ucalgary.ca
1 Department of Radiology, University of Calgary, Calgary, Canada
2 Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
3 Alberta Children's Hospital Research Institute, University of Calgary, Calgary, Canada
4 Institute for Diagnostic Radiology and Neuroradiology, Rostock University Medical Center, Germany
5 Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, Canada

Abstract
Federated learning (FL) is a cutting-edge method for distributed learning used in many fields, including healthcare. However, medical centers need sufficient local data to train local models and participate in an FL network, which is often not feasible for rare and pediatric diseases or small hospitals with limited patient data. As a result, these centers cannot directly contribute to FL model development. To address this issue, this work explores the effectiveness of a different approach called the travelling model (TM). Specifically, this work evaluates the performances of FL and TM when only very small sample sizes are available at each center. Brain age prediction was used as an example case for comparison in this work. Our results indicate that the TM outperforms FL across all sample sizes tested, particularly when each center has only one sample.
Keywords: Federated Learning, Travelling Model, Distributed Learning.

1. Introduction
The main technique used today for implementing distributed machine learning is federated learning (FL) (McMahan et al., 2016), where multiple models are iteratively trained in parallel at each center and are periodically aggregated at a central server. However, FL's performance is known to depend on the size of the datasets available at each center, and FL often performs poorly when the dataset size is very small. In that case, sequentially training a single model at medical centers (known as the travelling model (TM) (Souza et al., 2022a)) may be more suitable, as the single model iteratively sees all datasets. This study aims to compare FL and TM approaches for cases where centers can only provide very small samples, such as rare diseases.
To test and systematically evaluate those approaches in a medical image analysis setup, we use brain age prediction based on magnetic resonance imaging (MRI) data as an example case. Although such data is widely and freely available, we chose this application scenario because the dataset used was acquired in a controlled way, which allows us to focus only on the effect of sample size, eliminating biases during model evaluation. Although brain age prediction is not a rare disease, we believe that the findings from this study will be applicable to other problems with limited data.

2. Material and Methods
2.1. Dataset
The data used in this study consists of morphological brain features extracted using FastSurfer (Henschel et al., 2020) from a subset of the Study of Health in Pomerania (SHIP) (Völzke et al., 2011), consisting of 2025 cross-sectional T1-weighted MRI brain scans of predominantly healthy adults aged between 21 and 82 years (mean: 50 ± 13 years). Training sets were divided into subsets to represent independent centers. These subsets varied in the number of samples per center, ranging from large to very small; specifically, 20, 10, 5, 2, and 1 samples per center were used. To prevent selection-biased conclusions from being drawn from a single data split, ten Monte-Carlo cross-validation iterations were performed.

2.2. Models and Evaluation
A multi-layer perceptron was used for brain age prediction based on the extracted FreeSurfer features, consisting of two hidden layers with 256 and 128 neurons using ReLU activations, respectively, and an output layer with one neuron using a linear activation.
To ensure a fair comparison, all models (central model, federated learning model, and travelling model) were initialized with the same random weights and trained using the Adam optimizer with an initial learning rate of 0.01 for a total of 200 epochs. After the first ten epochs, an exponential learning rate decay of -0.1 was applied for every subsequent epoch. For the FL model, evaluations were conducted using 20, 40, and 200 rounds, with each local model trained for ten, five, and one epoch(s), respectively, totaling 200 epochs in all cases. The TM was evaluated with the same number of rounds and epochs per round, and centers were visited in the same random order for all cycles to avoid any bias from varying travel orders.
The mean absolute error (MAE) was used for quantitative evaluation, as is standard in the brain age prediction literature (Nam et al., 2020). This was calculated by computing the absolute difference between the predicted brain age and the known chronological age, with lower MAE values indicating better prediction performance.

3. Results
The central learning model produced the best MAE of 5.99 years, which falls within the range of previously published models that used similar tabulated data (Nam et al., 2020). Figure 1 shows the average MAE across ten Monte-Carlo cross-validation iterations for the 18 distributed learning scenarios analyzed. These scenarios varied in the number of samples per site (1, 2, 5, 10, and 20) and the number of training rounds (20, 40, and 200).

Figure 1: Federated learning model (FL) and travelling model (TM) performance across different sample sizes and rounds. To ensure consistency, the models were trained by seeing the entire dataset 200 times, with the number of epochs per site being controlled. The plotted MAE values (FL/TM, in years, by samples per site) are:
A. 10 epochs per round, 20 rounds:  20: 6.42/6.11;  10: 6.63/6.67;  5: 7.00/6.61;  2: 8.55/6.64;  1: 18.89/6.78
B. 5 epochs per round, 40 rounds:   20: 7.02/5.92;  10: 7.30/5.87;  5: 7.57/5.96;  2: 8.88/5.87;  1: 18.92/6.06
C. 1 epoch per round, 200 rounds:   20: 17.97/5.96; 10: 18.08/5.96; 5: 18.23/5.98; 2: 18.47/5.86; 1: 19.13/5.81
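To make the two training regimes compared in Figure 1 concrete, here is a minimal sketch of one FedAvg round versus one travelling-model cycle for the MLP described in Section 2.2. The data loaders and the `train_epochs` helper are hypothetical stand-ins, and real FL deployments add communication and privacy machinery omitted here.

```python
import copy
import torch
import torch.nn as nn

def make_mlp(n_features: int) -> nn.Sequential:
    # Two hidden layers (256, 128) with ReLU and a linear output, as in the paper.
    return nn.Sequential(
        nn.Linear(n_features, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )

def train_epochs(model, loader, epochs, lr=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # MAE is the evaluation metric; L1 is a natural training loss
    for _ in range(epochs):
        for x, age in loader:
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), age).backward()
            opt.step()
    return model

def fedavg_round(global_model, center_loaders, local_epochs):
    """One FL round: train a copy at every center in parallel, then average weights."""
    states = []
    for loader in center_loaders:
        local = train_epochs(copy.deepcopy(global_model), loader, local_epochs)
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

def travelling_round(model, center_loaders, local_epochs):
    """One TM cycle: a single model visits the centers sequentially."""
    for loader in center_loaders:
        model = train_epochs(model, loader, local_epochs)
    return model
```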
Figure 1 demonstrates that the FL model's performance declined as the sample size decreased, regardless of the number of rounds. For example, for both 20 and 40 rounds, the experiment with a single subject per center resulted in the worst performance (MAE of approximately 18.9 years). In contrast, the performance of the TM was relatively unaffected by small sample sizes (MAE of 6.78 years for 20 rounds and MAE of 6.06 years for 40 rounds with a single subject per center). The travelling model consistently produced results comparable to the central learning model (MAE of 5.99 years) throughout the experiments.
The results also revealed that both distributed learning models' ability to learn meaningful relationships from the data was influenced by the number of epochs the model was trained per round. Figure 1A-C illustrates how the FL model's performance deteriorated as the number of epochs trained per round decreased: the model produced higher errors when trained for fewer epochs per round. In contrast, the TM's performance improved when the model was trained at each center for fewer epochs per round. Although the effect was less pronounced, it was statistically significant (two-tailed paired t-test, p < 0.009).

4. Conclusion
This work evaluated the effectiveness of travelling models compared to federated learning models for medical image analysis with very small sample sizes. Our findings indicate that travelling models perform better than federated learning models, regardless of the sample size. Additionally, it was found that the travelling model can achieve results similar to central learning even with only one epoch of training at each site. As a result, travelling models may be a more suitable option for distributed learning with limited local datasets, creating new opportunities for applying machine learning models in rare diseases, pediatric research, and small hospitals contributing to distributed learning setups.¹

1. This short paper discusses the essentials of the work published in (Souza et al., 2022b).

References
Leonie Henschel, Sailesh Conjeti, Santiago Estrada, Kersten Diers, Bruce Fischl, and Martin Reuter. FastSurfer - a fast and accurate deep learning based neuroimaging pipeline. NeuroImage, 219:117012, 2020. doi: 10.1016/j.neuroimage.2020.117012.
H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 2016. URL http://arxiv.org/abs/1602.05629.
Yoonho Nam, Jinhee Jang, Hea Yon Lee, Yangsean Choi, Na Young Shin, Kang-Hyun Ryu, Dong Hyun Kim, So-Lyung Jung, Kook Jin Ahn, and Bum Soo Kim. Estimating age-related changes in in vivo cerebral magnetic resonance angiography using convolutional neural network. Neurobiology of Aging, 87:125-131, 2020. doi: 10.1016/j.neurobiolaging.2019.12.008.
Raissa Souza, Agampreet Aulakh, Pauline Mouches, Anup Tuladhar, Matthias Wilms, Sönke Langner, and Nils D. Forkert. A comparative analysis of the impact of data distribution on distributed learning with a traveling model for brain age prediction. Volume 12037, page 1. SPIE, 2022a. doi: 10.1117/12.2612728.
URL https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12037/1203702/A-comparative-analysis-of-the-impact-of-data-distribution-on/10.1117/12.2612728.full.
Raissa Souza, Pauline Mouches, Matthias Wilms, Anup Tuladhar, Sönke Langner, and Nils D. Forkert. An analysis of the effects of limited training data in distributed learning scenarios for brain age prediction. Journal of the American Medical Informatics Association, 2022b. doi: 10.1093/JAMIA/OCAC204. URL https://academic.oup.com/jamia/advance-article/doi/10.1093/jamia/ocac204/6775041.
H. Völzke, D. Alte, C. O. Schmidt, D. Radke, R. Lorbeer, N. Friedrich, N. Aumann, K. Lau, M. Piontek, G. Born, C. Havemann, T. Ittermann, S. Schipf, R. Haring, S. E. Baumeister, H. Wallaschofski, M. Nauck, S. Frick, A. Arnold, M. Junger, J. Mayerle, M. Kraft, M. M. Lerch, M. Dorr, T. Reffelmann, K. Empen, S. B. Felix, A. Obst, B. Koch, S. Glaser, R. Ewert, I. Fietze, T. Penzel, M. Doren, W. Rathmann, J. Haerting, M. Hannemann, J. Ropcke, U. Schminke, C. Jurgens, F. Tost, R. Rettig, J. A. Kors, S. Ungerer, K. Hegenscheid, J.-P. Kuhn, J. Kuhn, N. Hosten, R. Puls, J. Henke, O. Gloger, A. Teumer, G. Homuth, U. Volker, C. Schwahn, B. Holtfreter, I. Polzer, T. Kohlmann, H. J. Grabe, D. Rosskopf, H. K. Kroemer, T. Kocher, R. Biffar, U. John, and W. Hoffmann. Cohort profile: The Study of Health in Pomerania. International Journal of Epidemiology, 40:294-307, 2011. doi: 10.1093/ije/dyp394. URL https://academic.oup.com/ije/article-lookup/doi/10.1093/ije/dyp394.
yCx-3_76pY | Medical Imaging with Deep Learning - Under Review 2023, Short Paper - MIDL 2023 submission

3D Body Composition Segmentation in Abdomen and Pelvis CT using Subdivided Labels and Random Patch

Minyoung Kim1,2 kmy8381@ewhain.net
1 Division of Mechanical and Biomedical Engineering, Ewha Womans University, Seoul, Korea
2 Graduate Program in Smart Factory, Ewha Womans University, Seoul, Korea
Ji-Won Kwon3 kwonjjanng@yuhs.ac
3 Department of Orthopedic Surgery, Yonsei University College of Medicine, Seoul, Korea
Kwang Suk Lee4 CALMENOW@yuhs.ac
4 Department of Urology, Yonsei University College of Medicine, Seoul, Korea
Taehoon Shin1,2 taehoons@ewha.ac.kr

Editors: Under Review for MIDL 2023

Abstract
The distribution and volume of fat and muscle in APCT play an important role as a biomarker. In this study, APCT data from 200 individuals who underwent health screening were labeled into three classes of fat and four classes of muscle. Based on this labeling, 3D patch-wise segmentation was performed by Swin UNETR on whole abdomen and pelvis scan images. The test results showed an overall class average of 0.9227 DSC. This study conducted 3D whole-abdomen body composition segmentation using a total of eight body composition labels including the background, and verified its feasibility using random patches effective for the data and task.
Keywords: random patch, 3D, body composition, segmentation, APCT, Swin UNETR

1. Introduction
Information on body composition distribution is useful for diagnosing various diseases. Particularly, the analysis of the proportion and characteristics of fat and muscle is important for the diagnosis of diabetes and sarcopenia. Since manually assessing the distribution of body composition requires a significant amount of labor, numerous deep learning methods have been developed for automated segmentation. Previous body tissue segmentation models used 2D images or small-slab 3D volumes of abdominal-pelvic computed tomography (APCT) to reduce the computational load and labeling burden (Koitka et al., 2021; Weston et al., 2019). In this study, we aimed to develop a 3D random patch-wise deep learning model for segmenting the entire 3D APCT into eight detailed types of fat and muscle tissues.

2. Materials and Methods
200 patients who visited the Gangnam Severance Hospital health promotion center between March 2021 and April 2021 and underwent APCT were selected. Segmentation classes were divided in detail into subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), external adipose tissue located outside the abdominal cavity (EAT), psoas muscles (PM), erector spinae muscles (ESM), multifidus muscles (MFM), the remaining skeletal muscle except for the above muscle classes (SM), and background, which belongs to none of the aforementioned classes. For annotation, the ITK-SNAP software (Yushkevich et al., 2006) was used, and two orthopaedic and urologic surgeons with a minimum of 10 years of experience in the field participated in the labeling process. 180 and 20 subjects were used for training and testing, respectively. Data pre-processing included resampling to an isotropic resolution of 1.5 mm, background cropping, intensity normalization, and clamping between -300 and 500 Hounsfield units (HU).
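A minimal sketch of this pipeline using MONAI's reference SwinUNETR implementation follows (the model itself is described in the next section). The dataset keys, learning-rate value, and pos/neg patch-sampling ratio are illustrative assumptions; the spacing, HU clamping, 96^3 patches, AdamW optimizer, step scheduler, Dice + cross-entropy loss, and grid/sliding-window inference follow the text.

```python
import torch
from monai import transforms as T
from monai.networks.nets import SwinUNETR
from monai.losses import DiceCELoss
from monai.inferers import sliding_window_inference

# Pre-processing and random 96^3 patch sampling with MONAI dict transforms.
train_transforms = T.Compose([
    T.LoadImaged(keys=["image", "label"]),
    T.EnsureChannelFirstd(keys=["image", "label"]),
    T.Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5),
               mode=("bilinear", "nearest")),                 # isotropic 1.5 mm resampling
    T.CropForegroundd(keys=["image", "label"], source_key="image"),  # background cropping
    T.ScaleIntensityRanged(keys=["image"], a_min=-300, a_max=500,
                           b_min=0.0, b_max=1.0, clip=True),  # clamp to [-300, 500] HU
    T.RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                             spatial_size=(96, 96, 96), pos=1, neg=1, num_samples=4),
])

# Eight output channels: BG, SAT, VAT, EAT, PM, ESM, MFM, SM.
model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=8)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)   # hybrid Dice + cross-entropy
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=400, gamma=0.5)

def train_step(batch):
    optimizer.zero_grad()
    loss = loss_fn(model(batch["image"]), batch["label"])
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def segment_volume(volume):
    """Grid-sampled patch inference stitched back into one whole-trunk labelmap."""
    logits = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                      sw_batch_size=4, predictor=model)
    return logits.argmax(dim=1, keepdim=True)
```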
Figure 1: Overall Swin UNETR structure. Patch merging is by a factor of 2, and the numbers of heads are 3, 6, 12, 24.

Swin UNEt TRansformers (Swin UNETR) (Hatamizadeh et al., 2022) was used for 3D segmentation of the eight types of body tissues (including the background) (Fig. 1). Swin UNETR is a U-shaped model that combines an encoder with a Shifted window (Swin) Transformer architecture (Liu et al., 2021) and a decoder based on convolutional layers with skip connections. The Swin Transformer operates on self-attention within shifted windows of hierarchically varying resolutions, which decreases computational complexity and improves generalization performance. By employing a Swin Transformer in the encoder, Swin UNETR can take advantage of hierarchical information during the upsampling process in the decoder. At each epoch of training, the model received randomly sampled 96x96x96 patches from the 3D APCT images. This reduced the computational load and simplified the complexity of the whole abdominal-pelvic image into patch units, allowing the model to easily learn the features of each patch. Primary training parameters were a batch size of 4, the AdamW optimizer, and a step scheduler with an initial learning rate of 0.001, step size of 400, and gamma of 0.5. Since the number of voxels substantially differs over the eight segmentation classes, we applied a hybrid loss that combines Dice loss and cross-entropy loss to address class imbalance (Taghanaki et al., 2019). For inference, we input grid-sampled patches from the test data into the model and combined the outputs to create a single segmentation for the entire trunk image. Accuracy was evaluated based on the Dice Similarity Coefficient (DSC).

3. Experiments and Results
Table 1 summarizes the DSC values obtained on the test dataset. On average across the seven classes excluding the background, the patch size of 96 achieved the best performance, with an average DSC of 0.9227.

Table 1: Dice coefficients on the test set (P = patch size).

Class   BG      SAT     SM      PM      VAT     EAT     ESM     MFM     Avg
P=128   0.9874  0.9637  0.9323  0.9007  0.8685  0.7834  0.8788  0.8759  0.9014
P=96    0.9879  0.9635  0.9347  0.8988  0.8667  0.7815  0.8746  0.8689  0.9227
P=64    0.9868  0.9598  0.9339  0.8998  0.8399  0.7763  0.8691  0.8591  0.9180

Figure 2 shows a representative set of raw APCT images, ground truth segmentation images, and segmentation images predicted by the model for the axial and sagittal views. The VAT-EAT and MFM-ESM classes appear to be challenging due to adjacent tissues with the same HU values. The prediction results also show that the model has smoothed out uneven edges and discontinuities in the ground truth segments made during the labeling process.

Figure 2: Axial and sagittal view images of the test subjects. SAT is indicated by red, VAT by yellow, EAT by light blue, SM by green, PM by blue, ESM by purple, and MFM by beige.

4. Conclusion
We developed a deep learning model for automated segmentation of the 3D whole abdomen into eight detailed fat and muscle regions. The proposed method used a Swin UNETR as a backbone with random patch-wise data augmentation. Test results showed an average DSC of 0.9227 over all eight classes, demonstrating the feasibility of the proposed approach.

Acknowledgement
This research was supported by a Digital Healthcare Research Grant through the Seokchun Caritas Foundation (No. SCY2208P), and by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant (No. RS-2022-00155966).

References
Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger R. Roth, and Daguang Xu.
Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 7th International Workshop, BrainLes 2021, Held in Conjunction with MICCAI 2021, Virtual Event, September 27, 2021, Revised Selected Papers, Part I, pages 272-284. Springer, 2022.
Sven Koitka, Lennard Kroll, Eugen Malamutmann, Arzu Oezcelik, and Felix Nensa. Fully automated body composition analysis in routine CT imaging using 3D semantic segmentation convolutional neural networks. European Radiology, 31:1795-1804, 2021.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021.
Saeid Asgari Taghanaki, Yefeng Zheng, S. Kevin Zhou, Bogdan Georgescu, Puneet Sharma, Daguang Xu, Dorin Comaniciu, and Ghassan Hamarneh. Combo loss: Handling input and output imbalance in multi-organ segmentation. Computerized Medical Imaging and Graphics, 75:24-33, 2019.
Alexander D. Weston, Panagiotis Korfiatis, Timothy L. Kline, Kenneth A. Philbrick, Petro Kostandy, Tomas Sakinis, Motokazu Sugimoto, Naoki Takahashi, and Bradley J. Erickson. Automated abdominal segmentation of CT scans for body composition analysis using deep learning. Radiology, 290(3):669-679, 2019.
Paul A. Yushkevich, Joseph Piven, Heather Cody Hazlett, Rachel Gimpel Smith, Sean Ho, James C. Gee, and Guido Gerig. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. NeuroImage, 31(3):1116-1128, 2006.
PUs7MSra82U | Medical Imaging with Deep Learning - Under Review 2023, Short Paper - MIDL 2023 submission

Applying spatial attention-based autoencoder learning of latent representation for unsupervised characterization of tumor microenvironment

Editors: Under Review for MIDL 2023

Abstract
Spatial tissue imaging technologies enable highly resolved spatial characterization of cellular phenotypes; however, understanding tissue organization today depends on laborious manual annotation and molecular labels. As a result, we are not optimally leveraging higher-order patterns of cell organization potentially connected to disease pathology. Our approach combines information on cellular phenotypes with the physical proximity of cells to accurately identify organ-specific microanatomical structures in the tumor microenvironment.
Keywords: autoencoders, spatial attention, computational pathology

1. Introduction
Spatial biology applications are advancing the study of tissue organization and cell-to-cell communication at an unprecedented scale. New tools are required to store, analyze, and visualize the large diversity of produced data (Barmpoutis et al., 2021; Ahmedt-Aristizabal et al., 2022). Autoencoders (AEs) have been used to visualize or generate interpretable embeddings from biological single-cell data; an example is scvis (Ding et al., 2018; Becht et al., 2019). More recently, new autoencoder models, in particular graph-based AEs, use spatial information along with phenotypical information (Hu et al., 2021; Dong and Zhang, 2022). Attention neural networks are also becoming popular, and new autoencoder models using attention mechanisms have shown promising results on biological data (Dong and Zhang, 2022; Fu et al., 2021). Imaging mass cytometry (IMC) is a novel imaging technology with more than 40 markers per pixel, whose high-content nature poses computational complexity challenges. Furthermore, due to the measurement protocol, the data can be noisy, prone to artifacts, and subject to biases that are difficult to handle with clustering approaches. To address these challenges, we propose a new approach that leverages spatial attention AEs for IMC tissue structure phenotyping.

2. Materials and methods
Data: Patients included in the study are advanced-stage and metastatic, treated with different immuno-oncology therapies (Camps et al., 2023). IMC assays were applied to selected ROIs with a 40-marker panel (DAPI + 39 molecular probes) to characterize the TME. ROIs (3-4 per patient) were selected by an expert histologist to represent, at high resolution (1 micrometer), major tumor anatomical regions (Figure 1A). Data split: To fully leverage all available data (Table 1), we used dataset I (2 patients, 8 ROIs) for training and datasets II (39 patients, 128 ROIs) and III (139 patients, 556 ROIs) for independent testing.
Design of autoencoder experiments: (Classic AE) The model considered as a baseline is a regular fully connected three-layer autoencoder with a Wasserstein regularization (a loss term using the Wasserstein distance on the embeddings) (Kolouri et al., 2018). (Classic spatially aware AE) For each cell, a value representing the distance to specified cell populations (annotated histological ROIs) is constructed. An outlier detection algorithm (the Isolation Forest algorithm) is used to encode, with min-max scaling, 1 (the cell is close to a histological cluster) or 0 (it is not part of a histological ROI) (Figure 1B). (Classic spatially aware AE with attention mechanism)
Here we introduce the attention mechanism by assigning to a sample a linear combination of specified inputs. We emphasize attention to histological ROIs (i.e., cell populations of interest such as tumor and stroma) and relative distances (Figure 1C-D). Type of connections: For fully connected AEs, all layers are fully connected and input features are equally weighted. For sparse connections, we introduce sparsity by enriching the model with more biological sense, thus "regrouping" features that have biological relationships (Alessandri et al., 2020). These relationships are based on expert cell annotations. In practice, this is done by enforcing sparse connections in the first layer of the encoder: hidden nodes of the first layer only receive inputs from the input nodes associated with them (Figure 1B). Hyperparameter tuning and clustering: Model hyperparameters (i.e., number of layers, dimension of the hidden layers, dimension of the latent space, batch size, Wasserstein regularization weights, learning rate, activation functions) were optimized using a grid-search paradigm (only for the spatial attention AE). A trained model for each combination (model, connection type) was used to compute the adjusted mutual information (AMI) for different clustering strategies: K-means, Phenograph (Levine et al., 2015), and SpatialSort (Lee et al., 2022).

Table 1: Reconstruction error and comparison between the different autoencoders.

Loss metric                  Model              Connections   Train       Test 1      Test 2
Reconstruction error (MSE)   Classic            Fully         1.31±0.04   2.13±0.06   2.01±0.05
                                                Sparse        1.67±0.09   2.48±0.11   2.37±0.10
                             Classic Spatial    Fully         1.48±0.04   2.58±0.07   2.38±0.07
                                                Sparse        1.84±0.08   2.99±0.14   2.76±0.12
                             Spatial Attention  Fully         1.44±0.05   2.28±0.06   2.17±0.06
                                                Sparse        1.84±0.10   2.65±0.11   2.55±0.11

3. Results
The results (Table 1) were smoothed with 50 iterations per model, and the sliced-Wasserstein distance was evaluated 50 times for each embedding. The values are presented as mean ± std. The autoencoder succeeds in capturing important features of the data: the original data is indeed well reconstructed from embeddings of dimension 16 (versus an original dimension of 43). The regularization using the Wasserstein distance enables the latent space to take the shape of a Gaussian distribution, which is beneficial for the clustering techniques (Figure 1E). The impact of sparse connections and spatial information is, however, harder to notice. The clustering results are hard to analyze, as there is no ground truth for novel phenotypes.

Figure 1: Latent representation. A) Illustration of selected channels from raw IMC as well as a high-power field and mask of a selected region. B) Design of sparse connections guided by biological rationale. C) Spatial attention mechanism using major histological regions as anchors. D) Illustration of ground truth histological ROIs. E) Clustering AMI criteria based on the different AEs. F) 2D projections of cells (sampled from the training dataset I) using normalised raw features (left) and AE embeddings with spatial information integrated (middle) as well as the attention mechanism (right).
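The Wasserstein regularization used above can be realized with the sliced-Wasserstein trick of Kolouri et al. (2018). Below is a minimal sketch in which the 16-dimensional latent codes are pulled toward a standard Gaussian; the regularization weight is a hypothetical placeholder, since the tuned values are not reported here.

```python
import torch

def sliced_wasserstein(z: torch.Tensor, prior: torch.Tensor, n_proj: int = 50) -> torch.Tensor:
    """Sliced-Wasserstein distance between latent codes and prior samples, both (batch, d).

    Random 1-D projections reduce the d-dimensional transport problem to sorting.
    """
    d = z.shape[1]
    theta = torch.randn(d, n_proj, device=z.device)
    theta = theta / theta.norm(dim=0, keepdim=True)   # unit projection directions
    pz, pp = z @ theta, prior @ theta                 # (batch, n_proj) projections
    pz, _ = torch.sort(pz, dim=0)
    pp, _ = torch.sort(pp, dim=0)
    return ((pz - pp) ** 2).mean()

def swae_loss(x, x_hat, z, reg_weight=10.0):
    """Reconstruction MSE plus a sliced-Wasserstein pull toward a Gaussian latent."""
    recon = torch.nn.functional.mse_loss(x_hat, x)
    prior = torch.randn_like(z)                       # N(0, I) target distribution
    return recon + reg_weight * sliced_wasserstein(z, prior)
```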
4. Conclusions
Our experiments show that the AE succeeds in capturing important features of the data; however, the impact of spatial information is harder to notice using AMI or the Wasserstein distance, even when selecting a specific region or other evaluation metrics. The influence of the model and the type of connections (sparse or full) can be clearly observed when looking at the overlay of cell phenotype labels on the AE embeddings (Figure 1F). The different cell populations are better defined with the AE embeddings (middle and right graphics). Within the overlapping cell phenotypes in the tumour (indicated with black oval shapes in Figure 1F), we can identify a cluster of cells containing both DCs and tumour cells, recently shown to highlight patients not likely to respond to immune therapies (Cohen et al., 2022; Oh et al., 2020).

Acknowledgments
Acknowledgments withheld.

References
David Ahmedt-Aristizabal, Mohammad Ali Armin, Simon Denman, Clinton Fookes, and Lars Petersson. A survey on graph-based deep learning for computational histopathology. Computerized Medical Imaging and Graphics, 95:102027, 2022.
Luca Alessandri, Francesca Cordero, Marco Beccuti, Nicola Licheri, Maddalena Arigoni, Martina Olivero, Maria Flavia Di Renzo, Anna Sapino, and Raffaele Calogero. Sparsely-connected autoencoder (SCA) for single cell RNAseq data mining. NPJ Systems Biology and Applications, 7(1):1, 2020.
Panagiotis Barmpoutis, Matthew Di Capite, Hamzeh Kayhanian, William Waddingham, Daniel C. Alexander, Marnix Jansen, and Francois Ng Kee Kwong. Tertiary lymphoid structures (TLS) identification and density assessment on H&E-stained digital slides of lung cancer. PLoS ONE, 16(9):e0256907, 2021.
Etienne Becht, Leland McInnes, John Healy, Charles-Antoine Dutertre, Immanuel W. H. Kwok, Lai Guan Ng, Florent Ginhoux, and Evan W. Newell. Dimensionality reduction for visualizing single-cell data using UMAP. Nature Biotechnology, 37(1):38-44, 2019.
Jordi Camps, Floriane Noël, Robin Liechti, Lucile Massenet-Regad, Sidwell Rigade, Lou Götz, Caroline Hoffmann, Elise Amblard, Melissa Saichi, Mahmoud M. Ibrahim, Jack Pollard, Jasna Medvedovic, Helge G. Roider, and Vassili Soumelis. Meta-analysis of human cancer single-cell RNA-seq datasets using the IMMUcan database. Cancer Research, 83(3):OF1-OF11, 2023.
Merav Cohen, Amir Giladi, Oren Barboy, Pauline Hamon, Baoguo Li, Mor Zada, Anna Gurevich-Shapiro, Cristian Gabriel Beccaria, Eyal David, Barbara B. Maier, Mark Buckup, Iris Kamer, Aleksandra Deczkowska, Jessica Le Berichel, Jair Bar, Matteo Iannacone, Amos Tanay, Miriam Merad, and Ido Amit. The interaction of CD4+ helper T cells with dendritic cells shapes the tumor microenvironment and immune checkpoint blockade response. Nature Cancer, 3(3):303-317, 2022.
Jiarui Ding, Anne Condon, and Sohrab P. Shah. Interpretable dimensionality reduction of single cell transcriptome data with deep generative models. Nature Communications, 9(1):2002, 2018.
Kangning Dong and Shihua Zhang. Deciphering spatial domains from spatially resolved transcriptomics with an adaptive graph attention auto-encoder. Nature Communications, 13(1):1739, 2022.
Huazhu Fu, Hang Xu, Kelvin Chong, Mengwei Li, Kok Siong Ang, Hong Kai Lee, Jingjing Ling, Ao Chen, Ling Shao, Longqi Liu, and Jinmiao Chen. Unsupervised spatially embedded deep representation of spatial transcriptomics. bioRxiv, page 2021.06.15.448542, 2021.
Jian Hu, Xiangjie Li, Kyle Coleman, Amelia Schroeder, Nan Ma, David J. Irwin, Edward B. Lee, Russell T. Shinohara, and Mingyao Li.
SpaGCN: Integrating gene expression, spatial location and histology to identify spatial domains and spatially variable genes by graph convolutional network. Nature Methods, 18(11):1342-1351, 2021.
Soheil Kolouri, Phillip E. Pope, Charles E. Martin, and Gustavo K. Rohde. Sliced-Wasserstein autoencoder: An embarrassingly simple generative model. arXiv, 2018.
Eric Lee, Kevin Chern, Michael Nissen, Xuehai Wang, IMAXT Consortium, Chris Huang, Anita K. Gandhi, Alexandre Bouchard-Côté, Andrew P. Weng, and Andrew Roth. SpatialSort: A Bayesian model for clustering and cell population annotation of spatial proteomics data. bioRxiv, page 2022.07.27.499974, 2022.
Jacob H. Levine, Erin F. Simonds, Sean C. Bendall, Kara L. Davis, El-ad D. Amir, Michelle D. Tadmor, Oren Litvin, Harris G. Fienberg, Astraea Jager, Eli R. Zunder, Rachel Finck, Amanda L. Gedman, Ina Radtke, James R. Downing, Dana Pe'er, and Garry P. Nolan. Data-driven phenotypic dissection of AML reveals progenitor-like cells that correlate with prognosis. Cell, 162(1):184-197, 2015.
Soyoung A. Oh, Dai-Chen Wu, Jeanne Cheung, Armando Navarro, Huizhong Xiong, Rafael Cubas, Klara Totpal, Henry Chiu, Yan Wu, Laetitia Comps-Agrar, Andrew M. Leader, Miriam Merad, Merone Roose-Germa, Soren Warming, Minhong Yan, Jeong M. Kim, Sascha Rutz, and Ira Mellman. PD-L1 expression by dendritic cells is a key regulator of T-cell immunity in cancer. Nature Cancer, 1(7):681-691, 2020.
newlahoISt1 | Medical Imaging with Deep Learning 2023, Short Paper - MIDL 2023 Submission

Learning Retinal Representations from Multi-modal Imaging via Contrastive Pre-training

Emese Sükei1 emese.suekei@meduniwien.ac.at
Elisabeth Rumetshofer2 rumetshofer@ml.jku.at
Niklas Schmidinger2 schmidinger@ml.jku.at
Ursula Schmidt-Erfurth1 ursula.schmidt-erfurth@meduniwien.ac.at
Günter Klambauer2 klambauer@ml.jku.at
Hrvoje Bogunović1 hrvoje.bogunovic@meduniwien.ac.at
1 OPTIMA Lab, Department of Ophthalmology, Medical University of Vienna, Austria
2 LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria

Editors: Accepted for Poster Presentation at MIDL 2023

Abstract
Contrastive representation learning techniques trained on large multi-modal datasets, such as CLIP and CLOOB, have demonstrated impressive capabilities of producing highly transferable representations for different downstream tasks. In the field of ophthalmology, large multi-modal datasets are conveniently accessible, as retinal imaging scanners acquire both 2D fundus images and 3D optical coherence tomography (OCT) to evaluate the disease. Motivated by this, we propose a CLIP/CLOOB objective-based model to learn joint representations of the two retinal imaging modalities. We evaluate our model's capability to accurately retrieve the appropriate OCT based on a fundus image belonging to the same eye. Furthermore, we showcase the transferability of the obtained representations by conducting linear probing and fine-tuning on several prediction tasks from OCT.
Keywords: contrastive learning, predictive modelling, multi-modal imaging, retina

1. Introduction
Self-supervised learning aims to learn representations without manual labelling, often through contrastive or reconstructive tasks, enabling efficient downstream task learning with fewer annotated labels. In medical imaging, learning a meaningful representation by jointly modelling different imaging modalities can facilitate disease progression modelling and personalised patient management. In retinal imaging, combining 2D fundus photography or near-infrared reflective imaging with 3D optical coherence tomography (OCT) is readily available and can provide complementary information about the retina's structure. However, existing multi-modal methods in ophthalmology are fusion-based and rely on supervised learning signals (Jin et al., 2022), while unsupervised multi-modal contrastive representation learning in this field remains largely under-explored.
To address this gap, we propose a multi-modal pre-training method based on contrastive objectives (CLIP (Radford et al., 2021) or CLOOB (Fürst et al., 2022)) to learn proficient OCT and fundus image encoders without the need for expert annotations. We show that our method can provide both a retrieval system and encoders to obtain comprehensive OCT and fundus image representations for several downstream tasks.

Figure 1: A. Contrastive pre-training of the encoders of the two retinal imaging modalities using CLIP/CLOOB. B. Using the pre-trained OCT volume encoder for downstream tasks.
2. Method
The proposed contrastive framework (Figure 1A) utilises the CLIP and CLOOB objectives, InfoNCE and InfoLOOB, respectively. It employs a ResNet18 with pre-trained ImageNet weights as the backbone image encoder for fundus images and a VideoResNet18 with pre-trained Kinetics (Kay et al., 2017) weights as the backbone volume encoder for OCT volumes. The dimension of the embedding space is set to d = 512, which determines the output size of both encoders. The hyper-parameters and training strategies suggested by OpenCLIP (Wortsman et al., 2022) and CLOOB are used. After contrastive pre-training, the fundus encoder is discarded, and only the volume encoder is used to extract descriptive feature representations for downstream tasks. This is achieved by adding a single fully-connected layer after the encoder (Figure 1B). To demonstrate the models' feature extractor capabilities, linear probing is performed by freezing the encoder weights and training only the last layer. Additionally, we fine-tuned the entire model for the downstream tasks.

3. Experiments & Results
Dataset and preprocessing: For pre-training the CLIP/CLOOB models, the study uses large-scale data from the OPTIMA Lab imaging datasets. We extracted 70,767 fundus photography and OCT volume pairs acquired from 2,987 patients with neovascular age-related macular degeneration (nAMD) using Spectralis, Cirrus, Nidek, or Topcon scanners. As an additional external dataset for the downstream tasks, we use data from the HARBOR trial, which contains OCT volumetric scans of 1098 patients undergoing treatment for nAMD, with corresponding clinical and treatment labels (Busbee et al., 2013). To allow large batch sizes, we downsize the fundus images and the OCT B-scans to 224x224. For each OCT volume, we then sample 20 B-scans randomly using a Gaussian probability distribution centred on the central B-scan. Finally, the images/volumes are normalised.
Contrastive pre-training: The pre-training dataset is divided into train-validation-test sets at a ratio of 80%-15%-5%, using 3,537 fundus image-OCT volume pairs as the hold-out set to evaluate the models' retrieval ability. In this set, CLIP ranked the correct OCT volume first in 10.51% of cases, while CLOOB ranked the correct OCT volume first in 11.36% of cases. Figure 2 provides a qualitative example of this. It is important to note that this task is close to impossible for human experts to perform accurately.

Figure 2: Example results for the retrieval task on the hold-out test set, using CLOOB, showing a query fundus image and the central B-scans of the top-10 ranked retrieved OCT volumes with their corresponding fundus images. Orange boxes indicate the matching fundus image - OCT volume pair.
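A minimal sketch of the CLIP-style pre-training objective and the retrieval ranking described above follows. This is an illustration, not the authors' code: the temperature value is a common default rather than a reported setting, and CLOOB's InfoLOOB objective and Hopfield retrieval are omitted.

```python
import torch
import torch.nn.functional as F

def clip_infonce(fundus_emb, oct_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired fundus/OCT embeddings, both (N, d).

    Matching pairs sit on the diagonal of the cosine-similarity matrix;
    every other sample in the batch acts as a negative.
    """
    f = F.normalize(fundus_emb, dim=-1)
    v = F.normalize(oct_emb, dim=-1)
    logits = f @ v.t() / temperature                 # (N, N) similarity matrix
    targets = torch.arange(f.shape[0], device=f.device)
    loss_f2v = F.cross_entropy(logits, targets)      # fundus -> OCT direction
    loss_v2f = F.cross_entropy(logits.t(), targets)  # OCT -> fundus direction
    return 0.5 * (loss_f2v + loss_v2f)

@torch.no_grad()
def retrieval_rank(query_fundus, oct_bank):
    """Rank all candidate OCT embeddings for fundus queries (Fig. 2-style retrieval)."""
    sims = F.normalize(query_fundus, dim=-1) @ F.normalize(oct_bank, dim=-1).t()
    return sims.argsort(dim=-1, descending=True)
```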
External downstream tasks: We define three downstream tasks on the external dataset, namely: central subfield thickness (CST) prediction, best corrected visual acuity (BCVA) prediction (Snellen equivalent of <20/60), and high treatment need (TN) forecasting. The first is a regression task, while the latter two are binary classification tasks. To evaluate the models' performance, we use a 5-fold cross-validation technique, where we split the external dataset into train-validation-test sets at a ratio of 80%-10%-10% per patient, stratified by the target outcome. Our preliminary results (Table 1) show a notable improvement using the CLIP/CLOOB pre-trained encoders over the Kinetics baseline across the different tasks.

Table 1: Results of linear probing and fine-tuning on the downstream tasks. The mean and standard deviation of the performance measures over the 5 folds are reported.

Outcome   Initialisation   Linear probing                      Fine-tuning
                           AUROC            AUPRC              AUROC            AUPRC
BCVA      Kinetics         0.788 (0.027)    0.717 (0.028)      0.854 (0.033)    0.784 (0.058)
          CLIP             0.847 (0.044)    0.775 (0.088)      0.866 (0.052)    0.812 (0.066)
          CLOOB            0.818 (0.048)    0.739 (0.103)      0.872 (0.029)    0.801 (0.053)
High TN   Kinetics         0.481 (0.232)    0.354 (0.194)      0.811 (0.011)    0.675 (0.021)
          CLIP             0.808 (0.047)    0.690 (0.075)      0.840 (0.020)    0.707 (0.051)
          CLOOB            0.788 (0.089)    0.606 (0.137)      0.868 (0.031)    0.763 (0.069)
                           RMSE [μm]        R-squared          RMSE [μm]        R-squared
CST       Kinetics         114.720 (37.496) 0.107 (0.098)      85.368 (12.068)  0.354 (0.185)
          CLIP             97.222 (25.108)  0.243 (0.098)      71.730 (12.764)  0.551 (0.205)
          CLOOB            102.344 (30.081) 0.175 (0.043)      76.683 (17.073)  0.535 (0.131)

Conclusions: Our initial findings suggest that contrastive pre-training with multi-modal retinal images yields transferable and meaningful OCT volume representations, which can be leveraged for other clinical tasks. We plan to conduct additional analysis on diverse datasets and downstream tasks to better evaluate the approach's potential and limitations.

Acknowledgments
This work received financial support from the FWF Austrian Science Fund (grant number FG 9-N).

References
Brandon G. Busbee, Allen C. Ho, David M. Brown, Jeffrey S. Heier, Ivan J. Suñer, Zhengrong Li, Roman G. Rubio, Phillip Lai, HARBOR Study Group, et al. Twelve-month efficacy and safety of 0.5 mg or 2.0 mg ranibizumab in patients with subfoveal neovascular age-related macular degeneration. Ophthalmology, 120(5):1046-1056, 2013.
Andreas Fürst, Elisabeth Rumetshofer, Johannes Lehner, Viet T. Tran, Fei Tang, Hubert Ramsauer, David Kreil, Michael Kopp, Günter Klambauer, Angela Bitto, and Sepp Hochreiter. CLOOB: Modern Hopfield networks with InfoLOOB outperform CLIP. In Advances in Neural Information Processing Systems, volume 35, pages 20450-20468. Curran Associates, Inc., 2022.
Kai Jin, Yan Yan, Menglu Chen, Jun Wang, Xiangji Pan, Xindi Liu, Mushui Liu, Lixia Lou, Yao Wang, and Juan Ye. Multimodal deep learning with feature level fusion for identification of choroidal neovascularization activity in age-related macular degeneration. Acta Ophthalmologica, 100(2):e512-e520, 2022.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763.
PMLR, 2021.
Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, Rebecca Roelofs, Raphael Gontijo Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, et al. Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7959-7971, 2022.
e0QulGVGCS | Medical Imaging with Deep Learning 2023, Short Paper - MIDL 2023 submission

Semi-supervised learning in perivascular space segmentation using MRI images

Yaqiong Chai1 yaqiongc@usc.edu
Hedong Zhang1 hedongzh@usc.edu
Gilsoon Park1 gp446@usc.edu
Erika Lopez1,3 erikalopez2800@gmail.com
Cong Zang1,2 congzang@usc.edu
Jongmok Ha1,4 jongmok3245@gmail.com
Omar Elhawary1,3 elhawary@usc.edu
Hosung Kim1* hosungki@usc.edu
1 Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, U.S.
2 Neuroscience Graduate Program, USC, Los Angeles, CA, U.S.
3 Dornsife College of Letters, Arts and Sciences, USC, Los Angeles, CA, U.S.
4 Department of Neurology, Samsung Medical Center, Seoul, South Korea
* Corresponding author

Abstract
Accurate segmentation of perivascular space (PVS) is essential for its quantitative analysis and clinical applications. Various segmentation methods have been proposed, but semi-supervised learning methods have never been attempted. Here, we propose a 3D multi-channel, multi-scale semi-supervised PVS segmentation (M2SS-PVS) network. We incorporated multi-scale image features in the encoder and applied a few strategies to mitigate the class imbalance issue. Our M2SS-PVS network segmented PVS with the highest accuracy and high sensitivity among all the tested supervised and semi-supervised methods.
Keywords: semi-supervised learning, perivascular space, MRI

1. Introduction
Perivascular spaces (PVSs) are extracellular spaces containing interstitial or cerebrospinal fluid surrounding the cerebral small penetrating arteries and veins (Jessen et al., 2015). PVS is visible on brain MRI scans of healthy individuals but is also considered a hallmark of aging and an early anomaly of neurodegenerative disease, as it becomes enlarged with aging, Parkinson's disease, multiple sclerosis, and small vessel disease (Zhu et al., 2010).
PVS burden is generally evaluated using semi-quantitative rating scales on T1- or T2-weighted MRI (Potter et al., 2015). Manual detection and delineation of PVS are currently the gold standard for neuro-radiological evaluation, but they are highly labor-intensive and subject to inter-rater bias. Early automated approaches were based on unsupervised methods including simple thresholding, edge detection, or morphological operations, but yielded poor results (Uchiyama et al., 2008). More recent studies have adopted supervised methods for PVS segmentation. For example, Lan et al. incorporated a convolutional neural network (CNN) into a weakly supervised learning framework and achieved an F-score of 0.76 within the regions of interest, which still leaves room for improvement (Lan et al., 2023). However, PVS segmentation is very challenging due to 1) the varying shape and size of PVS and 2) class imbalance issues. In addition, supervised deep learning generally requires a large number of training samples with labels.
Therefore, we propose a 3D multi-channel (T1, T2 MRI), multi-scale semi-supervised perivascular space segmentation (M2SS-PVS) network to leverage the information contained in a larger number of unlabeled images. To evaluate the performance of our proposed M2SS-PVS network, we computed three metrics (DSC, the absolute percentage volume difference (AVD), and recall) across M2SS-PVS, supervised multi-scale and mono-scale feature extraction, and semi-supervised learning with mono-scale feature extraction.
2. Materials and Method
Materials. In this work, we used T1 and T2 turbo spin echo images (both at a resolution of 0.8 mm³) from the Human Connectome Project in Aging (HCP-A) (Bookheimer et al., 2019). Our subjects were sampled considering an age range of 45 years or older (63.8 ± 14.1) and sex balance (18 F). Twelve out of 33 subjects were manually segmented for PVS by two medical students reviewing both T1 and T2 images, and the segmentations were evaluated by a neurologist.

Figure 1: The architecture of the proposed M2SS-PVS network. It consists of an encoder and a decoder network. A: The network follows an encoder-decoder structure. The supervised learning part is optimized by the focal loss. In the unsupervised part, the pseudo labels produced from T1 and T2 and the predictions produced from the augmented images are optimized by a consistency loss. B: We designed multi-scale feature extraction in the encoder to incorporate different levels of features in the successive layers.

Network Architecture. The framework of the proposed M2SS-PVS network is illustrated in Fig. 1. The supervised model is fed with labeled T1 and T2 images in two complementary channels, and the unsupervised model is fed with unlabeled T1 and T2 images as well as pairs augmented by applying standard Gaussian noise. The supervised loss is designed as $L_s = \frac{1}{N}\sum_{i=1}^{N} \mathrm{focal\_loss}(P_{label}, P_{pred})$, where $P_{pred} = \mathrm{Model}(x_{label})$. The focal loss (Lin et al., 2017) is designed to mitigate imbalanced class learning and optimizes the supervised model. A consistency loss (binary cross entropy) is calculated between the pseudo labels (on the original unlabeled images) and the PVS predicted on the augmented images to optimize the unsupervised model: $L_u = \frac{1}{N_u}\sum_{i=1}^{N_u} \mathbb{1}[P'_{pred} > \tau]\,\mathrm{BCE}(P'_{pseudo}, P'_{pred})$, where $P'_{pseudo} = \mathrm{Model}(x_{unlabel})$, $P'_{pred} = \mathrm{Model}(\mathrm{Augmentation}(x_{unlabel}))$, $\mathbb{1}$ is the indicator function, and $\tau$ is the confidence threshold ($\tau = 0.8$) used to filter pseudo labels.
To robustly capture the varying size and shape of PVS and the various levels of anatomic detail in the surrounding brain tissues, we integrate multi-scale feature extraction into the encoder network of the proposed M2SS-PVS. As shown in Figure 1B, the first level of feature extraction consists of a convolutional and a pooling operation, and the second level reverses the order of the operations. In this manner, these multi-scale features are then fed into the corresponding encoder level. We design four multi-scale features in descending order to be fed into the encoder.
Implementations and experiments. The proposed networks were implemented in Python using the PyTorch deep learning library. The models were trained on a Tesla V100 GPU. We randomly split the labeled subjects into training, validation, and testing sets (8:1:1) for 5-fold validation. Each subject was divided into mini patches of size 32x32x32 voxels (stride = 16), which yielded 1540 patches per subject. In the supervised learning part, the model was fed only with labeled T1 and T2 images, while in the semi-supervised learning part, M2SS-PVS was trained using both labeled and unlabeled images simultaneously. To alleviate class imbalance, we only chose patches that include PVS voxels for a warm-up training (epochs = 20).
Evaluation. We compared the results of M2SS-PVS with supervised methods using the same encoder-decoder architecture 1) with and 2) without multi-scale feature extraction, and 3) a semi-supervised method without multi-scale feature extraction.
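The two losses above can be sketched as follows. This is a minimal illustration, not the authors' code: the focal-loss parameters γ and α are common defaults rather than reported settings, and whether gradients flow through the pseudo labels is an assumption made explicit by the `no_grad` block.

```python
import torch
import torch.nn.functional as F

def supervised_focal_loss(logits, target, gamma=2.0, alpha=0.25):
    """Binary focal loss (Lin et al., 2017) for the labeled branch."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = p * target + (1 - p) * (1 - target)
    a_t = alpha * target + (1 - alpha) * (1 - target)
    return (a_t * (1 - p_t) ** gamma * ce).mean()

def consistency_loss(model, x_unlabeled, tau=0.8):
    """Confidence-masked BCE between pseudo labels and predictions on noised inputs."""
    with torch.no_grad():
        pseudo = torch.sigmoid(model(x_unlabeled))        # pseudo labels on clean input
    noised = x_unlabeled + torch.randn_like(x_unlabeled)  # standard Gaussian augmentation
    pred = torch.sigmoid(model(noised))
    mask = (pred > tau).float()                           # keep confident voxels only
    bce = F.binary_cross_entropy(pred, pseudo, reduction="none")
    return (mask * bce).sum() / mask.sum().clamp(min=1.0)
```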
3. Results and conclusions
The results are summarized in Table 1. Our proposed network achieved the highest DSC and the lowest AVD compared to the other three networks. The mono-scale semi-supervised model yielded the highest recall but the 2nd lowest DSC with the largest variance, suggesting its sensitivity to PVS but inaccurate and inconsistent segmentation across individual PVSs. On the other hand, M2SS-PVS segmented PVS with the highest accuracy, highest consistency (lowest variance), and high sensitivity.

Table 1: The performance of the networks in Dice similarity coefficient (DSC), absolute percentage volume difference (AVD), and recall (mean ± standard deviation).

Method                         DSC         AVD (%)   Recall
Supervised (mono-scale)        0.50±0.15   55±11     0.70±0.19
Supervised (multi-scale)       0.56±0.44   51±83     0.54±0.44
Semi-supervised (mono-scale)   0.51±0.75   43±67     0.79±0.16
M2SS-PVS (multi-scale)         0.65±0.12   34±95     0.75±0.07

Our results demonstrate that the proposed method can improve PVS segmentation. We did not compare our network with other semi-supervised learning methods; yet, it is worth noting that our study was the first to employ a semi-supervised learning framework with modifications to address the challenges involved in the PVS segmentation task.

Acknowledgments
This work is supported by the Alzheimer Disease Research Center (ADRC) at the University of Southern California. Data were provided by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil; 1U54MH091657), funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research, and by the McDonnell Center for Systems Neuroscience at Washington University.

References
Susan Y. Bookheimer, David H. Salat, Melissa Terpstra, Beau M. Ances, Deanna M. Barch, Randy L. Buckner, Gregory C. Burgess, Sandra W. Curtiss, Mirella Diaz-Santos, Jennifer Stine Elam, et al. The Lifespan Human Connectome Project in Aging: an overview. NeuroImage, 185:335-348, 2019.
Nadia Aalling Jessen, Anne Sofie Finmann Munk, Iben Lundgaard, and Maiken Nedergaard. The glymphatic system: a beginner's guide. 40:2583-2599, 2015.
Haoyu Lan, Kirsten M. Lynch, Rachel Custer, Nien-Chu Shih, Patrick Sherlock, Arthur W. Toga, Farshid Sepehrband, and Jeiran Choupan. Weakly supervised perivascular spaces segmentation with salient guidance of Frangi filter. Magnetic Resonance in Medicine, 2023.
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2017.
Gillian M. Potter, Francesca M. Chappell, Zoe Morris, and Joanna M. Wardlaw. Cerebral perivascular spaces visible on magnetic resonance imaging: development of a qualitative rating scale and its observer reliability. Cerebrovascular Diseases, 39(3-4):224-231, 2015.
Yoshikazu Uchiyama, Takuya Kunieda, Takahiko Asano, Hiroki Kato, Takeshi Hara, Masayuki Kanematsu, Toru Iwama, Hiroaki Hoshi, Yasutomi Kinosada, and Hiroshi Fujita. Computer-aided diagnosis scheme for classification of lacunar infarcts and enlarged Virchow-Robin spaces in brain MR images. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3908-3911. IEEE, 2008.
Yi-Cheng Zhu, Christophe Tzourio, Aïcha Soumaré, Bernard Mazoyer, Carole Dufouil, and Hugues Chabriat. Severity of dilated Virchow-Robin spaces is associated with age, blood pressure, and MRI markers of small vessel disease: a population-based study. Stroke, 41(11):2483-2490, 2010.
CU82z90ppTS | Medical Imaging with Deep Learning - Under Review 2023, Short Paper - MIDL 2023 submission

Uncovering Structural-Functional Coupling Alterations for Alzheimer's Diseases

Tingting Dan1 Tingting_Dan@med.unc.edu
1 Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
Guorong Wu1,2 grwu@med.unc.edu
2 Department of Computer Science, University of North Carolina at Chapel Hill, NC, USA

Editors: Under Review for MIDL 2023

Abstract
A confluence of neuroscience and clinical studies suggests that disrupted structural connectivity (SC) and functional connectivity (FC) in the brain is an early sign of neurodegenerative diseases. However, current methods lack the neuroscience foundation to understand how these altered coupling mechanisms contribute to cognitive decline. To address this issue, we spotlight a neural oscillation model that characterizes the behavior of neural oscillators coupled via nerve fibers throughout the brain. We tailor a physics-guided graph neural network (GNN) that can predict self-organized functional fluctuations and generate a novel biomarker for early detection of neurodegeneration through altered SC-FC coupling. Our method outperforms conventional coupling methods, providing higher accuracy and revealing the mechanistic role of coupling alterations in disease progression. We evaluate the biomarker using the ADNI dataset for Alzheimer's disease diagnosis.
Keywords: Brain structure-functional coupling, Imaging biomarkers, Alzheimer's diseases.

1. Introduction
The human brain is a complex system with spontaneous functional fluctuations (Bassett and Sporns, 2017) that can be affected by both normal aging and neuropathological events. Understanding the relationship between structural connectivity (SC) and functional connectivity (FC) (Badhwar et al., 2017) is crucial for identifying effective interventions for diseases such as Alzheimer's (Cummings et al., 2007). Current research examines the statistical association between SC and FC using various approaches (Gu et al., 2021; Park et al., 2008), but lacks a comprehensive understanding of the system-level coupling mechanisms that underlie the emergence of brain functions. To address this, we propose a new approach that leverages established biophysics models to uncover SC-FC coupling mechanisms and generate biomarkers with greater neuroscience insight.
In this regard, we conceptualize the human brain as a complex system where each region is associated with a neural population that exhibits frequency-specific oscillations. Inspired by the success of the Kuramoto model (Kuramoto, 1984) in modeling coupled synchronization in complex systems, we describe the physical coupling of these oscillatory units via nerve fibers observed in diffusion-weighted MRI images. The resulting phase oscillation process on top of the SC topology generates self-organized fluctuation patterns in the blood-oxygen-level-dependent (BOLD) signal (Fig. 1, top). We propose a novel graph neural network (GNN) to learn the dynamics of SC-FC coupling from human connectome data and provide insight into the evolving relationship between SC and FC through phase oscillations. Additionally, we propose to use the learned system dynamics to yield new SC-FC coupling biomarkers (Fig. 1, bottom). We evaluate these biomarkers using the ADNI dataset (Petersen et al., 2010) and find promising results for recognizing early signs of neurodegeneration, demonstrating potential for future network neuroscience studies.
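The phase information underlying these oscillations is obtained from BOLD time courses with the Hilbert transform, as used in Section 2 below. A minimal sketch of that extraction step, assuming band-limited regional BOLD signals:

```python
import numpy as np
from scipy.signal import hilbert

def bold_phase(x: np.ndarray) -> np.ndarray:
    """Instantaneous phase of regional BOLD time courses via the Hilbert transform.

    x: (N, T) array of band-limited BOLD signals, one row per brain region.
    Returns the analytic phase p_i(t) in radians, shape (N, T).
    """
    analytic = hilbert(x, axis=-1)   # analytic signal x + i*H(x)
    return np.angle(analytic)
```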
We evaluate these biomarkers using the ADNI dataset (Petersen et al., 2010) and find promising results for recognizing early signs of neurodegeneration, demonstrating potential for future network neuroscience studies.

Figure 1: The spatio-temporal learning framework of our proposed deep Kuramoto model.

2. Method and Experiment
We model the human brain network $G = (\Xi, W)$ as a complex system with $N$ brain regions $\Xi = \{\xi_i \mid i = 1, ..., N\}$ connected by neuronal fibers (i.e., SC) $W = [w_{ij}] \in \mathbb{R}^{N \times N}$, where the neural oscillation status of each region is determined by an intrinsic variable of brain rhythm $v_i(t)$. The synchronized oscillations of multiple brain oscillators give rise to self-organized patterns of functional fluctuations. To test this hypothesis, we propose a deep model that reproduces the topology of the traditional FC matrix $Q = [q_{ij}]_{i,j=1}^{N} \in \mathbb{R}^{N \times N}$, which is obtained from the BOLD signal $x_i(t)$ ($i = 1, ..., N$; $t = 1, ..., T$) of the $N$ regions, by using the phase information of neural oscillations. We use the proposed deep Kuramoto model to constrain the synchronization of coupled oscillators.

Deep Kuramoto Model for SC-FC Coupling Mechanism. We first propose a general formulation to model a nonlinear dynamical system as:

$$\frac{dv_i}{dt} = f\big(v_i, \mathcal{H}(x_i)\big) + \sum_{j \neq i}^{N} w_{ij}\, c(v_i, v_j) \qquad (1)$$

where the system dynamics is determined by the state variable of brain rhythm $v_i$ on each node. Compared to the classic Kuramoto model (Breakspear et al., 2010), we estimate the natural frequency $\omega_i$ through a non-linear function $f(\cdot)$, which depends on the current state variable $v_i$ and the neural activity proxy $x_i$. We use the Hilbert transform ($\mathcal{H}(\cdot)$) to extract the phase and amplitude information from BOLD signals (Chang and Glover, 2010; Mitra et al., 2015), which has been widely used in functional neuroimaging research. We formulate the frequency function as $f(v_i, p_i)$, where $p_i = \mathcal{H}(x_i)$ represents the phase information of time course $x_i$. We then introduce a coupling physics function $c(\cdot, \cdot)$ to model the relationship between two state variables $v_i$ and $v_j$, with their coupling strength determined by the structural connectivity $w_{ij}$. The overview of our deep Kuramoto model is shown in Fig. 1. Our input consists of time-invariant coupling information from the SC matrix (top-right) and time-evolving phase information at each node $p_i(t)$ (top-left). In the blue box, our physics-guided deep Kuramoto model captures the dynamics of neural oscillations in a spatio-temporal learning scenario. At each time point $t$, a fully-connected network (FCN) and a GNN (Kipf and Welling, 2016) predict the first and second terms in Eq. 1 for the current state $v_i$ at each node $\xi_i$.

Novel SC-FC Coupling Biomarkers. The valuable by-product of our deep Kuramoto model of neural oscillation is a system-level explanation of how the neuro-system dynamics is associated with phenotypes such as clinical outcomes. In doing so, we introduce the Kuramoto order parameter $\varphi_t$ to quantify the synchronization level at time $t$ as $\varphi_t = \frac{1}{N}\,\mathrm{real}\{\sum_{i=1}^{N} e^{i v_i(t)}\}$, where $\mathrm{real}(\cdot)$ denotes the real part of the complex number. In the complex systems literature, $\varphi$ is described as the synchronization level, a.k.a. the metastability of the system (Pluchino and Rapisarda, 2006), transiting from complete chaos ($\varphi_t = 0$) to full synchronization ($\varphi_t = 1$).
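The order parameter above is straightforward to evaluate from phase trajectories. The following minimal NumPy sketch (our illustration, not the authors' implementation) computes $\varphi_t$ for a toy system drifting from chaos toward synchronization:

```python
# Minimal sketch: Kuramoto order parameter phi_t, i.e. the real part of the
# mean complex exponential of the N node phases at each time point.
import numpy as np

def order_parameter(v):
    """v: (T, N) array of phases. Returns (T,) array of phi_t."""
    return np.real(np.exp(1j * v).mean(axis=1))

if __name__ == "__main__":
    T, N = 140, 90
    s = np.linspace(0.0, 1.0, T)[:, None]                     # 0 = chaos, 1 = sync
    v = (1.0 - s) * np.random.default_rng(0).uniform(0, 2 * np.pi, (T, N))
    Phi = order_parameter(v)                                  # biomarker (phi_t0, ..., phi_tT)
    print(Phi[0], Phi[-1])                                    # ~0 at first, ~1 at the end
```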
In this context, we propose a novel SC-FC coupling biomarker $\Phi = (\varphi_{t_0}, \varphi_{t_1}, ..., \varphi_{t_T})$ (bottom right corner in Fig. 1), which records the evolution of system metastability underlying the neural activity. SC-FC-META uses $\Phi$ to conduct the downstream classification tasks, while SC-FC-Net is an end-to-end $\Phi$-training based deep model. The experimental results are shown in Fig. 2; we mainly validate the neuroscience insight (identifying brains at risk of AD) based on the new SC-FC coupling biomarkers and obtain decent results. This approach holds great promise for other neuroimaging applications.

Figure 2: (a) The metastability transition count between CN and AD (*p < 0.01). (b) Snapshots of node phase visualizations at the chaos and synchronization stages. (c) Global dynamics (order parameter φ) in coupling parameter space. (d) The classification performance (AD vs. CN) of a shallow approach (SVM, blue) and our SC-FC-Net (green) using our new learning-based SC-FC biomarker. Acc: accuracy, Sen: sensitivity, Spe: specificity, F1: F1-score. (e) The accuracies of diagnosing AD for four methods: SC-FC-Net, LTCNet (Hasani et al., 2021), GCN (Kipf and Welling, 2016), and RNN (Medsker and Jain, 2001).

References
AmanPreet Badhwar, Angela Tam, Christian Dansereau, Pierre Orban, Felix Hoffstaedter, and Pierre Bellec. Resting-state network dysfunction in alzheimer's disease: a systematic review and meta-analysis. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 8:73–85, 2017.
Danielle S Bassett and Olaf Sporns. Network neuroscience. Nature Neuroscience, 20(3):353–364, 2017.
Michael Breakspear, Stewart Heitmann, and Andreas Daffertshofer. Generative models of cortical oscillations: neurobiological implications of the kuramoto model. Frontiers in Human Neuroscience, 4:190, 2010.
Catie Chang and Gary H Glover. Time–frequency dynamics of resting-state brain connectivity measured with fmri. NeuroImage, 50(1):81–98, 2010.
Jeffrey L Cummings, Rachelle Doody, and Christopher Clark. Disease-modifying therapies for alzheimer disease: challenges to early intervention. Neurology, 69(16):1622–1634, 2007.
Zijin Gu, Keith Wakefield Jamison, Mert Rory Sabuncu, and Amy Kuceyeski. Heritability and interindividual variability of regional structure-function coupling. Nature Communications, 12(1):4894, 2021.
Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu Grosu. Liquid time-constant networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7657–7666, 2021.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
Yoshiki Kuramoto and Yoshiki Kuramoto. Chemical turbulence. Springer, 1984.
Larry R Medsker and LC Jain. Recurrent neural networks. Design and Applications, 5:64–67, 2001.
Anish Mitra, Abraham Z Snyder, Tyler Blazey, and Marcus E Raichle. Lag threads organize the brain's intrinsic activity. Proceedings of the National Academy of Sciences, 112(16):E2235–E2244, 2015.
Chang-hyun Park, Soo Yong Kim, Yun-Hee Kim, and Kyungsik Kim. Comparison of the small-world topology between anatomical and functional connectivity in the human brain. Physica A: Statistical Mechanics and its Applications, 387(23):5958–5962, 2008.
R. C. Petersen, P. S. Aisen, L. A. Beckett, M. C. Donohue, A. C. Gamst, D. J. Harvey, C. R. Jack, W. J. Jagust, L. M. Shaw, A. W. Toga, J. Q. Trojanowski, and M. W. Weiner. Alzheimer's disease neuroimaging initiative (adni). Neurology, 74(3):201–209, 2010.
Alessandro Pluchino and Andrea Rapisarda. Metastability in the hamiltonian mean field model and kuramoto model. Physica A: Statistical Mechanics and its Applications, 365(1):184–189, 2006. |
hr1QLA_ykVp | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Zero-shot CT Field-of-view Completion with Unconditional Generative Diffusion Prior

Kaiwen Xu1 kaiwen.xu@vanderbilt.edu
Aravind R. Krishnan1 aravind.r.krishnan@vanderbilt.edu
Thomas Z. Li1 thomas.z.li@vanderbilt.edu
Yuankai Huo1 yuankai.huo@vanderbilt.edu
Kim L. Sandler2 kim.sandler@vumc.org
Fabien Maldonado2 fabien.maldonado@vumc.org
Bennett A. Landman1,2 bennett.landman@vanderbilt.edu
1 Vanderbilt University, 2301 Vanderbilt Place, Nashville, 37235, United States
2 Vanderbilt University Medical Center, 1211 Medical Center Drive, Nashville, 37232, United States
Editors: Under Review for MIDL 2023

Abstract
Anatomically consistent field-of-view (FOV) completion to recover truncated body sections has important applications in quantitative analyses of computed tomography (CT) with limited FOV. The existing solution based on conditional generative models relies on the fidelity of synthetic truncation patterns at the training phase, which limits the generalizability of the method to potentially unknown types of truncation. In this study, we evaluate a zero-shot method based on a pretrained unconditional generative diffusion prior, where truncation patterns of arbitrary form can be specified at the inference phase. In an evaluation on simulated chest CT slices with synthetic FOV truncation, the method is capable of recovering anatomically consistent body sections and correcting the subcutaneous adipose tissue measurement error caused by FOV truncation. However, the correction accuracy is inferior to the conditionally trained counterpart.
Keywords: Field-of-view extension, Denoising diffusion implicit model, Zero-shot learning, Computed tomography

1. Introduction
Quantitative analysis of medical images is less effective when body sections of interest are partially truncated by a limited imaging field-of-view (FOV). This is especially an issue for opportunistic assessment of body composition using routine chest computed tomography (CT) (Troschel et al., 2020; Xu et al., 2021; Luo et al., 2021; Xu et al., 2022a). A previous study achieved anatomically consistent FOV extension of chest CT by training a generative model conditioned on synthetic truncation patterns (Xu et al., 2022b). However, the effectiveness of this approach heavily relies on the fidelity of the simulated truncation patterns, which makes it difficult to generalize to applications with truncation patterns that are not considered in the simulation. Recent studies have demonstrated the possibility of zero-shot sampling of semantically plausible images conditioned on partially corrupted image data using an unconditionally trained generative diffusion prior (Lugmayr et al., 2022; Fei et al., 2023). The conditioning information is only needed at inference, making it extremely flexible for applications where corruption patterns are difficult to predict.
In this pilot study, we developed RePaint-DDIM, a variant of the RePaint framework proposed by Lugmayr et al. (2022) modified for the reverse sampling scheme of denoising diffusion implicit models (DDIM) (Song et al., 2021), and evaluated the method on the task of FOV completion of routine lung screening low-dose CT.

2. Method
A generative diffusion model is trained to reverse a forward Markovian diffusion process that progressively turns an image into noise.
Once developed, the model is capable of sampling a realistic image from random noise following a recurrent procedure. Compared to the original sampling scheme of the denoising diffusion probabilistic model (DDPM) (Ho et al., 2020), the DDIM reverse sampling scheme turns the generative process into a deterministic process and reduces the required sampling steps with a shortened sampling trajectory (Song et al., 2021).
To condition the reverse sampling process on known image regions, RePaint (Lugmayr et al., 2022) alters the process by iteratively replacing known regions of the intermediate reverse-sampled image with the forward-sampled image at each step of the DDPM denoising process. To address the disharmony of the generated parts of the image, an additional resampling process is introduced by iterating forward diffusion, known-region replacing, and denoising between adjacent sampling steps. In this study, we modified the RePaint algorithm for the DDIM sampling process to take advantage of the shortened inference time. In the resampling steps, instead of iterating between adjacent denoising steps, the modified algorithm iterates between the predicted fully denoised image ($x_0$) and the intermediate sampled noisy image ($x_t$) in each DDIM sampling step. We call this modified version RePaint-DDIM. An overview of the sampling workflow is demonstrated in Figure 1. Detailed steps are provided in Algorithm 1.

Figure 1: Overview of the mechanism for conditional field-of-view completion based on an unconditionally pre-trained generative diffusion prior.

Algorithm 1: RePaint-DDIM sampling algorithm for CT field-of-view completion.
Input: $\tilde{x}_0$, CT slice with FOV truncation; $m$, FOV region mask; $\alpha_0{:}\alpha_T \in (0,1]$, diffusion schedule; $\varepsilon_\theta$, pretrained denoising model.
Output: $x_0$, CT slice with completed FOV
$x_T \leftarrow \mathcal{N}(0, I)$
for $t \leftarrow T$ to $1$ do
    for $u \leftarrow 1$ to $U$ do
        $\hat{x}_0 \leftarrow \frac{1}{\sqrt{\alpha_t}}\big(x_t - \sqrt{1-\alpha_t}\,\varepsilon_\theta(x_t; t)\big)$
        $\hat{x}_0 \leftarrow m \odot \tilde{x}_0 + (1-m) \odot \hat{x}_0$
        if $u < U$ then
            $\varepsilon \leftarrow \mathcal{N}(0, I)$
            $x_t \leftarrow \sqrt{\alpha_t}\,\hat{x}_0 + \sqrt{1-\alpha_t}\,\varepsilon$
        end
    end
    $x_{t-1} \leftarrow \sqrt{\alpha_{t-1}}\,\hat{x}_0 + \sqrt{1-\alpha_{t-1}}\,\varepsilon_\theta(x_t; t)$
end
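For illustration, Algorithm 1 can be sketched in a few lines of PyTorch. In the sketch below, `eps_theta` is a placeholder for the pretrained denoising network and the schedule values are toy choices; neither reflects the trained model or the settings reported in Section 3.

```python
# Hedged sketch of the RePaint-DDIM loop in Algorithm 1 (illustrative only).
import torch

def repaint_ddim(x_known, mask, alpha, eps_theta, n_resample=20):
    """x_known: (B,1,H,W) truncated slice; mask: 1 inside the known FOV;
    alpha: (T+1,) schedule with alpha[0] = 1; returns the completed slice."""
    T = len(alpha) - 1
    x_t = torch.randn_like(x_known)                            # x_T ~ N(0, I)
    for t in range(T, 0, -1):
        a_t, a_prev = alpha[t], alpha[t - 1]
        for u in range(1, n_resample + 1):
            # predict the fully denoised image x0_hat from x_t (DDIM step)
            x0_hat = (x_t - (1 - a_t).sqrt() * eps_theta(x_t, t)) / a_t.sqrt()
            # overwrite the known FOV region with the measured data
            x0_hat = mask * x_known + (1 - mask) * x0_hat
            if u < n_resample:                                 # re-noise, then repeat
                x_t = a_t.sqrt() * x0_hat + (1 - a_t).sqrt() * torch.randn_like(x_t)
        # deterministic DDIM transition to the next (less noisy) step
        x_t = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_theta(x_t, t)
    return x_t

if __name__ == "__main__":
    x = torch.zeros(1, 1, 64, 64)
    m = torch.zeros_like(x); m[..., 16:48] = 1.0               # toy FOV mask
    alpha = torch.linspace(1.0, 0.01, 51)                      # toy schedule, alpha[0]=1
    dummy_eps = lambda x_t, t: torch.zeros_like(x_t)           # stand-in network
    print(repaint_ddim(x, m, alpha, dummy_eps, n_resample=2).shape)
```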
3. Experiment and Discussion
We pretrained an unconditional DDPM using 71,319 lung cancer screening low-dose CT slices with the complete body in the FOV. Details of the collection of this dataset are provided in (Xu et al., 2022b). Slices were resized to 256×256 and clipped to the HU range [−1000, 600]. The model was trained with diffusion steps T = 1000, a linear beta scheduler, and a batch size of 24, for 30,000 iterations. At inference, we use 50 denoising steps and 20 resampling steps for each denoising step.
We evaluated RePaint-DDIM on 2,657 simulated FOV-truncation slices generated from 145 withheld slices with the complete body in the FOV. The anatomical consistency of the synthetic body sections was quantitatively evaluated by the agreement of subcutaneous adipose tissue (SAT) measured on reconstructed slices with the same measurement on the untruncated version, following the method used in (Xu et al., 2022b). We compared RePaint-DDIM with a conditionally trained model as developed in (Xu et al., 2022b) (termed S-EFOV). The results are provided in Figure 2. The method is capable of restoring anatomically consistent body sections in the truncated region and correcting the measurement error of SAT. However, the correction accuracy is inferior to the conditionally trained counterpart.

Figure 2: Evaluation of the anatomical consistency of the field-of-view (FOV) completion results: (A) with FOV truncation, (B) corrected by RePaint-DDIM, (C) corrected by S-EFOV. The tissue truncation index (TCI) reflects the severity of synthetic FOV truncation.

References
Ben Fei, Zhaoyang Lyu, Liang Pan, Junzhe Zhang, Weidong Yang, Tianyue Luo, Bo Zhang, and Bo Dai. Generative diffusion prior for unified image restoration and enhancement. 4 2023. URL http://arxiv.org/abs/2304.01247.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11461–11471, 2022.
Can Luo, James G Terry, Yucheng Tang, Kaiwen Xu, Pierre Massion, Bennett Landman, J Jeffery Carr, and Yuankai Huo. Measure partial liver volumetric variations from paired inspiratory-expiratory chest ct scans. In Medical Imaging 2021: Image Processing, volume 11596, pages 873–880. SPIE, 2021.
Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=St1giarCHLP.
Amelie S. Troschel, Fabian M. Troschel, Till D. Best, Henning A. Gaissert, Martin Torriani, Ashok Muniappan, Emily E. Van Seventer, Ryan D. Nipp, Eric J. Roeland, Jennifer S. Temel, and Florian J. Fintelmann. Computed tomography-based body composition analysis and its role in lung cancer care. Journal of Thoracic Imaging, 35:91–100, 2020. ISSN 15360237. doi: 10.1097/RTI.0000000000000428.
Kaiwen Xu, Riqiang Gao, Mirza S Khan, Shunxing Bao, Yucheng Tang, Steve A Deppen, Yuankai Huo, Kim L Sandler, Pierre P Massion, Mattias P Heinrich, et al. Development and characterization of a chest ct atlas. In Proceedings of SPIE–the International Society for Optical Engineering, volume 2021. NIH Public Access, 2021.
Kaiwen Xu, Riqiang Gao, Yucheng Tang, Steve A Deppen, Kim L Sandler, Michael N Kammer, Sanja L Antic, Fabien Maldonado, Yuankai Huo, Mirza S Khan, et al. Extending the value of routine lung screening ct with quantitative body composition assessment. In Medical Imaging 2022: Image Processing, volume 12032, pages 430–437. SPIE, 2022a.
Kaiwen Xu, Thomas Li, Mirza S Khan, Riqiang Gao, Sanja L Antic, Yuankai Huo, Kim L Sandler, Fabien Maldonado, and Bennett A Landman. Body composition assessment with limited field-of-view computed tomography: A semantic image extension perspective. 2022b. URL https://arxiv.org/abs/2207.06551. |
upQogJXuhQ | Medical Imaging with Deep Learning 2023

Active learning for medical image segmentation with stochastic batches

Mélanie Gaillochet1 melanie.gaillochet.1@ens.etsmtl.ca
Christian Desrosiers1 christian.desrosiers@etsmtl.ca
Hervé Lombaert1 herve.lombaert@etsmtl.ca
1 ETS Montréal

Abstract
Active learning (AL) selects informative samples for annotation. This is becoming increasingly crucial to medical image segmentation, since image annotation is hardly scalable to full pixel-level labeling of large datasets. However, most research focuses on classification or natural image segmentation. Uncertainty-based AL methods tend to have sub-optimal batch-query strategies, and diversity-based methods are computationally expensive. This work improves uncertainty-based AL for medical image segmentation by using stochastic batches during sampling, computing uncertainty at the batch level. Experiments on MRI prostate imaging show this approach's effectiveness and robustness under various conditions.
Keywords: Active learning, Segmentation, Medical image analysis, Uncertainty.

1. Introduction
The performance of deep learning-based segmentation algorithms relies on annotated training data. However, manual annotation is laborious and costly, particularly in the medical domain. To address this constraint, active learning (AL) (Settles, 2009) identifies the most valuable samples to annotate, maximizing model performance with minimal labelled data. AL methods applied to deep learning-based models (deep AL) include uncertainty-based, representative-based, and hybrid approaches. Uncertainty-based strategies (Wang et al., 2017; Beluch et al., 2018; Kirsch et al., 2019; Yoo and Kweon, 2019) target samples for which the current model is least confident, but may focus on outliers. Representative-based strategies (Sener and Savarese, 2018; Sinha et al., 2019) and hybrid AL approaches (Ash et al., 2020; Kim et al., 2021; Nath et al., 2021) ensure diversity but struggle with high dimensionality, often limiting their application to classification tasks. Furthermore, existing work on pixel-wise annotations primarily targets natural image segmentation. To our knowledge, few deep AL strategies have explored medical image segmentation, and they are often computationally expensive and difficult to scale (Yang et al., 2017; Nath et al., 2021). There hence remains a gap between active learning and medical image segmentation.
An increasing number of AL studies recognize the difficulty of outperforming random sampling (Mittal et al., 2019). Gains from AL strategies over random sampling are often inconsistent across various experimental setups. In addition, the sensitivity of existing AL methods to factors such as training hyperparameters or regularization often means that observed improvements can be easily voided (Munjal et al., 2022). These challenges increase the difficulty of applying AL to medical image segmentation.
This paper tackles AL limitations by incorporating randomness into uncertainty-based batch sampling to improve medical image segmentation. We propose to use stochastic batch (SB) querying alongside existing uncertainty-based AL strategies (see Figure 1). Our novel approach provides several benefits:
1. a sampling strategy that can tackle medical image segmentation;
2. a flexible framework compatible with any uncertainty-based AL strategy; and
3. a scalable method ensuring diverse sample selection.

Figure 1: Stochastic batch AL for uncertainty-based sampling. The diversity from random sampling is combined with the informativeness of uncertainty-based sampling.

2. Method
We initially train a segmentation model $f_\theta(\cdot)$ on a randomly sampled labeled set $D_L = \{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$. After the first training cycle, we select $B$ candidate samples from the unlabeled set $D_U = \{x_u^{(j)}\}_{j=1}^{M}$, which we annotate and add to the labeled training set $D_L$. This process is repeated until the annotation budget is exhausted.
Our AL method uses stochastic batches in two stages for guided sampling diversity. First, we generate $Q$ batches, each with $B$ samples selected uniformly at random from $D_U$:

$$Batch^{(i)} = \{x_u^{(i_1)}, x_u^{(i_2)}, ..., x_u^{(i_B)}\} \sim \mathrm{Uniform}(D_U, B).$$

Then, for each generated batch, we assign an uncertainty score to each unlabeled sample it contains, according to the current model $f_{\hat\theta}$ and the chosen uncertainty metric ($m_{unc}$), and we compute the mean score for the batch:

$$\forall k = 1, ..., B: \quad u_{score}^{x_u^{(i_k)}} = m_{unc}\big(f_{\hat\theta}, x_u^{(i_k)}\big), \qquad u_{score}^{Batch^{(i)}} = \frac{1}{B}\sum_{k=1}^{B} u_{score}^{x_u^{(i_k)}}.$$

The batch with the highest mean score yields the annotation candidates.

3. Results
We validate our method on the PROMISE 2012 dataset (Litjens et al., 2014), with 248 test images from 10 patients. Initially, $|D_L| = 10$ and $|D_U| = 1010$. After each AL cycle, $B = 10$ new samples are selected via AL, annotated, and added to $D_L$. The model, a UNet (Ronneberger et al., 2015), is then retrained from scratch. Each experiment is repeated 5 times. We set $Q = |D_U|/B$ and compare our stochastic batch strategy with random sampling (RS) and four purely uncertainty-based methods: entropy-based sampling (Shannon, 1948), dropout-based sampling (Gal and Ghahramani, 2016), Learning Loss (Yoo and Kweon, 2019), and TTA-based sampling (Gaillochet et al., 2022).

Table 1: Improvements with stochastic batches over varying initial labelled sets. Mean model performance over all AL cycles. Adding stochastic batches provides an improvement at a statistically significant level (indicated by *).

| Metric | RS | Entropy w/o SB | Entropy w/ SB (ours) | Dropout w/o SB | Dropout w/ SB (ours) |
|---|---|---|---|---|---|
| 3D DSC | 68.8 (±16.0) | 67.0 (±16.7) | 71.3* (±17.4) | 67.7 (±17.2) | 72.6* (±15.0) |
| 2D DSC | 67.9 (±8.3) | 66.9 (±8.6) | 69.0* (±9.0) | 67.1 (±9.5) | 69.6* (±8.1) |
| 3D HD95 | 7.0 (±3.7) | 7.0 (±4.3) | 6.7* (±3.1) | 7.0 (±5.0) | 6.6* (±3.2) |

Table 1 reports the average results over 5 different initial labelled sets, across all AL cycles. Our stochastic batch selection strategy improves purely uncertainty-based selection at a statistically significant level even when the initial labelled set varies. Figure 2 shows the mean model performance over trainings with different hyperparameters, across AL cycles. Adopting stochastic batches during sampling yields a significant boost in 3D Dice score. This jump in performance (orange to blue) is particularly notable during the first four AL cycles, reaching above the random sampling baseline (dotted grey).

Figure 2: Improvements with stochastic batches over varying hyperparameters: (a) improvement for Learning Loss (Yoo and Kweon, 2019); (b) improvement for TTA (Gaillochet et al., 2022). Stochastic batches improve the model performance of purely uncertainty-based AL strategies, regardless of the initial labelled set.
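In practice, the stochastic-batch query of Section 2 amounts to only a few lines of code. The sketch below is our minimal NumPy illustration with a stand-in uncertainty function, not the authors' released implementation.

```python
# Hedged sketch: draw Q random B-sample batches from the unlabeled pool, score
# each batch by its mean per-sample uncertainty, return the best batch.
import numpy as np

def stochastic_batch_query(unlabeled_ids, uncertainty, B=10, Q=None, seed=0):
    rng = np.random.default_rng(seed)
    ids = np.asarray(unlabeled_ids)
    Q = Q if Q is not None else len(ids) // B                 # Q = |D_U| / B
    batches = [rng.choice(ids, size=B, replace=False) for _ in range(Q)]
    scores = [np.mean([uncertainty(i) for i in batch]) for batch in batches]
    return batches[int(np.argmax(scores))]                    # candidates to annotate

if __name__ == "__main__":
    pool = list(range(1010))                                  # |D_U| = 1010 as in Sec. 3
    fake_uncertainty = lambda i: (i % 97) / 97.0              # stand-in for m_unc(f, x_i)
    print(sorted(stochastic_batch_query(pool, fake_uncertainty, B=10)))
```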
4. Conclusion
Active learning is becoming increasingly crucial in medical image segmentation, since annotating full large datasets may be unrealistic under clinical time constraints. This paper tackles three key limitations of AL strategies: the scarcity of AL work in medical image segmentation, the tendency of uncertainty-based batch sampling to select similar samples, and the computational burden of diversity-based methods. Our approach computes uncertainty at the batch level with randomly generated sample batches, offering a simple, computationally efficient way to improve AL candidate selection and model performance. Our method proves effective for the complex task of medical image segmentation, improving uncertainty-based AL strategies while being robust to variations in training settings.

References
Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. In Eighth International Conference on Learning Representations (ICLR), 2020.
W. H. Beluch, T. Genewein, A. Nurnberger, and J. M. Kohler. The Power of Ensembles for Active Learning in Image Classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9368–9377, 2018.
Mélanie Gaillochet, Christian Desrosiers, and Hervé Lombaert. TAAL: Test-time augmentation for active learning in medical image segmentation. In Data Augmentation, Labeling, and Imperfections (MICCAI DALI), LNCS, pages 43–53, 2022.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), volume 48, pages 1050–1059, 2016.
Kwanyoung Kim, Dongwon Park, Kwang In Kim, and Se Young Chun. Task-Aware Variational Adversarial Active Learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8162–8171, 2021.
Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. In Advances in Neural Information Processing Systems (NeurIPS), volume 32, 2019.
Geert Litjens, Robert Toth, Wendy van de Ven, Caroline Hoeks, Sjoerd Kerkstra, Bram van Ginneken, Graham Vincent, Gwenael Guillard, Neil Birbeck, Jindang Zhang, Robin Strand, Filip Malmberg, Yangming Ou, Christos Davatzikos, Matthias Kirschner, Florian Jung, Jing Yuan, Wu Qiu, Qinquan Gao, Philip "Eddie" Edwards, Bianca Maan, Ferdinand van der Heijden, Soumya Ghose, Jhimli Mitra, Jason Dowling, Dean Barratt, Henkjan Huisman, and Anant Madabhushi. Evaluation of prostate segmentation algorithms for MRI: The PROMISE12 challenge. Medical Image Analysis, 18(2):359–373, 2014.
Sudhanshu Mittal, Maxim Tatarchenko, Özgün Çiçek, and Thomas Brox. Parting with Illusions about Deep Active Learning. arXiv:1912.05361, 2019.
Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, and Shadab Khan. Towards Robust and Reproducible Active Learning Using Neural Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Vishwesh Nath, Dong Yang, Bennett A. Landman, Daguang Xu, and Holger R. Roth. Diminishing Uncertainty Within the Training Pool: Active Learning for Medical Image Segmentation. IEEE Transactions on Medical Imaging, 40(10):2534–2547, 2021.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), Lecture Notes in Computer Science, pages 234–241. Springer, 2015.
Ozan Sener and Silvio Savarese. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In International Conference on Learning Representations (ICLR), 2018.
Burr Settles. Active Learning Literature Survey. Technical Report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
C. E. Shannon. A Mathematical Theory of Communication. Bell System Technical Journal, 27(3):379–423, 1948.
Samrath Sinha, Sayna Ebrahimi, and Trevor Darrell. Variational Adversarial Active Learning. In IEEE International Conference on Computer Vision (ICCV), pages 5971–5980, Seoul, Korea (South), 2019.
Keze Wang, Dongyu Zhang, Ya Li, Ruimao Zhang, and Liang Lin. Cost-Effective Active Learning for Deep Image Classification. IEEE Transactions on Circuits and Systems for Video Technology, 27:2591–2600, 2017.
Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z. Chen. Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI), volume 10435 of Lecture Notes in Computer Science, pages 399–407. Springer, 2017.
Donggeun Yoo and In So Kweon. Learning Loss for Active Learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 93–102, 2019. |
tSokLyjvW5 | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023

Neural Operator Learning for Ultrasound Tomography Inversion

Haocheng Dai∗1 hdai@sci.utah.edu
Michael Penwarden∗1 mpenwarden@sci.utah.edu
Robert M. Kirby1 kirby@sci.utah.edu
Sarang Joshi1 sjoshi@sci.utah.edu
1 Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT
∗ Contributed equally. H. Dai and S. Joshi were supported by NSF grant DMS-1912030 and NIH grant 1R01CA259686. M. Penwarden and R. M. Kirby were supported by AFOSR MURI grant FA9550-20-1-0358.

Abstract
Neural operator learning as a means of mapping between complex function spaces has garnered significant attention in the field of computational science and engineering (CS&E). In this paper, we apply neural operator learning to the time-of-flight ultrasound computed tomography (USCT) problem. We learn the mapping between time-of-flight (TOF) data and the heterogeneous sound speed field using a full-wave solver to generate the training data. This novel application of operator learning circumnavigates the need to solve the computationally intensive iterative inverse problem. The operator learns the non-linear mapping offline and predicts the heterogeneous sound field with a single forward pass through the model. This is the first time operator learning has been used for ultrasound tomography, and it is a first step toward potential real-time predictions of soft tissue distribution for tumor identification in breast imaging. The project is available at https://github.com/aarentai/Ultrasound-Tfno-MIDL.
Keywords: Neural Operator, Ultrasound Tomography, Inversion, Breast Imaging

1. Introduction
The goal of ultrasound tomography is to estimate the acoustic properties of an object using the transmission of sound waves (Javaherian and Cox, 2021). In this paper, we focus on the breast imaging problem to serve as a test case for applying neural operator learning to ultrasound tomography. Ultrasound tomography to obtain cross-sectional images of breast structures is a promising non-ionizing imaging modality for the diagnosis and screening of breast cancer (Guo et al., 2018). Numerous detector geometries have been proposed for speed-of-sound ultrasound computed tomography (Malik et al., 2018). Although the proposed methodology is applicable to all of these geometries, we focus on the ring transducer geometry (Sandhu et al., 2015).
Neural operators learn mappings between infinite-dimensional function spaces that are discretization-invariant and have universal approximation properties (Kovachki et al., 2021). In particular, Fourier neural operators (FNOs) have been shown to have high accuracy, low inference time, and robustness to noise and generalization error when learning the solution operators of partial differential equations (PDEs) (Li et al., 2020). In this paper, we exploit the fact that the method learns general function-to-function maps, not limited to PDE solutions, and apply it to ultrasound tomography inversion.

2. Methodology
We utilize a tensorized Fourier neural operator (T-FNO) to learn the mapping between the 2D emitter-by-receiver time-of-flight (TOF) field and the spatial 2D sound speed (SS) field. The T-FNO model features 7.3 million learnable parameters, with 64 modes, 32 hidden channels, and 32 projection channels. We provide comparisons with a standard U-Net (Ronneberger et al., 2015), highlighting the improved generalization capabilities of the T-FNO in the output space.
The U-Net lifts the input field to 32 hidden channels and has 7.7 million parameters – 0.4 million more than the T-FNO. Hyperparameter tuning was performed to determine model sizes with sufficient expressivity without being over-parameterized. Adam optimization was performed for 10,000 epochs, which took 1 to 2 hours on a single Nvidia Titan RTX, with an initial learning rate of 1×10⁻³. Additionally, a ReduceLROnPlateau scheduler was adopted with a 0.5 learning-rate reduction factor.
To generate synthetic training and testing samples, Gaussian random fields (GRFs) were used to emulate the variations in soft tissue and binned into binary values (1450, 1550 m/s). The outside of the desired synthetic breast tissue region is replaced with water-like SS values (1500 m/s), and "skin" (1580 m/s) is added around the breast tissue. This results in randomly generated fields, as shown in the second column of Figure 1, which is ideal for synthetically creating a large dataset. The fidelity of both input and output fields is 128×128.
A k-space pseudo-spectral method (the k-Wave MATLAB package (Treeby and Cox, 2010)) was used to run a full-wave numerical forward simulation over the randomly generated heterogeneous SS fields and a homogeneous density field, given an equally distributed set of 128 emitter locations and 128 receivers. We used a Daubechies 8 wavelet as the time-varying excitation pulse for all emitters, emulating a physical transducer. The TOF was determined using cross-correlation between the emitter and receiver signals. The discrepancy is taken between the TOF of the breast inside a water bath and that of only water, resulting in fields that highly correlate with the variation in SS fields. The TOF discrepancy and SS fields were then min-max normalized before training, and an 80:20 train-test split was used.

3. Results

Table 1: Comparison between T-FNO and U-Net on the full dataset inversion problem.

| Model | GRF Correlation | Noise | Testing MSE | Training MSE |
|---|---|---|---|---|
| T-FNO | High | Clean | 2.01 ± 0.33 ×10⁻² | 0.69 ± 0.11 ×10⁻² |
| T-FNO | High | 10% | 2.06 ± 0.34 ×10⁻² | 0.68 ± 0.11 ×10⁻² |
| T-FNO | Low | Clean | 3.27 ± 0.16 ×10⁻² | 1.00 ± 0.06 ×10⁻² |
| T-FNO | Low | 10% | 2.67 ± 0.17 ×10⁻² | 1.53 ± 0.08 ×10⁻² |
| U-Net | High | Clean | 2.52 ± 0.44 ×10⁻² | 0.12 ± 0.03 ×10⁻² |
| U-Net | High | 10% | 2.79 ± 0.42 ×10⁻² | 0.01 ± 0.01 ×10⁻² |
| U-Net | Low | Clean | 3.81 ± 0.17 ×10⁻² | 0.42 ± 0.05 ×10⁻² |
| U-Net | Low | 10% | 4.02 ± 0.21 ×10⁻² | 0.02 ± 0.01 ×10⁻³ |

In Table 1, the mean squared error (MSE) over the full dataset is reported for a variety of setups. White noise was added to the receiver time series prior to cross-correlation to assess robustness. We observe that the T-FNO outperforms the U-Net at test time under all conditions, whereas the U-Net better fits the training set but does not generalize well. A single inference with the T-FNO takes 1.44 seconds compared to 1.57 with the U-Net. Higher-correlation fields prove easier to infer for both models, although they incur greater variance in the errors.
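A minimal sketch of the synthetic phantom generation described in Section 2 is shown below. Smoothed white noise stands in for a Gaussian random field, binned into the binary tissue speeds, with a water background and a "skin" ring; the radii and correlation length are illustrative choices, not the paper's exact values.

```python
# Hedged sketch of a binary GRF sound-speed phantom (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter

def make_phantom(n=128, seed=0, r_tissue=0.55, r_skin=0.60, corr_len=6.0):
    rng = np.random.default_rng(seed)
    grf = gaussian_filter(rng.standard_normal((n, n)), sigma=corr_len)
    ss = np.where(grf > 0, 1550.0, 1450.0)                    # binned tissue speeds (m/s)
    yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(xx, yy)
    ss[(r > r_tissue) & (r <= r_skin)] = 1580.0               # "skin" ring
    ss[r > r_skin] = 1500.0                                   # water-like background
    return ss

if __name__ == "__main__":
    ss = make_phantom()
    print(ss.shape, np.unique(ss))                            # (128, 128), speed values
```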
Figure 1: Single realization of train/test set examples (columns: input TOF, ground truth SS, T-FNO prediction, and U-Net prediction, for low- and high-correlation data; the final column shows MSE loss convergence curves for the T-FNO and U-Net on low-correlation data with small and big training/testing sets). The dashed red line indicates the location of the evenly distributed emitters and receivers. The region outside the transducer ring is masked out for improved training and quantity-of-interest comparison but is present in the full-wave simulation to mitigate reflection.

In Figure 1, the respective models were trained to learn the mapping between the TOF discrepancy (first column) and the ground truth SS (second column), with the resulting SS predictions (third and fourth columns) for one realization in the dataset. The loss convergence plots are shown for differing dataset sizes of 100 (small dataset) and 200 (big dataset), respectively (fifth column). Empirically, we observe that the T-FNO better captures the overall trends in the data, while the U-Net is prone to overfit the training data. This is also shown in the loss convergence plots, in which the U-Net suffers from considerable generalization error, even when additional examples were provided.

4. Conclusion
We have proposed using neural operators to accurately and efficiently solve the full-wave inverse problem on synthetic ultrasound tomography. Our novel application of the T-FNO improves over the baseline U-Net, laying the foundation for real-time, accurate predictions of soft tissue distribution for tumor identification in breast imaging. Additionally, the application of both the U-Net and the T-FNO to this problem formulation is itself novel, since both are real-time predictors and do not require computationally expensive ray-based inversion once trained. Future work will explore solving the inverse Helmholtz problem instead of the full-wave solution for sound speed reconstruction, as well as training and testing on non-synthetic real breast image data.

References
Rongrong Guo, Guolan Lu, Binjie Qin, and Baowei Fei. Ultrasound imaging technologies for breast cancer detection and management: a review. Ultrasound in Medicine & Biology, 44(1):37–70, 2018.
Ashkan Javaherian and Ben Cox. Ray-based inversion accounting for scattering for biomedical ultrasound tomography. Inverse Problems, 37(11):115003, 2021.
Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces. arXiv preprint arXiv:2108.08481, 2021.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
Bilal Malik, Robin Terry, James Wiskin, and Mark Lenox. Quantitative transmission ultrasound tomography: imaging and performance characteristics. Medical Physics, 45, 05 2018. doi: 10.1002/mp.12957.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
GY Sandhu, C Li, O Roy, S Schmidt, and N Duric. Frequency domain ultrasound waveform tomography: breast imaging using a ring transducer. Physics in Medicine & Biology, 60(14):5381, 2015.
BE Treeby and BT Cox. k-wave: Matlab toolbox for the simulation and reconstruction of photoacoustic wave fields. Journal of Biomedical Optics, 15, 2010. |
iYbJn-wIvf | Medical Imaging with Deep Learning 2023

Deep model-based optoacoustic image reconstruction (DeepMB)

Christoph Dehner∗,1,2 christoph.dehner@ithera-medical.com
Guillaume Zahnd∗,1,2 guillaume.zahnd@ithera-medical.com
Vasilis Ntziachristos†,1,3,4 bioimaging.translatum@tum.de
Dominik Jüstel†,1,3,5 dominik.justel@helmholtz-munich.de
1 Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
2 iThera Medical GmbH, Munich, Germany
3 Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Germany
4 Munich Institute of Robotics and Machine Intelligence, Technical University of Munich, Germany
5 Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany
∗ Contributed equally
† Shared corresponding authorship

Abstract
Multispectral optoacoustic tomography requires image feedback in real-time to locate and identify relevant tissue structures during clinical interventions. Backprojection methods are commonly used for optoacoustic image reconstruction in real-time but only afford imprecise images due to oversimplified modelling assumptions. Herein, we present a deep learning framework, termed DeepMB, that infers optoacoustic images with state-of-the-art quality in 31 ms per image.
Keywords: Multispectral Optoacoustic Tomography (MSOT), Model-based reconstruction, Inverse problems, Real-time imaging, Synthesized training data

1. Introduction
Multispectral optoacoustic tomography (MSOT) can non-invasively detect optical contrast in living tissue with high spatial resolution and centimeter-scale penetration depth. Similar to ultrasound imaging, clinical use of MSOT requires image feedback in real-time to locate and identify relevant tissue structures. Backprojection methods (Xu and Wang, 2005) can reconstruct optoacoustic images in real-time but only deliver imprecise images due to oversimplified modelling assumptions. On the other hand, iterative model-based reconstruction (Chowdhury et al., 2020, 2021) delivers state-of-the-art optoacoustic image quality. However, the required computational effort and the iterative nature of the algorithm prevent it from being used for real-time imaging. Deep learning enables faster image reconstruction using deep neural network models that support efficient and GPU-accelerated inference; however, the lack of experimental ground-truth training data can lead to reduced image quality on in vivo data (Kim et al., 2020; Hauptmann and Cox, 2020; Gröhl et al., 2021).
Herein, we show that learning a well-posed reconstruction operator facilitates accurate generalization from synthesized training data to experimental test data. We present a deep learning framework, termed DeepMB, that infers optoacoustic images with quality nearly indistinguishable from state-of-the-art model-based reconstructions at speeds enabling live imaging (31 ms per image). DeepMB facilitates accurate model-based reconstruction for arbitrary experimental input data through training on optoacoustic signals synthesized from real-world images, while using as ground truth, for the first time, the optoacoustic images generated by model-based reconstruction of the corresponding signals.

2. Methods
Figure 1 illustrates the training and evaluation process of DeepMB.
Input sinograms for network training were obtained by utilizing general-feature images (Everingham et al., 2009) as initial pressure distributions and simulating thereof the signals recorded by the acoustic transducers with an accurate physical model of the scanner (Fig. 1a). In vivo sinograms for evaluating the performance of the trained network were acquired by scanning six participants at up to eight anatomical locations each (Fig. 1b). Ground-truth images for both the synthetic training sinograms and the in vivo test sinograms were generated using model-based reconstruction (Fig. 1c). The deep neural network used for DeepMB consists of a delay operation, followed by trainable (U-Net-like) convolutional layers (Fig. 1d). The network was trained end-to-end on synthesized input sinograms and corresponding model-based reference images for 300 epochs using stochastic gradient descent.

Figure 1: Training and evaluation process of DeepMB.

3. Results
DeepMB infers optoacoustic images in 31 ms per sample using a recent graphics processing unit (NVIDIA GeForce RTX 3090). The performance of DeepMB was evaluated using 4814 in vivo sinograms that were acquired with a modern clinical optoacoustic scanner (MSOT Acuity Echo, iThera Medical GmbH, Munich, Germany). Figure 2 shows the optoacoustic images from model-based, DeepMB, and backprojection reconstruction for a scan of a human carotid. DeepMB images are systematically nearly indistinguishable from model-based references. In contrast, backprojection images suffer from reduced spatial resolution and physically nonsensical negative initial pressure values.

Figure 2: Optoacoustic images from model-based, DeepMB, and backprojection reconstruction for a scan of a human carotid at 800 nm.

Table 1 summarizes a quantitative comparison of model-based, DeepMB, and backprojection images. The obtained metrics confirm that the image quality of DeepMB is comparable to model-based reconstruction and superior to backprojection reconstruction.

Table 1: Quantitative evaluation of the image quality for all 4814 in vivo sinograms from the test dataset (mean value, [25th and 75th percentiles]).

| Metric | Model-based (reference) | DeepMB (ours) | Backprojection (traditional) |
|---|---|---|---|
| Data residual norm (↓) | 0.139 [0.068, 0.180] | 0.156 [0.092, 0.189] | 0.369 [0.294, 0.428] |
| Mean square error (↓) | n/a | 9.45 [0.56, 2.41] | 84.98 [24.97, 85.20] |
| Structural similarity (↑) | n/a | 0.98 [0.98, 0.99] | 0.73 [0.68, 0.79] |

4. Conclusion
DeepMB can enable state-of-the-art MSOT imaging in clinical applications that require real-time image feedback. The source code of DeepMB is available on GitHub (https://github.com/juestellab/deepmb), and further details are described in our arXiv preprint (Dehner et al., 2022). We are currently working on integrating DeepMB into the hardware of a next-generation MSOT scanner.
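As an illustration of the "delay operation followed by trainable convolutional layers" design described in Section 2, the following PyTorch sketch combines a naive delay-and-sum operation with a small stand-in CNN. The ring geometry, speed of sound, sampling rate, and network are toy assumptions, not the released DeepMB architecture.

```python
# Hedged sketch of a delay-then-refine pipeline (illustrative only).
import torch
import torch.nn as nn

def delay_operation(sino, sensors, grid, c=1500.0, fs=4e7):
    """sino: (S, T) transducer signals; sensors: (S, 2) positions [m];
    grid: (P, 2) pixel positions [m]. Returns a (P,) delay-and-sum image."""
    d = torch.cdist(grid, sensors)                            # (P, S) travel distances
    idx = (d / c * fs).long().clamp(0, sino.shape[1] - 1)     # time-of-flight samples
    s_idx = torch.arange(sino.shape[0])[None, :]              # broadcast over pixels
    return sino[s_idx, idx].sum(dim=1)                        # sum delayed signals

class RefineCNN(nn.Module):
    """Trainable refinement layers applied after the fixed delay operation."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    S, T, n = 256, 2030, 64
    angles = torch.linspace(0, 2 * torch.pi, S)
    sensors = 0.04 * torch.stack([angles.cos(), angles.sin()], dim=1)  # toy 4 cm ring
    ax = torch.linspace(-0.02, 0.02, n)
    grid = torch.cartesian_prod(ax, ax)                       # (n*n, 2) image grid
    img = delay_operation(torch.randn(S, T), sensors, grid).reshape(1, 1, n, n)
    print(RefineCNN()(img).shape)                             # torch.Size([1, 1, 64, 64])
```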
References
K. B. Chowdhury, J. Prakash, A. Karlas, D. Jüstel, and V. Ntziachristos. A synthetic total impulse response characterization method for correction of hand-held optoacoustic images. IEEE Transactions on Medical Imaging, 39(10):3218–3230, 2020.
K. B. Chowdhury, M. Bader, C. Dehner, D. Jüstel, and V. Ntziachristos. Individual transducer impulse response characterization method to improve image quality of array-based handheld optoacoustic tomography. Optics Letters, 46(1):1–4, 2021.
C. Dehner, G. Zahnd, V. Ntziachristos, and D. Jüstel. DeepMB: Deep neural network for real-time model-based optoacoustic image reconstruction with adjustable speed of sound. arXiv preprint arXiv:2206.14485, 2022.
M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88:303–308, 2009.
J. Gröhl, M. Schellenberg, K. Dreher, and L. Maier-Hein. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics, 22:100241, 2021. ISSN 2213-5979.
A. Hauptmann and B. T. Cox. Deep learning in photoacoustic tomography: current approaches and future directions. Journal of Biomedical Optics, 25(11):112903, 2020.
M. W. Kim, G. S. Jeng, I. Pelivanov, and M. O'Donnell. Deep-learning image reconstruction for real-time photoacoustic system. IEEE Transactions on Medical Imaging, 39(11):3379–3390, 2020.
M. Xu and L. V. Wang. Universal back-projection algorithm for photoacoustic computed tomography. Physical Review E – Statistical, Nonlinear, and Soft Matter Physics, 71(1):016706, 2005. |
7_jig8Y3pt | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023

Benchmark and Boosted Segmentation on Dental Panoramic Radiographs

Kaiwei Sun1 kaiweis2@illinois.edu
Yang Feng2 fengyang@angelalign.com
Zuozhu Liu∗1 zuozhuliu@intl.zju.edu.cn
1 Zhejiang University, China
2 Angelalign Inc., China
∗ Corresponding author
Editors: Accepted for publication at MIDL 2023

Abstract
Panoramic radiographs, also known as orthopantomograms, are commonly utilized by dentists to gain a comprehensive understanding of the patient's oral health and to perform orthodontic procedures. However, due to physician burnout and time constraints, many dentists may use them hastily, which could result in medical negligence. To streamline the workflow for dentists, we establish a mission to segment five oral structures on panoramic radiographs, namely Alveolarcrest, Condyle, Neuraltube, Sinusmaxillaris, and Teeth. A Cascaded Multi-scale Mask2former (CMMask2former) method is proposed for this task. For small objects, we design a multi-scale masked attention specifically for the mask area. The entire structure is designed as a two-stage cascade for localization and prediction. Our results demonstrate superior predictive performance compared to other methods.
Keywords: Panoramic radiographs, Orthodontic, Segmentation, Mask2former.

1. Introduction
Physician burnout is a significant concern that affects all dentists due to the overwhelming amount of patient data to review and complex workflows (Yates, 2020). Panoramic radiographs are a way to visualize a patient's oral condition, but most dentists tend to use them imprecisely. The quality of care is often sub-optimal, and some preventable medical errors may happen. Meanwhile, patients are also dissatisfied with limited interactions and attention from doctors during their short clinical visits. Segmentation models for the analysis of panoramic radiograph data are a challenging but promising way to uncover different oral structures, which can reduce the workload of dentists and give both dentists and patients a clearer visualization of the oral condition for orthodontic treatment plans (Wu et al., 2018). In this context, we propose a mission to segment five oral structures on panoramic radiographs: Alveolarcrest, Condyle, Neuraltube, Sinusmaxillaris, and Teeth. Considering the independence of individual teeth and their overlap with other structures, instance segmentation is applied to the teeth class (Lee et al., 2020). The other four structures are detected with semantic segmentation (Koch et al., 2019). Recently, a new deep learning paradigm compatible with both segmentation settings was introduced: the Masked-attention Mask Transformer (Mask2former) (Cheng et al., 2022), a transformer-based encoder-decoder framework for universal image segmentation, which can be applied to this mission. The semantic segmentation of Alveolarcrest, Condyle, Neuraltube, and Sinusmaxillaris is still a challenging task, especially for the slender Neuraltube. To enhance performance on small objects such as the Neuraltube, we design a cascaded structure based on Mask2former with multi-scale masked attention for its segmentation. The proposed framework, Cascaded Multi-scale Mask2former (CMMask2former), offers state-of-the-art performance in the segmentation of panoramic radiographs.
2. Methods

Figure 1: CMMask2former framework: two stages of Multi-scale Masked-attention Mask2former (MMask2former) in a cascaded arrangement.

The schematic overview of our CMMask2former framework is shown in Figure 1. Our framework is constructed based on Mask2former (Cheng et al., 2022). Because the slender, small Neuraltube makes it challenging to learn object features and find boundary distinctions, we first design a multi-scale masked attention to localize the object region and import more surrounding information about the object. Based on the masked attention of (Cheng et al., 2022), we adjust the masked attention map to the bounding box of the predicted area and use all of the previous masked attention maps to generate the attention map for the next layer. Our multi-scale masked attention modulates the attention matrix via:

$$X_l = \mathrm{softmax}\!\left(\frac{1}{l}\sum_{i=0}^{l-1} BOX(M_i) + Q_l K_l^T\right) V_l + X_{l-1}. \qquad (1)$$

Here, $l$ is the layer index, $X_l \in \mathbb{R}^{N \times C}$ refers to the $N$ $C$-dimensional query features at the $l$-th layer, and $Q_l \in \mathbb{R}^{N \times C}$ is the query feature. $K_l, V_l \in \mathbb{R}^{H_l W_l \times C}$ are the image features, where $H_l$ and $W_l$ are the spatial resolution of the image features. $M_{l-1} \in \{0,1\}^{N \times H_l W_l}$ is the binarized prediction output (thresholded at 0.5) from the previous layer. $BOX$ is the function that fills the prediction of the binary mask $M_i$ to the shape of its bounding box area.
Overall, we employ a cascaded structure consisting of two Mask2formers with multi-scale masked attention. The initial MMask2former is utilized for object region localization. Following a 10% expansion of its width and height, with the center fixed, we extract the predicted region from the original image. The resulting cropped image is then forwarded to the subsequent MMask2former for further prediction.

3. Experiments and conclusion
Data Set: We analyze panoramic radiographs from 466 patients who received orthodontic services from Angelalign. The dataset is randomly separated into training, validation, and testing sets with a ratio of 7:1:2. All patients' panoramic radiographs are manually annotated with five classes, including Alveolarcrest, Condyle, Neuraltube, Sinusmaxillaris, and Teeth (each tooth is independent).
Semantic segmentation: We evaluate the performance of CMMask2former against state-of-the-art models on our dataset, as presented in Table 1. The compared models include Unet++ (Zhou et al., 2020), MedT (Valanarasu et al., 2021), UCTransNet (Wang et al., 2022), and Mask2former (Cheng et al., 2022). Additionally, we present the results of our method without multi-scale masked attention (MMA) or the cascaded structure (CS) to demonstrate the effectiveness of our modules.

Table 1: Semantic segmentation on our data set with four classes.

| Model | IoU Alveolarcrest | IoU Condyle | IoU Neuraltube | IoU Sinusmaxillaris | mIoU |
|---|---|---|---|---|---|
| Unet++ | 72.731 | 29.732 | 17.036 | 48.754 | 42.063 |
| MedT | 88.089 | 72.885 | 35.316 | 79.493 | 68.946 |
| UCTransNet | 91.472 | 83.236 | 59.666 | 85.601 | 79.994 |
| Mask2former | 92.566 | 89.276 | 62.457 | 87.575 | 82.968 |
| Ours (w/o CS) | 92.840 | 88.071 | 64.863 | 88.021 | 83.449 |
| Ours (w/o MMA) | 92.902 | 89.413 | 66.144 | 88.047 | 84.127 |
| Ours | 92.654 | 89.902 | 67.537 | 87.589 | 84.421 |
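To make the multi-scale masked attention of Eq. (1) concrete, the following PyTorch sketch (our illustration, not the authors' code) computes one layer of attention from a list of previous binary mask predictions; shapes follow the text (N queries, C channels, H*W keys).

```python
# Hedged sketch of Eq. (1): previous-layer masks are dilated to their bounding
# boxes, averaged across layers, and added to the attention logits.
import torch

def box_fill(mask, H, W):
    """mask: (N, H*W) binary. Returns (N, H*W) with each mask's bbox filled."""
    out = torch.zeros_like(mask)
    m2d = mask.view(-1, H, W)
    for n in range(m2d.shape[0]):
        ys, xs = torch.nonzero(m2d[n], as_tuple=True)
        if len(ys) > 0:
            box = torch.zeros(H, W)
            box[int(ys.min()):int(ys.max()) + 1, int(xs.min()):int(xs.max()) + 1] = 1.0
            out[n] = box.flatten()
    return out

def multiscale_masked_attention(Q, K, V, X_prev, prev_masks, H, W):
    """Q, X_prev: (N, C); K, V: (H*W, C); prev_masks: list of (N, H*W) masks."""
    bias = torch.stack([box_fill(m, H, W) for m in prev_masks]).mean(dim=0)
    attn = torch.softmax(bias + Q @ K.t(), dim=-1)            # Eq. (1), softmax term
    return attn @ V + X_prev                                  # residual connection

if __name__ == "__main__":
    N, C, H, W = 5, 32, 16, 16
    Q, X_prev = torch.randn(N, C), torch.randn(N, C)
    K, V = torch.randn(H * W, C), torch.randn(H * W, C)
    masks = [(torch.rand(N, H * W) > 0.9).float() for _ in range(2)]
    print(multiscale_masked_attention(Q, K, V, X_prev, masks, H, W).shape)  # (5, 32)
```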
Instance segmentation: We compare CMMask2former with Mask R-CNN (He et al., 2018) on our data set in Table 2. For this part, we do not import our modules; hence, the results of Mask2former and our model are close.

Table 2: Instance segmentation on our data set with the teeth class.

| Model | AP Teeth |
|---|---|
| Mask R-CNN | 52.034 |
| Mask2former | 74.345 |
| Ours | 74.381 |

Conclusion: This paper provides a preliminary insight into the segmentation task of five oral structures on panoramic radiographs. We propose a novel Cascaded Multi-scale Mask2former (CMMask2former) method for this challenging segmentation task. Experimental results on our dataset demonstrate the effectiveness of our proposed modules.

Acknowledgments
This work is supported by the National Natural Science Foundation of China (Grant No. 62106222), the Natural Science Foundation of Zhejiang Province, China (Grant No. LZ23F020008), and the Zhejiang University-Angelalign Inc. R&D Center for Intelligent Healthcare.

References
Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation, 2022.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn, 2018.
Thorbjørn Louring Koch, Mathias Perslev, Christian Igel, and Sami Sebastian Brandt. Accurate segmentation of dental panoramic radiographs with u-nets. In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pages 15–19, 2019.
Jeong-Hee Lee, Sang-Sun Han, Young Hyun Kim, Chena Lee, and Inhyeok Kim. Application of a fully deep convolutional neural network to the automation of tooth segmentation on panoramic radiographs. Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology, 129(6):635–642, 2020.
Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu, and Vishal M. Patel. Medical transformer: Gated axial-attention for medical image segmentation, 2021.
Haonan Wang, Peng Cao, Jiaqi Wang, and Osmar R. Zaiane. Uctransnet: Rethinking the skip connections in u-net from a channel-wise perspective with transformer, 2022.
Chia-Hsiang Wu, Wan-Hua Tsai, Ying-Hui Chen, Jia-Kuang Liu, and Yung-Nien Sun. Model-based orthodontic assessments for dental panoramic radiographs. IEEE Journal of Biomedical and Health Informatics, 22(2):545–551, March 2018.
Scott W. Yates. Physician stress and burnout. The American Journal of Medicine, 133(2):160–164, 2020.
Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: Redesigning skip connections to exploit multiscale features in image segmentation, 2020. |
4E93Xdg98u | Medical Imaging with Deep Learning 2023

Outlier Detection for Mammograms

Ryan Zurrin ryan.zurrin001@umb.edu
Neha Goyal neha.goyal001@umb.edu
Pablo Bendiksen p.bendiksen001@umb.edu
Muskaan Manocha muskaan.manocha001@umb.edu
Dan Simovici dan.simovici@umb.edu
Nurit Haspel nurit.haspel@umb.edu
Marc Pomplun marc.pomplun@umb.edu
Daniel Haehn daniel.haehn@umb.edu

Abstract
Mammograms are vital for detecting breast cancer, the most common cancer among women in the US. However, low-quality scans and imaging artifacts can compromise their efficacy. We introduce an automated pipeline to filter low-quality mammograms from large datasets. Our initial dataset of 176,492 mammograms contained an estimated 5.5% lower-quality scans with issues like metal coil frames, wire clamps, and breast implants. Manually removing these images is tedious and error-prone. Our two-stage process first uses threshold-based 5-bin histogram filtering to eliminate undesirable images, followed by a variational autoencoder to remove remaining low-quality scans. Our method detects such scans with an F1 score of 0.8862 and preserves 163,568 high-quality mammograms. We provide results and tools publicly available as open-source software.
Keywords: anomaly detection, outlier detection, mammograms, unsupervised learning

1. Introduction
Breast cancer, a prevalent cause of death among women (Yusuf et al., 2021; Lei et al., 2021), can be better managed with early detection and advanced machine-learning tools (Lotter et al., 2021). For a robust machine learning classifier, one strategy is to unify the quality and content of training data by removing low-quality images and outliers (Chandola et al., 2009; Smiti, 2020; Shvetsova et al., 2021). We are building an extensive, publicly available mammography database, for which we began with 967,991 mammograms acquired by our collaborators. Through data cleaning using metadata such as small dimensions and manufacturer, we reduced the number of images to 176,492 mammograms, but an estimated 5.5% remained low-quality. Manually selecting these images would be infeasible, prompting us to evaluate 26 unsupervised outlier detection algorithms, including traditional and deep learning-based approaches (Section 2). Based on various experiments, we introduce 5-BHIST, a thresholded histogram-binning method paired with a variational autoencoder. This two-stage outlier detection pipeline significantly outperforms other unsupervised machine learning algorithms in detecting low-quality mammograms.

2. Experimental Setup
Test Datasets. We initially created the five representative test datasets A, B, C, A*, and B* with varying proportions of unwanted images (between 5 and 24%) by randomly sampling 100 and 1000 mammograms from our original collection. We manually selected undesired images through multiple consensus-driven user studies with 9 participants. After our initial experiments, we filtered our large collection of mammograms with the best approach (5-BHIST). We then randomly sampled dataset C* for additional testing to identify the optimal algorithm for the second filtering stage (Figure 1).

Figure 1: Two-stage outlier detection. Our method combines a 5-bin histogram filtration technique (5-BHIST) with a variational autoencoder (VAE) to automatically eliminate undesirable images. We perform experiments on a total of 6 test datasets from our initial collection of mammograms. With optimized parameters and normalization methods, we reduce the amount of low-quality mammograms by 83.15%.
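Based on this description of the first stage (5-bin histogram filtering after Gaussian normalization, with the bi-conditional thresholds b2 < 2000 and b5 > 15,000 reported later in Section 3), a minimal sketch of the filter might look as follows. The keep/flag polarity and the histogram range are our reading of the text, not confirmed by the released tools.

```python
# Hedged sketch of the first-stage 5-BHIST filter (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def five_bhist_flag(img, t2=2000, t5=15000):
    """img: 2-D array of raw pixel intensities. Returns True if flagged."""
    g = gaussian_filter(img.astype(float), sigma=20)          # Gaussian normalization
    g = g / (g.max() + 1e-8)                                  # divide by max, as in Sec. 2
    counts, _ = np.histogram(g, bins=5, range=(0.0, 1.0))     # 5-bin histogram
    b2, b5 = counts[1], counts[4]                             # assumed bin numbering
    return bool(b2 < t2 and b5 > t5)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mammo = rng.integers(0, 4096, size=(512, 512))            # toy 12-bit scan
    print(five_bhist_flag(mammo))
```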
We then randomly sampled dataset C* for additional testing to identify the optimal algorithm for the second filtering stage (Figure 1).
Normalization. We applied various normalization methods to ensure comparable pixel intensities across different device manufacturers (Patro and Sahu, 2015). Max: re-scale intensities between -1 and 1: x_scaled = x / max(|x|). Min-Max: re-scale intensities to the fixed range [0, 1]: x_norm = (x_i - x_min) / (x_max - x_min). Gaussian: introduce a blur: x_gaussian = gaussian_filter(x, sigma=20) / x_max. Z-Score: standardize across a normal distribution: x_zscore = (x_i - μ) / σ. Robust: scale data using median subtraction and IQR division: x_robust = (x_i - median) / IQR. (A minimal code sketch of these normalizations follows this setup section.)
Image Features. We utilize image feature descriptors to reduce the number of data points per mammogram: full-intensity histograms, with bin sizes selected automatically based on pixel ranges; downsampling, which reduces the spatial resolution via stretching (without anti-aliasing); scale-invariant feature transforms (SIFT), used to create keypoints (Lowe, 2004); and Oriented FAST and Rotated BRIEF (ORB), similar to SIFT (Rublee et al., 2011).
Algorithms. We carried out unsupervised outlier detection on all our test datasets using 26 distinct algorithms from the PyOD software package (Zhao et al., 2019, 2021; Han et al., 2022), with a total of 340 configured experiments across all tests. (Python Outlier Detection (PyOD) is available at https://github.com/yzhao062/pyod.)
Evaluation Metric. To quantify outlier detection success, we measure the F1 score as F1 = 2 * (precision * recall) / (precision + recall) (Powers, 2011).
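The normalization definitions above map directly to a few lines of NumPy/SciPy. The sketch below is our reading of them, not the paper's released code; in particular, dividing the blurred image by the original maximum in the Gaussian variant is an assumption where the scanned formula is ambiguous.

```python
# A minimal sketch of the five normalizations described above, assuming 2D
# numpy arrays of raw pixel intensities. Only sigma=20 is stated in the paper;
# the remaining implementation details are our assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def max_norm(x):
    return x / np.abs(x).max()                        # Max: range [-1, 1]

def min_max_norm(x):
    return (x - x.min()) / (x.max() - x.min())        # Min-Max: range [0, 1]

def gaussian_norm(x, sigma=20):
    return gaussian_filter(x.astype(float), sigma) / x.max()  # blur, rescale

def zscore_norm(x):
    return (x - x.mean()) / x.std()                   # Z-Score standardization

def robust_norm(x):
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    return (x - np.median(x)) / iqr                   # Robust: median / IQR
```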
3. Results
We fully tuned the 26 anomaly detection algorithms available in PyOD for comparative analysis and evaluated the best-performing configurations on our representative test datasets (Table 1). Initial results indicated a preference for a specific normalization and feature descriptor configuration: histogram binning after Gaussian normalization. We then performed ablation studies regarding the number of histogram bins. We compared different bin configurations (b = 2, 5, 10), optional Gaussian blur with varying sigma (σ = 5, 10, 20), and all normalization techniques with a 2-bin limitation based on previous explorations. Min-max normalization outperformed Gaussian, highlighting bin size as a critical factor for optimal algorithm performance. However, a 2-bin approach contributed to significant false-positive classifications. Further ablation studies and consensus-driven inspection confirmed a setting of 5 bins with a bi-conditional thresholding operation (bins b2 < 2,000 and b5 > 15,000) for high F1 scores. We report the performance of 5-BHIST in Table 1.

Table 1: Outlier Detection Results. Utilizing best-performing normalization and features (G: Gaussian, M: Max, MM: Min-Max, R: Robust, Z: Z-Score, H: Histogram, S: SIFT, O: ORB), our 5-BHIST method yields the highest average F1 score of 0.8772 on varied test datasets. Incorporating a variational autoencoder (VAE) as a second-stage algorithm elevates this to 0.8862.

Algorithm | A (n=100, 8%) | B (n=100, 13%) | C (n=100, 24%) | A* (n=1000, 6.3%) | B* (n=1000, 5.0%) | C* (n=1000, 1.5%)
AE | M+H 0.2500 | M+S 0.3077 | M+S 0.4917±0.0167 | M+H 0.1270 | M+H 0.1200 | MM+H 0.1391
AvgKNN | G+H 0.6250 | G+H 0.6923 | G+H 0.8333 | G+H 0.7460 | G+H 0.6600 | Z+S 0.0522
VAE | MM+H 0.2500 | M+S 0.3077 | MM+S 0.6000±0.0333 | MM+H 0.1111 | MM+H 0.0940±0.0092 | MM+H 0.1530±0.0070
SOGAAL | M+S 0.0250±0.0500 | G+O 0.0000 | M+H 0.0000 | M+S 0.0000 | M+S 0.0000 | M+S 0.0124±0.0247
DeepSVDD | G+H 0.6750±0.0612 | G+H 0.6978±0.0111 | G+H 0.2913±0.2867 | G+H 0.5322±0.1676 | G+H 0.4292±0.0619 | MM+H 0.1009±0.0403
AnoGAN | G+H 0.0000 | M+S 0.0769 | M+O 0.2083 | G+H 0.0000 | G+H 0.0000 | Z+S 0.1043
HBOS | G+H 0.6250 | M+H 0.4615 | G+H 0.8261 | G+H 0.7885 | G+H 0.7805 | M+H 0.1217
LOF | MM+S 0.1750±0.0612 | MM+S 0.3077 | MM+S 0.6000±0.0333 | MM+S 0.5095±0.0321 | MM+S 0.6100±0.0257 | M+S 0.1391
OCSVM | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | G+O 0.0696
IForest | G+H 0.5000 | G+H 0.6154 | G+H 0.5833 | G+H 0.6739±0.0328 | G+H 0.6473±0.0101 | M+H 0.1148±0.0260
CBLOF | G+H 0.6250 | G+H 0.6923 | G+H 0.8333 | G+H 0.7492±0.0063 | G+H 0.0202 | Z+S 0.0452±0.0085
COPOD | G+H 0.3750 | G+H 0.3846 | G+H 0.6250 | G+H 0.3651 | G+H 0.4583 | R+H 0.1217
SOS | M+S 0.4750±0.0500 | M+S 0.5385±0.0973 | MM+S 0.7167±0.0312 | M+S 0.2159±0.0384 | M+S 0.5240±0.0265 | M+S 0.1217
KDE | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | G+H 0.0000 | M+O 0.0000
Sampling | G+H 0.5750±0.0612 | G+H 0.5077±0.0377 | G+H 0.6500±0.1007 | G+H 0.5508±0.2622 | G+H 0.3341±0.3221 | Z+S 0.0417±0.0085
PCA | G+H 0.3750 | G+H 0.4800 | G+H 0.5366 | G+H 0.3651 | G+H 0.4783 | MM+H 0.1391
LMDD | G+H 0.0000 | M+S 0.1692±0.0897 | MM+S 0.2250±0.1225 | G+H 0.0000 | G+H 0.0000 | M+O 0.1217
COF | G+H 0.6250 | MM+S 0.3077 | M+S 0.6250 | G+H 0.1746 | G+H 0.1000 | M+S 0.1217
ECOD | G+H 0.5333 | G+H 0.6154 | G+H 0.6250 | G+H 0.7097 | G+H 0.6600 | R+H 0.1217
KNN | G+H 0.6250 | G+H 0.6400 | G+H 0.8085 | G+H 0.7460 | G+H 0.6600 | M+S 0.0522
MedKNN | G+H 0.6250 | G+H 0.6923 | G+H 0.8333 | G+H 0.7460 | G+H 0.6600 | Z+S 0.0522
SOD | MM+S 0.3500±0.0935 | MM+S 0.4308±0.0615 | MM+S 0.6167±0.0167 | MM+S 0.2714±0.0404 | MM+S 0.2000±0.0346 | MM+S 0.0870
INNE | M+S 0.5500±0.0612 | MM+S 0.6308±0.0308 | MM+S 0.7833±0.0312 | M+S 0.3444±0.0471 | M+S 0.4280±0.0431 | MM+S 0.1530±0.0170
FB | M+S 0.2500 | G+H 0.3077 | G+H 0.6250 | M+S 0.4476±0.0525 | M+S 0.5900±0.0392 | MM+S 0.1496±0.0085
LODA | G+H 0.3800±0.1122 | G+H 0.4017±0.1585 | G+H 0.4167 | G+H 0.3312±0.3241 | G+H 0.5019±0.3246 | Z+H 0.0522±0.0156
SUOD | G+H 0.5000 | G+H 0.5742±0.0444 | G+H 0.6583±0.0408 | G+H 0.6926±0.0104 | G+H 0.6446±0.0079 | M+H 0.0939±0.0085
5-BHIST | G+H 0.8571 | G+H 0.8696 | G+H 0.9333 | G+H 0.8908 | G+H 0.8352 | N/A
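To make the reported bi-conditional rule concrete, here is a toy sketch of how a 5-bin histogram filter with the stated thresholds (b2 < 2,000 and b5 > 15,000) could flag a scan. The bin-edge convention, the preprocessing, and the direction of the decision (flag vs. keep) are our assumptions; the paper only states the two thresholds.

```python
import numpy as np

def matches_5bhist_rule(img, t2=2000, t5=15000):
    """Return True when the 5-bin histogram satisfies b2 < t2 and b5 > t5.

    img is assumed min-max normalized to [0, 1]; bins are equal-width.
    """
    counts, _ = np.histogram(img, bins=5, range=(0.0, 1.0))
    return counts[1] < t2 and counts[4] > t5
```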
Limitations. Our evaluations are based on algorithm tuning from representative mammogram subsets and validated by user studies; thus, results are estimates. Future public access to our full mammogram collection will allow broader expert validation.

4. Conclusions
We evaluate 26 unsupervised algorithms for filtering low-quality mammograms in extensive data collections. Our findings indicate that a combination of min-max normalized histogram binning paired with a variational autoencoder can detect unwanted images with an average F1 score of 0.8862. This reduces the number of unwanted images in our collection by 5.93x, from an estimated 9,708 low-quality scans to 1,636. Our final dataset now contains 1% unwanted images as validated by manual inspection. All code, data, experiments, and additional information are available at https://github.com/mpsych/ODM.

References
Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):1–58, 2009.
Songqiao Han, Xiyang Hu, Hailiang Huang, Minqi Jiang, and Yue Zhao. ADBench: Anomaly detection benchmark. Advances in Neural Information Processing Systems, 35:32142–32159, 2022.
Shaoyuan Lei, Rongshou Zheng, Siwei Zhang, Shaoming Wang, Ru Chen, Kexin Sun, Hongmei Zeng, Jiachen Zhou, and Wenqiang Wei. Global patterns of breast cancer incidence and mortality: A population-based cancer registry data analysis from 2000 to 2020. Cancer Communications, 41(11):1183–1194, 2021.
William Lotter, Abdul Rahman Diab, Bryan Haslam, Jiye G Kim, Giorgia Grisot, Eric Wu, Kevin Wu, Jorge Onieva Onieva, Yun Boyer, Jerrold L Boxerman, et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nature Medicine, 27(2):244–249, 2021.
David G Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
S Patro and Kishore Kumar Sahu. Normalization: A preprocessing stage. arXiv preprint arXiv:1503.06462, 2015.
D. M. W. Powers. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness & correlation. Journal of Machine Learning Technologies, 2(1):37–63, 2011.
Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In 2011 International Conference on Computer Vision, pages 2564–2571. IEEE, 2011.
Nina Shvetsova, Bart Bakker, Irina Fedulova, Heinrich Schulz, and Dmitry V Dylov. Anomaly detection in medical imaging with deep perceptual autoencoders. IEEE Access, 9:118571–118583, 2021.
Abir Smiti. A critical overview of outlier detection methods. Computer Science Review, 38:100306, 2020.
AB Yusuf, RM Dima, and SK Aina. Optimized breast cancer classification using feature selection and outliers detection. Journal of the Nigerian Society of Physical Sciences, pages 298–307, 2021.
Yue Zhao, Zain Nasrullah, and Zheng Li. PyOD: A Python toolbox for scalable outlier detection. Journal of Machine Learning Research, 20(96):1–7, 2019.
Yue Zhao, Xiyang Hu, Cheng Cheng, Cong Wang, Changlin Wan, Wen Wang, Jianing Yang, Haoping Bai, Zheng Li, Cao Xiao, et al. SUOD: Accelerating large-scale unsupervised heterogeneous outlier detection. Proceedings of Machine Learning and Systems, 3:463–478, 2021.
 |
C7VKeiHeZT | Medical Imaging with Deep Learning 2023

Equivariant and Denoising CNNs to Decouple Intensity and Spatial Features for Motion Tracking in Fetal Brain MRI

Benjamin Billot (1) bbillot@mit.edu
Daniel Moyer (2) daniel.moyer@vanderbilt.edu
Neerav Karani (1) nkarani@csail.mit.edu
Malte Hoffmann (3) mhoffmann@mgh.harvard.edu
Esra Abaci Turk (4) esra.abaciturk@childrens.harvard.edu
P. Ellen Grant (4) ellen.grant@childrens.harvard.edu
Polina Golland (1) polina@csail.mit.edu
(1) Massachusetts Institute of Technology, Cambridge, MA, USA
(2) Vanderbilt University, Nashville, TN, USA
(3) Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
(4) Boston Children's Hospital and Harvard Medical School, Boston, MA, USA

Abstract
Equivariance in convolutional neural networks (CNNs) has been a long-sought property, as it would ensure robustness to expected effects in the data. Convolutional filters are by nature translation-equivariant, and rotation-equivariant kernels were proposed recently. While these filters can be paired with learnable weights to form equivariant networks (E-CNNs), we show here that such E-CNNs have a limited learning capacity, which makes them fragile against even slight changes in intensity distribution. This is a major issue in medical imaging, where many noise sources can randomly corrupt the data, even for consecutive scans of the same subject. Here, we propose a hybrid architecture that successively decouples intensity and spatial features: we first remove irrelevant noise in the data with a denoising CNN, and then use an E-CNN to extract robust spatial features. We apply our method to motion tracking in fetal brain MRI, where it considerably outperforms CNNs and E-CNNs.
Keywords: Equivariant CNN, Denoising, Motion tracking, Fetal brain MRI

1. Introduction
Modern image processing almost exclusively relies on convolutional neural networks (CNNs), which build hierarchical representations of images by applying learned convolutional filters. A long-desired property is to make these kernels equivariant to spatial transforms expected to occur in the data. For example, convolutional filters are by nature equivariant to translations: the outputs shift accordingly with the inputs. However, constructing kernels that are equivariant to transforms other than translation is challenging. Thus, equivariance is usually learned with data augmentation, and possibly contrastive learning (Chen et al., 2020). While these strategies generally improve results, they do not explicitly ensure equivariance.
Research on equivariant CNN filters has mainly focused on rigid transforms (i.e., translations and rotations). While initial methods worked for discrete angles only (Winkels and Cohen, 2018; Bekkers et al., 2018), fully rigid-equivariant filters were introduced by Weiler et al. (2018). Since such filters are pre-computed and fixed, Moyer et al. (2021) recently proposed to pair them with learnable weights to form trainable equivariant CNNs (E-CNNs).
©2023 CC-BY 4.0, B. Billot, D. Moyer, N. Karani, M. Hoffmann, E. Abaci Turk, P. Grant & P. Golland.
Figure 1: Overview of the proposed framework for registration-based 3D motion tracking.
These networks showed promising results when applied to motion tracking in fetal brain MRI (Moyer et al., 2021). However, they were only tested on simulated data that ignored intensity and noise variations across scans. Here, we evaluate E-CNNs on real data, showing that they have a limited learning capacity, which makes them sensitive to intensity changes. This is a major issue in medical imaging and especially in fetal MRI, where two consecutive scans may differ substantially due to scanner noise, motion artefacts, histogram shifting, etc.
Here, we present a new method to decouple intensity and spatial features for registration-based tracking. We first use a CNN to remove sources of noise in the data, and then obtain expressive spatial features with an E-CNN. While most disentangling strategies treat intensity and spatial features in parallel (Chartsias et al., 2019), we process them successively to fully leverage the potential of E-CNNs. We demonstrate our method on fetal brain MRI motion tracking, where it yields considerably better results than CNNs and E-CNNs.

2. Methods
Augmentation: A training step starts by randomly selecting a scan and independently augmenting it twice to simulate differences between scans at testing (Fig. 1). We first randomly translate ([-30, 30] voxels) and rotate ([-π, π] rad) along all axes, and then add random noise, motion artefacts, bias field, and histogram shifting (Pérez-García et al., 2021).
Denoiser: We then employ a denoising CNN (D) to remove the previously injected noise from the augmented images. This step seeks to remove all anatomically irrelevant intensity features, such that the following E-CNN can extract robust spatial features. Here we use a UNet (Ronneberger et al., 2015) with 5 levels, each with 2 convolutional layers (32 kernels of size 3x3x3), ReLU non-linearities, and batch normalisation (except for the last layer).
E-CNN: Denoised images are passed to an E-CNN for robust spatial feature prediction. We use rigid-equivariant filters of size 5x5x5 corresponding to discretised spherical harmonics of order 0, 1 and 2 (Weiler et al., 2018). Layers are formed by linearly combining the kernels, where the linear coefficients are the learnable parameters. Here, we employ 5 equivariant layers, each separated with equivariant ReLU non-linearities (Weiler et al., 2018).
Rigid transform prediction: We then compute the spatial means (i.e., centres of mass) of all the E-CNN output features. Finally, we derive a rigid transform between these two sets of landmarks (one for each scan) with singular value decomposition (SVD) (Horn, 1987); a sketch of this step follows below.
Training: The denoiser D and the equivariant network are trained separately with the Adam optimiser and a learning rate of 10^-5. The denoiser D is trained to remove the effect of intensity augmentation with an L2 loss. The E-CNN is optimised using a geodesic loss (Salehi et al., 2018) between the ground-truth and predicted transforms.
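The SVD step above is the classical closed-form rigid alignment of two point sets (Horn, 1987); a minimal NumPy version is sketched below. The landmark format (one N x 3 array of spatial means per scan) and the function name are our assumptions.

```python
import numpy as np

def rigid_from_landmarks(means_a, means_b):
    """Return R (3x3) and t (3,) such that means_b ≈ means_a @ R.T + t."""
    ca, cb = means_a.mean(axis=0), means_b.mean(axis=0)   # centroids
    H = (means_a - ca).T @ (means_b - cb)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:    # fix a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```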
Figure 2: Original fetal brain MRI time-series (top) re-aligned by our method (bottom).

Table 1: Scores obtained for Exp. 1 (rotation, translation, Dice 1) and Exp. 2 (Dice 2).

Methods                     | Angle error (°) | Shift error (vox.) | Dice 1 | Dice 2
Conv. (de Vos et al., 2019) | 14.7 ± 4.2      | 5.2 ± 1.2          | 0.78   | 0.77
E-CNN (Moyer et al., 2021)  | 11.2 ± 3.4      | 4.1 ± 0.9          | 0.83   | 0.84
Augm. + E-CNN               | 10.9 ± 3.2      | 3.8 ± 1.2          | 0.84   | 0.84
Augm. + D + E-CNN (Ours)    | 5.1 ± 1.9       | 2.5 ± 0.9          | 0.90   | 0.89

3. Experiments and Results
Data: We use a fetal dataset of whole-uterus MRI time-series from 50 pregnant mothers. Scans are acquired on a 3T Skyra Siemens scanner using EPI sequences at 3 mm isotropic resolution. Automatic brain masking is applied (Hoffmann et al., 2021) and dilated by 5 voxels for uncertainty. We split the time-series into 30/5/15 for training/validation/testing.
Baselines: We compare our method against a widely used CNN baseline for rigid registration ("Conv", de Vos et al. (2019)). We then perform ablations by successively removing the denoiser ("Augm + E-CNN") and the augmentation ("E-CNN", Moyer et al. (2021)).
Results: We first test all methods on simulated data obtained by augmenting the test scans as during training, such that we know the ground-truth transforms (Exp. 1). While the pure E-CNN yields slightly better scores than traditional CNNs, its learning capacity remains limited, as adding augmentation only leads to marginal improvements (Tab. 1). In comparison, employing a denoiser enables us to considerably outperform all other methods.
These results are confirmed when testing on the real time-series (Exp. 2), where our method yields superior brain overlap between registered consecutive frames (Tab. 1, Fig. 2).

4. Discussion and Conclusion
Building on rigid-equivariant networks, we presented a new registration-based motion tracking strategy that leverages a denoising CNN to decouple intensity and spatial features. Our method substantially outperforms traditional CNNs and E-CNNs for motion tracking of fetal brain MRI, and has the potential to be deployed online to improve fetal acquisitions.
Acknowledgements: This research is supported by NIH NIBIB NAC P41EB015902, NIH NICHD R01HD100009, NIH NIBIB 5R01EB032708, NIH NICHD R00HD101553, and the Swiss National Science Foundation.

References
Erik J. Bekkers, Maxime Lafarge, Mitko Veta, Koen Eppenhof, Josien Pluim, and Remco Duits. Roto-translation covariant convolutional networks for medical image analysis. In Medical Image Computing and Computer Assisted Intervention, pages 440–448, 2018.
Agisilaos Chartsias, Thomas Joyce, Giorgos Papanastasiou, Scott Semple, Michelle Williams, David Newby, Rohan Dharmakumar, and Sotirios Tsaftaris. Disentangled representation learning in cardiac image analysis. Medical Image Analysis, 58, 2019.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, volume 119, pages 1597–1607, 2020.
Bob de Vos, Floris Berendsen, Max Viergever, Hessam Sokooti, Marius Staring, and Ivana Išgum. A deep learning framework for unsupervised affine and deformable image registration. Medical Image Analysis, 52:128–143, 2019.
Malte Hoffmann, Daniel Moyer, Lawrence Zhang, Polina Golland, Borjan Gagoski, Ellen Grant, and André van der Kouwe. Learning-based automatic field-of-view positioning for fetal-brain MRI. In Proceedings of the International Society for Magnetic Resonance in Medicine, volume 29, 2021.
Berthold K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4:629–642, 1987.
Daniel Moyer, Esra Abaci Turk, P. Ellen Grant, William M. Wells, and Polina Golland. Equivariant filters for efficient tracking in 3D imaging. In Medical Image Computing and Computer Assisted Intervention, pages 193–202, 2021.
Fernando Pérez-García, Rachel Sparks, and Sébastien Ourselin. TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Computer Methods and Programs in Biomedicine, 2021.
O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015.
Seyed Salehi, Shadab Khan, Deniz Erdogmus, and Ali Gholipour. Real-time deep pose estimation with geodesic loss for image-to-template rigid registration. IEEE Transactions on Medical Imaging, 38:470–481, 2018.
Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. In Advances in Neural Information Processing Systems, volume 31, pages 10381–10392, 2018.
Marysia Winkels and Taco S. Cohen. 3D G-CNNs for pulmonary nodule detection. In Medical Imaging with Deep Learning, 2018.
 |
brK-VVoDpqo | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Brain age prediction using a multi-hop graph attention module (MGA) with a convolutional neural network

Heejoo Lim (1,2) hjhjlim@ewhain.net
Yoonji Joo (3) yoonjijoo@ewha.ac.kr
Eunji Ha (3) eunjiiha@ewha.ac.kr
Yumi Song (3,4) youme.a.song@gmail.com
Sujung Yoon (3,4) sujungjyoon@ewha.ac.kr
In Kyoon Lyoo (3,4,5,6) inkylyoo@ewha.ac.kr
Taehoon Shin (1,2) taehoons@ewha.ac.kr
(1) Division of Mechanical and Biomedical Engineering, Ewha W. University, Seoul, Korea
(2) Graduate Program in Smart Factory, Ewha W. University, Seoul, Korea
(3) Ewha Brain Institute, Ewha W. University, Seoul, Republic of Korea
(4) Department of Brain and Cognitive Sciences, Ewha W. University, Seoul, Republic of Korea
(5) Graduate School of Pharmaceutical Sciences, Ewha W. University, Seoul, Republic of Korea
(6) The Brain Institute and Department of Psychiatry, University of Utah, Salt Lake City, Utah, United States
Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, H. Lim, Y. Joo, E. Ha, Y. Song, S. Yoon, I.K. Lyoo & T. Shin.

Abstract
We propose a multi-hop graph attention module (MGA) that addresses the limitation of CNNs in capturing non-local connections of features for predicting brain age. MGA converts feature maps to graphs, calculates distance-based scores, and uses the Markov property and graph attention to capture direct and indirect connectivity. Combining MGA with sSE-ResNet18, we achieved a mean absolute error (MAE) of 2.822 years and a Pearson's correlation coefficient (PCC) of 0.968 using 2,788 T1-weighted MR images of healthy subjects. Our results present the possibility of MGA as a new algorithm for brain age prediction.
Keywords: Brain age prediction, graph attention, self attention, deep learning

1. Introduction
Deep learning applied to neuroimaging MRI can predict brain age, which can serve as a biomarker of brain diseases (Wang et al., 2019). While CNNs have been applied to brain age prediction, CNNs focus mainly on local features. To overcome this issue, we propose a novel multi-hop graph attention (MGA) module to enhance the performance of CNNs. MGA can be flexibly applied to any type of CNN architecture and exploits direct and indirect connections among non-local feature-domain regions in the middle of convolution layers. MGA was combined with sSE-ResNet18 (Roy et al., 2018) for the final model, which achieved the lowest MAE compared to other computer vision algorithms.

Figure 1: (A) An overview of the proposed multi-hop graph attention (MGA) module. (B) Model structure of MGA-sSE-ResNet18 for brain age prediction.

2. Method
A schematic diagram of the proposed MGA module is shown in Fig. 1(A). Placed in between convolution layers, the MGA module first constructs a graph structure by defining nodes and edges. After parcellating the feature map into N_p patches using the patch ratio γ, which is a hyperparameter, we aggregate all spatial-channel dimensions of each patch by using global average pooling (GAP) and global max pooling (GMP). The two pooled tensors are then concatenated and serve as a node set: H = {h_1, h_2, ..., h_Np}, h_i ∈ R^2. We then define the edge e_ij between the two nodes h_i and h_j as follows:

e_ij := 1 / exp(‖V h_i − V h_j‖_2)    (1)

We use the learnable embedding V ∈ R^(2x2) because the connections of patches are more complex than a direct similarity of image features. We obtain the m-hop edge matrix E_∀^(m) using the Markov property as follows:

Ẽ_∀^(m) = Ẽ + β Ẽ^2 + β^2 Ẽ^3 + ... + β^(m−1) Ẽ^m = Σ_{k=1}^{m} β^(k−1) Ẽ^k,    E_∀^(m) := (Ẽ_∀^(m) + (Ẽ_∀^(m))^T) / 2    (2)

Note that m and β are also hyperparameters, which determine the hop size and the multi-hop weight, respectively. The formed graph sets go through a graph attention block with a gate operation that updates the patch set based on self-attention and obtains an updated feature map. This procedure can be repeated in parallel for k sets of patches, and the resulting feature updates are integrated to produce the final output of the module (a sketch of the edge construction follows below). We combined MGA with sSE-ResNet18 for our final prediction model, as shown in Fig. 1(B).
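As a concrete reading of Eqs. (1)-(2), the sketch below builds the edge matrix from the pooled node set and accumulates the m-hop propagation before symmetrizing. Treating the learnable embedding V as a plain 2 x 2 matrix (rather than a trained parameter) is a simplification for illustration.

```python
import numpy as np

def edge_matrix(H, V):
    """Eq. (1): e_ij = exp(-||V h_i - V h_j||_2). H: (Np, 2), V: (2, 2)."""
    P = H @ V.T                                           # rows are V h_i
    dists = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    return np.exp(-dists)

def multi_hop_edges(E, m, beta):
    """Eq. (2): sum_{k=1..m} beta^(k-1) E^k, then symmetrize."""
    Ek, out = E.copy(), E.copy()
    for k in range(2, m + 1):
        Ek = Ek @ E                                       # k-th matrix power
        out = out + beta ** (k - 1) * Ek
    return (out + out.T) / 2
```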
3. Experiment and Result
Three-dimensional T1-weighted MR images of 2,788 healthy subjects (age: 20–70 years) were obtained from 7 public datasets: OpenNeuro, COBRE, OpenfMRI, INDI, IXI, FCP1000, and XNAT. We randomly divided the samples into three subsets: 1) the training dataset (1,951 samples), 2) the validation dataset (419 samples), and 3) the test dataset (418 samples). We first examined the key parameters of MGA, namely the hop size m, the patch ratio γ, the number of branches k, and the multi-hop weight β; the results are displayed in Fig. 2. Fig. 2(A) shows that the test MAEs of MGA with m < 5 are lower than the MAE of multi-head self-attention (MSA), indicating that it is beneficial to consider only the important embeddings, rather than all of them, when calculating the self-attention coefficients. The final network was chosen based on performance on the validation dataset. We also compared our model with 5 different CNN models, where SFCN (Peng et al., 2021) and TSAN (Cheng et al., 2021) are the state-of-the-art models in the brain age prediction field. In Fig. 3(1), MGA-sSE-ResNet18 achieved the lowest MAE (2.822 years) and the highest PCC (0.968) among the comparisons. Other prediction models, such as the Vision Transformer (ViT) or graph attention networks (GATs), were also evaluated, but showed poor performance, presumably due to insufficient training data. It is also shown that implementing the MGA module reduces model bias and variance (Fig. 3(2)). From these results, we have shown that interleaving MGA with a conventional CNN can improve accuracy and is thus effective for brain age prediction.

Figure 2: (A) Effect of hop size m (blue: test error, red: train error). (B) Effect of γ and k (gray: k=1, red: k=2). (C) Effect of multi-hop weight β.

Figure 3: Comparison of model performance: (1) performance comparison with other SOTA models (table below); (2) scatter plots of (A) MGA-sSE-ResNet18 and (B) sSE-ResNet18.

Model            | MAE   | PCC
ResNet18         | 3.249 | 0.948
sSE-ResNet18     | 3.239 | 0.956
DenseNet121      | 3.340 | 0.961
SFCN             | 3.233 | 0.949
TSAN             | 2.892 | 0.956
MSA-sSE-ResNet18 | 3.216 | 0.960
MGA-sSE-ResNet18 | 2.822 | 0.968
MGA-ResNet18     | 3.065 | 0.955

Acknowledgment
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1A6A1A03043528), and an Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No. RS-2022-00155966, Artificial Intelligence Convergence Innovation Human Resources Development).

References
Jian Cheng, Ziyang Liu, Hao Guan, Zhenzhou Wu, Haogang Zhu, Jiyang Jiang, Wei Wen, Dacheng Tao, and Tao Liu. Brain age estimation from MRI using cascade networks with ranking loss. IEEE Transactions on Medical Imaging, 40(12):3400–3412, 2021.
Han Peng, Weikang Gong, Christian F Beckmann, Andrea Vedaldi, and Stephen M Smith. Accurate brain age prediction with lightweight deep neural networks.
Medical Image Analysis, 68:101871, 2021.
Abhijit Guha Roy, Nassir Navab, and Christian Wachinger. Recalibrating fully convolutional networks with spatial and channel "squeeze and excitation" blocks. IEEE Transactions on Medical Imaging, 38(2):540–549, 2018.
Johnny Wang, Maria J Knol, Aleksei Tiulpin, Florian Dubost, Marleen de Bruijne, Meike W Vernooij, Hieab HH Adams, M Arfan Ikram, Wiro J Niessen, and Gennady V Roshchupkin. Gray matter age prediction as a biomarker for risk of dementia. Proceedings of the National Academy of Sciences, 116(42):21213–21218, 2019.
 |
fnIAVuZa9J | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023

Digital Staining of Unpaired White and Blue Light Cystoscopy Videos for Bladder Cancer Detection in the Clinic

Shuang Chang (1) shuang.chang@vanderbilt.edu
Haoli Yin (2) haoli.yin@vanderbilt.edu
Kristen Scarpato (3) kristen.r.scarpato@vumc.org
Amy Luckenbaugh (3) amy.n.luckenbaugh@vumc.org
Sam Chang (3) sam.chang@vumc.org
Christian Bolenz (4) christian.bolenz@umm.de
Maximilian C. Kriegmair (5,6) maximilian.kriegmair@medma.uni-heidelberg.de
Nikolaos C. Deliolanis (6) n.deliolanis@thericon.com
Soheil Kolouri (2) soheil.kolouri@vanderbilt.edu
Audrey Bowden (1,7) a.bowden@vanderbilt.edu
(1) Vanderbilt University, Department of Biomedical Engineering, Nashville, TN, United States, 37232
(2) Vanderbilt University, Department of Computer Science, Nashville, TN, United States, 37232
(3) Vanderbilt University Medical Center, Department of Urology, Nashville, TN, United States, 37232
(4) University of Ulm, Department of Urology, Ulm, Germany
(5) Urology Hospital Planegg, Department of Urology, Planegg, Germany
(6) Heidelberg University, Medical Faculty Mannheim, Mannheim, Germany
(7) Vanderbilt University, Department of Electrical Engineering, Nashville, TN, United States, 37232
Editors: Accepted at MIDL 2023
©2023 CC-BY 4.0, S. Chang et al.

Abstract
Blue light cystoscopy (BLC) has been shown to detect bladder tumors with better sensitivity than white light cystoscopy (WLC); however, its increased cost and dye administration time have challenged widespread adoption of the technology. Here, we demonstrate a low-cost strategy to generate BLC images directly from WLC images. We performed digital staining of WLC images obtained from tumor resection procedures and demonstrate that the resulting digitally generated BLC images show strong resemblance to ground-truth BLC images, with negligible degradation of image quality.
Keywords: Bladder cancer, cystoscopy, style transfer, deep learning.

1. Introduction
The high recurrence rate of urothelial carcinoma necessitates repeated surveillance cystoscopy to detect suspicious lesions in the bladder. Left untreated, undetected or incompletely resected non-invasive cancers may progress to the muscle-invasive stage and require aggressive treatment, including removal of the bladder. White light cystoscopy (WLC) is commonly used to examine the bladder for suspicious lesions during a transurethral resection of bladder tumor (TURBT) procedure. Blue light cystoscopy (BLC) utilizes an exogenous contrast dye that selectively accumulates in cancerous tissues. With the added contrast, BLC successfully reduces short-term recurrence by 10% and increases the detection rate of high-grade tumors by 43%, compared to WLC. Despite the sensitivity advantage of BLC, the high cost of the system and the time and space needed to administer the dye prior to imaging have limited the availability of BLC to few (less than 5%) hospitals in the U.S. and limited its use to the operating room. Moreover, a significant number of patients are unable to retain the dye for the required instillation time. A simple, quick and low-cost strategy to produce BLC images would make improved detection sensitivity accessible to more hospitals and enable affordability of BLC technology for use in clinics.
To address the sensitivity limitation of WLC, a classification-based approach to identify tumors present in WLC frames, CystoNet, was introduced (Shkolyar et al., 2019).
However, the model was trained on manually labeled WLC frames, which by definition involves a subset of tumors already detectable by human eyes under white light. To overcome the challenge of low sensitivity, it is important to visualize tumors that are not currently seen under white light imaging. In 2021, Ali et al. introduced a BLC-image-based artificial intelligence diagnostic platform, which classified malignant lesions with 95.77% sensitivity (Ali et al., 2021). However, the proposed platform can only be utilized in the few hospitals and clinics where BLC systems are already available.
In our study, we aim to enable dye-free bladder tumor detection by using deep learning to create BLC-like images (that is, digitally generated BLC images) from WLC images that have been digitally stained (Chang et al., 2023). To our knowledge, this is the first demonstration of digital staining on cystoscopy data. Our proposed workflow has the potential to reduce the current gap in bladder cancer detection by improving the detection sensitivity of WLC while increasing the accessibility of BLC-like images without the burdens of cost and dye administration.

2. Methods and Results
Data collection and preparation. Data used in this study were originally collected for a proof-of-concept study of multiparametric cystoscopy for bladder cancer imaging (Kriegmair et al., 2020). A color camera equipped with a multi-bandpass filter and a multi-LED light source were used to collect near-simultaneous reflectance (i.e., WLC) and fluorescence (i.e., BLC) frames through temporal multiplexing. Near-perfectly registered WLC and BLC videos at a frame rate of 20 Hz were derived from the multispectral data collected, among others; the blue light videos provide ground-truth data to evaluate our network. Videos from three patients were used for our study, where the frames included papillary tumors, flat tumors, and normal bladder tissue regions. The videos were concatenated, and paired frames were extracted and cropped to 256 by 256 pixels to create sequential image data. The sequential data were then split to have the first 90% reserved for training and the last 10% reserved for testing as the holdout set. To prepare the training data for the model, the WLC and BLC pairs were first synthetically unpaired by randomizing the order of the BLC frames while keeping the WLC frame order fixed. Then, the frames from both BLC and WLC were randomly split into 80/20 training and validation sets.
Transfer model. To create a robust model for semantically-aware modality transfer, we trained our model using unpaired WLC/BLC image data following the Density Changing Regularized Unpaired Image Translation (DECENT) method (Xie et al., 2022). We employed autoregressive flows for density estimation and used a ResNet-based generator with a PatchGAN discriminator. During the training process, we first updated the density estimators, followed by updating the discriminator and optimizing the generator with the Polyak-averaged version of the density estimator and the LSGAN (Least Squares GAN) objective to help stabilize the learning process. The model consisted of three terms from the original method: an adversarial loss, an identity mapping loss, and a density changing loss. The outputs are the digitally stained WLC frames or, equivalently, dgBLC frames.
Evaluation metrics. We defined three categories of analysis metrics that evaluate staining accuracy, color consistency and overall image quality. Staining accuracy assessment was performed by creating a fluorescence segmentation mask. Using the BLC data as the ground truth, we compared its masks with those of the corresponding dgBLC data and computed the percentages of correctly and incorrectly stained pixels (i.e., a red pixel in the ground truth showing up as a blue pixel in the dgBLC output, or vice versa); a toy sketch of this comparison follows below. To assess the realistic appearance of the network output, the color of the dgBLC images was analyzed in the YCbCr color space for each color channel. For overall image quality, both reference-based (FSIM, PSNR) and reference-less (BRISQUE) image quality metrics were computed.
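As a concrete illustration of the staining-accuracy comparison just described, here is a toy sketch comparing binary fluorescence masks from the ground-truth BLC frame and the dgBLC frame. The exact mask-extraction procedure and percentage definitions are not given in the paper, so everything beyond the pixel-wise comparison idea is our assumption.

```python
import numpy as np

def staining_agreement(mask_blc, mask_dgblc):
    """Both inputs: boolean arrays marking fluorescent (stained) pixels."""
    correct = np.logical_and(mask_blc, mask_dgblc).sum() / max(mask_blc.sum(), 1)
    false_stain = np.logical_and(~mask_blc, mask_dgblc).sum() / max(mask_dgblc.sum(), 1)
    missed = np.logical_and(mask_blc, ~mask_dgblc).sum() / max(mask_blc.sum(), 1)
    return correct, false_stain, missed
```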
Table 1 reports the mean and standard deviation of the assessment metrics, where we observed excellent agreement in staining area and color and negligible degradation in overall image quality. Figure 1 shows two examples of original WLC-BLC pairs and the output dgBLC images. It is important to note that while this proof-of-concept study uses a registered dataset for quality-evaluation purposes, our approach does not rely on registered datasets. In future work, we will train the model using clinically acquired WLC and BLC videos, where the video frames are no longer perfectly registered.

Table 1: Assessment metrics calculated from digitally stained frames from the testing set, using the corresponding BLC frames as reference.

Figure 1: Two examples of WLC-BLC image pair and dgBLC results.

References
Nairveen Ali, Christian Bolenz, Tilman Todenhöfer, Arnulf Stenzel, Peer Deetmar, Martin Kriegmair, Thomas Knoll, Stefan Porubsky, Arndt Hartmann, Jürgen Popp, Maximilian C. Kriegmair, and Thomas Bocklitz. Deep learning-based classification of blue light cystoscopy imaging during transurethral resection of bladder tumors. Scientific Reports, 11(1):1–10, 2021. doi: 10.1038/s41598-021-91081-x.
Shuang Chang, Ali Abbasi, Kristen Scarpato, Amy Luckenbaugh, Sam Chang, Soheil Kolouri, and Audrey K Bowden. Bringing blue light cystoscopy to the office: digital staining on matched white and blue light cystoscopy videos. In Proc. SPIE, volume PC12368, page PC123680P, March 2023. doi: 10.1117/12.2649276.
Maximilian C. Kriegmair, Jan Rother, Bartłomiej Grychtol, Martin Theuring, Manuel Ritter, Cagatay Günes, Maurice S. Michel, Nikolaos C. Deliolanis, and Christian Bolenz. Multiparametric cystoscopy for detection of bladder cancer using real-time multispectral imaging. European Urology, 77(2):251–259, 2020. doi: 10.1016/j.eururo.2019.08.024.
Eugene Shkolyar, Xiao Jia, Timothy C. Chang, Dharati Trivedi, Kathleen E. Mach, Max Q.H. Meng, Lei Xing, and Joseph C. Liao. Augmented bladder tumor detection using deep learning. European Urology, 76(6):714–718, December 2019. doi: 10.1016/j.eururo.2019.08.032.
Shaoan Xie, Qirong Ho, and Kun Zhang. Unsupervised image-to-image translation with density changing regularization. NeurIPS, 2022. URL https://github.com/Mid-Push/.
 |
MCAgRjgh6v | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

GammaFocus: An image augmentation method to focus model attention for classification

Ana Leni Frei (1) ana.frei@unibe.ch
Amjad Khan (1) amjad.khan@unibe.ch
Philipp Zens (1) philipp.zens@unibe.ch
Alessandro Lugli (1) alessandro.lugli@unibe.ch
Inti Zlobec (1) inti.zlobec@unibe.ch
Andreas Fischer (2,3) andreas.fischer@unifr.ch
(1) Institute of Tissue Medicine and Pathology, University of Bern, Switzerland
(2) Document, Image and Video Analysis Research Group, University of Fribourg, Switzerland
(3) iCoSyS, University of Applied Sciences and Arts Western Switzerland, Fribourg, Switzerland
Editors: Under Review for MIDL 2023
©2023 CC-BY 4.0, A.L. Frei, A. Khan, P. Zens, A. Lugli, I. Zlobec & A. Fischer.

Abstract
In histopathology, histologic elements are not randomly located across an image but organize into structured patterns. In this regard, classification tasks or feature extraction from histology images may require context information to increase performance. In this work, we explore the importance of keeping context information for a cell classification task on Hematoxylin and Eosin (H&E) scanned whole slide images (WSI) in colorectal cancer. We show that, to differentiate normal from malignant epithelial cells, the environment around the cell plays a critical role. We propose here an image augmentation based on gamma variations to guide deep learning models to focus on the object of interest while keeping context information. This augmentation method yielded more specific models and helped to increase model performance (weighted F1 score with/without gamma augmentation respectively, PanNuke: 99.49 vs. 99.37 and TCGA: 91.38 vs. 89.12, p < 0.05).
Keywords: digital pathology, gamma correction, image augmentation, contrast enhancement, image classification

1. Introduction
Digital pathology whole slide images (WSI) are large images that need to be divided into smaller patches in order to apply deep learning methods for classification of histologic elements (Lee et al., 2021; Janowczyk and Madabhushi, 2016). When performing classification of small tissue fractions, such as cells, the optimal crop size around the region of interest (ROI) can be difficult to estimate. The object of interest can be difficult to classify by itself and might need contextual tissue information to make the proper decision. For that reason, the crop size should be optimized to find the correct balance between the amount of information coming from the object of interest and surrounding context information. In this work, we show that for cell-based classification the context of the cells matters, and use epithelial cell classification in colorectal tissue as an example. We first evaluated different patch sizes around the ROIs and showed that performance increased with larger patch sizes. However, the information originating from the object of interest might be diluted when taking patches much larger than the object itself. To address this undesired effect, we propose here an image augmentation based on gamma variations to increase the contrast in the region of the object to be classified and decrease the brightness of its surroundings (Nateghi et al., 2021). We show that by using this gamma focusing, the classification accuracy significantly improves while guiding the model to focus on the area of interest even when using large patch sizes.
2. Materials and Methods
The dataset was composed of epithelial cells extracted from the Lizard dataset, as well as epithelial cells annotated by experts on TCGA and our institute's cohort (Graham et al., 2021). This resulted in 66,034 normal cells and 119,013 malignant cells for training and validation. Cells extracted from TCGA and PanNuke (a subset of Lizard) were kept as unseen test data, with 12,751 normal cells and 17,717 malignant cells. All images were used at 20X (0.5 μm/pixel) and were cropped around cell centroids to extract cell patches.
ResNet18 and MobileNet were trained for normal versus malignant epithelial cell classification (He et al., 2015; Sandler et al., 2019). As the morphology of malignant cells can vary in size, the smallest patch size used was 32x32 px in order to get the complete cell in the patch. The models were trained for different patch sizes around the cells: 32x32 px, 64x64 px and 128x128 px. All models were trained using 5-fold cross-validation. Table 1 shows the models' performance for the different patch sizes. As the highest F1 score was achieved using 128x128 px patches, these results were analysed with GradCam and further improved using the GammaFocus augmentation. Finally, ViT16 was trained to compare the performance with a model made to retain spatial structural information (Dosovitskiy et al., 2021). Models were compared using the statistical McNemar test for paired samples.
GammaFocus: The GammaFocus (GF) augmentation relies on the gamma correction used to adjust contrast in image analysis (Somasundaram and Kalavathi, 2012; Rahman et al., 2016). This correction is a non-linear transformation that encodes the brightness of the image. It is based on the following power-law expression:

I_out = I_in^γ

where γ encodes the change in brightness: γ > 1 implies a gamma expansion and thus increases the contrast, while γ < 1 is a gamma compression and reduces the contrast. For RGB images, the gamma augmentation is applied per channel.
In our experimental setup, the brightness in the central 64x64 px of the 128x128 px input patches was increased, while the brightness surrounding this central region was decreased. For that we used γ = 1.5 and γ = 0.5, respectively (a sketch follows below). GF augmentation on H&E cell patches can be seen in Figure 1.
During the training process, multiple other image augmentations were applied (rotation, flip, stain variations) before applying the GF transform.
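The transform itself is a two-gamma power law; a minimal sketch on a [0, 1]-scaled RGB patch is given below. gamma_center = 1.5 is from the paper, while gamma_border = 0.5 is our reading of the garbled second value in the scan, and the function name is ours.

```python
import numpy as np

def gamma_focus(patch, gamma_center=1.5, gamma_border=0.5, center=64):
    """Apply I_out = I_in^gamma per pixel/channel on a (H, W, 3) patch in [0, 1],
    with one gamma inside the central `center` x `center` window and another outside."""
    h, w = patch.shape[:2]
    y0, x0 = (h - center) // 2, (w - center) // 2
    out = patch ** gamma_border                          # transform the surroundings
    out[y0:y0 + center, x0:x0 + center] = (
        patch[y0:y0 + center, x0:x0 + center] ** gamma_center
    )                                                    # transform the central region
    return out
```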
GradCam: The GradCam method was used to highlight the regions that most impact the model's decision when trained with and without GammaFocus (Selvaraju et al., 2019; Gildenblat and contributors, 2021). GradCam heatmaps overlaid on the input images can be seen in Figure 1.

Figure 1: GradCam heatmaps over cell patches from PanNuke and TCGA, with/without GF, for ResNet18. Black dotted circles highlight the cell of interest.

Table 1: ResNet18 weighted F1 score for different patch sizes around the epithelial cells for normal vs. malignant binary classification.

Patch size (px) | PanNuke       | TCGA
32x32           | 91.00 ± 0.006 | 68.12 ± 0.01
64x64           | 96.99 ± 0.002 | 81.43 ± 0.016
128x128         | 98.58 ± 0.04  | 89.12 ± 0.12

Table 2: Weighted F1 score for different methods with and without GammaFocus (GF) for normal vs. malignant binary classification.

Method         | PanNuke       | TCGA
ViT16          | 99.10 ± 0.004 | 89.47 ± 0.004
ViT16 + GF     | 99.31 ± 0.003 | 88.55 ± 0.01
MobileNet      | 99.37 ± 0.004 | 89.39 ± 0.01
MobileNet + GF | 99.49 ± 0.002 | 90.13 ± 0.01
ResNet18       | 98.58 ± 0.04  | 89.12 ± 0.12
ResNet18 + GF  | 99.13 ± 0.08  | 91.38 ± 0.17

3. Results
Classification accuracy increased with the patch size (Table 1). However, GradCam heatmaps show that the model did not necessarily use the cell of interest for the classification decision. Upon GF augmentation, the model paid more attention to the cells of interest (see Figure 1), and the classification F1 score also increased, performing significantly better than models trained without GF, as can be seen in Table 2 (p < 0.05 for ResNet18 and MobileNet). Best results were obtained by applying GF during both training and inference.

4. Discussion and Conclusion
The GF augmentation method allows taking larger crops around the ROI while guiding the model to focus mainly on the object to be classified, and it increases the models' performance. The behaviour of ViT16 with GF is not as clear as that of ResNet18 and MobileNet and should be explored further in future work.

Acknowledgments
This work was funded by the Swiss National Science Foundation (CRSII5 193832). Results presented here are based upon data generated by the TCGA Research Network.

References
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
Jacob Gildenblat and contributors. PyTorch library for CAM methods. https://github.com/jacobgil/pytorch-grad-cam, 2021.
Simon Graham, Mostafa Jahanifar, Ayesha Azam, Mohammed Nimir, Yee Wah Tsang, Katherine Dodd, Emily Hero, Harvir Sahota, Atisha Tank, Ksenija Benes, Noorul Wahab, Fayyaz Minhas, Shan E Ahmed Raza, Hesham El Daly, Kishore Gopalakrishnan, David Snead, and Nasir Rajpoot. Lizard: A large-scale dataset for colonic nuclear instance segmentation and classification. In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), pages 684–693, 2021.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
Andrew Janowczyk and Anant Madabhushi. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. Journal of Pathology Informatics, 7(1):29, 2016. doi: 10.4103/2153-3539.186902.
K Lee, JH Lockhart, M Xie, R Chaudhary, RJC Slebos, ER Flores, CH Chung, and AC Tan. Deep learning of histopathology images at the single cell level. Frontiers in Artificial Intelligence, 2021. doi: 10.3389/frai.2021.754641.
Ramin Nateghi, Habibollah Danyali, and Mohammad Sadegh Helfroush. A deep learning approach for mitosis detection: Application in tumor proliferation prediction from whole slide images. Artificial Intelligence in Medicine, 114:102048, 2021. doi: 10.1016/j.artmed.2021.102048.
Shanto Rahman, Mostafijur Rahman, M. Abdullah-Al-Wadud, Golam Dastegir Al-Quaderi, and Mohammad Shoyaib. An adaptive gamma correction for image enhancement. EURASIP Journal on Image and Video Processing, 35, 2016.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks, 2019.
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision, 128(2):336–359, 2019.
Karuppana Gounder Somasundaram and P. Kalavathi. Medical image contrast enhancement based on gamma correction. 2012.
 |
7mwxN2h7SM | Medical Imaging with Deep Learning 2023

An end-to-end Complex-valued Neural Network approach for k-space interpolation in Parallel MRI

Poornima Jain poornima.jain@iiitb.ac.in
Neelam Sinha neelam.sinha@iiitb.ac.in
G. Srinivasaraghavan gsr@iiitb.ac.in
International Institute of Information Technology Bangalore
©2023 CC-BY 4.0, P. Jain, N. Sinha & G. Srinivasaraghavan.

Abstract
Parallel MRI techniques in the k-space, like GRAPPA, are widely used in accelerated MRI. Recently, neural-network approaches have shown improved performance over linear methods like GRAPPA. But present-day neural networks are largely tailored to process real data; hence the complex-valued k-space data is processed as two-dimensional real data in these. In this work, we study the performance of an end-to-end complex-valued architecture for interpolating missing values in the k-space for parallel MRI, which we call the Complex rRAKI. We propose a novel activation function, the PlaneReLU, which is a generalized version of the ReLU on the complex plane. The performance of the Complex rRAKI is evaluated on two publicly available k-space MRI datasets, the fastMRI multicoil brain and knee datasets. Comparisons of the obtained results with those of the baseline rRAKI are also presented. The proposed Complex rRAKI achieves improved performance over the baseline with respect to the standard metrics SSIM and NRMSE with 50% fewer parameters.
Keywords: Complex-valued neural network, Complex ReLU, Parallel MRI, GRAPPA

1. Introduction
Parallel MRI uses multiple acquisition coils to acquire partial k-space data and then exploits the position information of the coils to obtain high-quality reconstructions from the data. The GRAPPA (Griswold et al., 2002) method estimates missing values in the k-space by assuming them to be linearly dependent on neighboring acquired values. Recently, scan-specific neural network approaches (Zhang et al., 2019; Arefeen et al., 2021) have been developed to learn a potentially non-linear relationship instead. However, real-valued neural networks may not be able to exploit the information in inherently complex-valued datasets. As MRI sensor data is complex-valued, a few works have adapted complex-valued neural networks to MRI reconstruction (Cole et al., 2020; Chatterjee et al., 2021; Vasudeva et al., 2020). But most of these works denoise poor-quality zero-filled reconstructions. When the zero-filled reconstructions have many artefacts, there may be loss of important details which cannot be reconstructed back. Thus, it is important to work with the k-space directly. Also, to the best of our knowledge, none of the previous works explores a scan-specific complex-valued neural network, thus relying on huge datasets for training. In this work, we implement an end-to-end complex-valued neural network for the scan-specific Residual RAKI (Zhang et al., 2019) approach. The major contributions of this work are:
1. We implement an end-to-end complex-valued neural network trained using complex-valued optimization, called the Complex rRAKI, for a scan-specific approach for parallel MRI called Residual RAKI (rRAKI).
2. We propose a novel activation function, the PlaneReLU, which is a generalized version of the ReLU activation function on the complex plane.

2. Methods
Let I ∈ C^(H x W x C̄) be an input k-space of shape (H, W, C̄), where H and W refer to the height and width and C̄ is the number of coils, also known as the number of channels. Let AC_j denote the autocalibration region in channel j, j = 1, 2, ..., C̄.
Let y_j denote the vector containing the target elements at locations (k_x, k_y) ∈ AC_j, and let y_source denote the vector containing the corresponding neighboring elements in the autocalibration regions across all channels. Then Complex rRAKI is trained over the autocalibration region by minimizing the cost function

L(γ_j, θ_j) = min_{γ_j, θ_j} ‖y_j − G_j(y_source; γ_j) − F_j(y_source; θ_j)‖^2 + λ ‖y_j − G_j(y_source; γ_j)‖^2    (1)

where G_j: C^n → C^m is a linear complex convolution operator parameterized by γ_j, and F_j: C^n → C^m is a complex-valued CNN parameterized by θ_j, having two blocks of complex convolution and PlaneReLU (Section 2.1) activation followed by a complex convolution operation. L(γ_j, θ_j): C → R is real-valued, as it computes the L2 norm between the complex output and target. After learning G_j and F_j for each channel j, the interpolation for the vector of missing values s in channel j is performed as

s(k_x, k_y, j) = G_j(N(k_x, k_y)) + F_j(N(k_x, k_y))    (2)

where N(k_x, k_y) denotes the neighborhood of the corresponding missing point (k_x, k_y) across all channels.

2.1. Proposed Complex-valued Activation Function: the PlaneReLU
We propose a version of the ReLU defined over the complex plane, called the PlaneReLU. For an input z = x + iy ∈ C, i.e., x, y ∈ R, the PlaneReLU is defined as

PlaneReLU(x + iy) = x + iy, if ax + by + c ≥ 0;  ((a + b + c)/α)(x + iy), otherwise    (3)

where a, b, c ∈ R are learnable parameters and α is a hyperparameter that we set to 3. The PlaneReLU activation function divides the complex plane into two halves about the line ax + by + c = 0. It fires the input as-is in one half of the plane and fires a scaled version of the input in the other half. The parameters a, b and c are learnt to define a suitable line according to the training dataset. The PlaneReLU considers both magnitude and phase information while firing, without any bias towards either, unlike other ReLU-inspired complex-valued activation functions such as the zReLU and modReLU (Trabelsi et al., 2018). It also does not distort the input phase, unlike the CReLU (Trabelsi et al., 2018).
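A sketch of PlaneReLU (Eq. 3) as a PyTorch module on complex tensors is given below. The learnable a, b, c and alpha = 3 follow the paper; reading the OCR-garbled scale factor as (a + b + c)/alpha is our assumption, as is the parameter initialization.

```python
import torch

class PlaneReLU(torch.nn.Module):
    def __init__(self, alpha=3.0):
        super().__init__()
        # learnable line ax + by + c = 0 in the complex plane
        self.abc = torch.nn.Parameter(torch.tensor([1.0, 1.0, 0.0]))
        self.alpha = alpha

    def forward(self, z):  # z: complex-valued tensor
        a, b, c = self.abc
        keep = (a * z.real + b * z.imag + c) >= 0
        scaled = ((a + b + c) / self.alpha) * z   # assumed reading of Eq. (3)
        return torch.where(keep, z, scaled)
```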
3. Results, Discussion and Conclusions
The performance of the rRAKI and Complex rRAKI architectures on the fastMRI multicoil brain and knee datasets (Zbontar et al., 2018) is presented in Table 1 and Figure 1. By achieving improved or comparable performance with the SOTA methodology w.r.t. the SSIM, PSNR and NRMSE metrics, with 50% fewer parameters (the complex convolution layer has 50% fewer parameters than the corresponding real layer (Jain et al., 2022)), Complex rRAKI, along with its novel PlaneReLU activation function, shows promising potential for exploring complex-valued neural networks in the k-space domain as well as other such complex-valued domains. The improved performance of Complex rRAKI is attributed to the structure of its network, which respects the complex-valued algebraic structure of the input, thus constraining the degrees of freedom in the neural network and assisting improved learning.

Table 1: Average PSNR, NRMSE and SSIM metrics for k-space data from the fastMRI multicoil brain and knee datasets at an acceleration factor of 5 with cartesian undersampling.

fastMRI multicoil brain dataset
Metric        | PSNR         | NRMSE        | SSIM
rRAKI         | 31.51 ± 1.3  | 0.20 ± 0.041 | 0.84 ± 0.036
Complex rRAKI | 31.83 ± 0.79 | 0.23 ± 0.08  | 0.87 ± 0.027
fastMRI multicoil knee dataset
rRAKI         | 28.7 ± 0.73  | 0.45 ± 0.09  | 0.60 ± 0.07
Complex rRAKI | 29 ± 0.49    | 0.35 ± 0.047 | 0.67 ± 0.05

Figure 1: Sample reconstructions and difference images of rRAKI and Complex rRAKI on (a) the fastMRI knee dataset and (b) the fastMRI brain dataset. The pixel-level comparison of the 300th index in the output is also shown.

Source code: https://github.com/jain-p9/Complex-rRAKI
Acknowledgement: The authors gratefully acknowledge the financial support of this project through the Mphasis F1 Foundation.

References
Yamin Arefeen, Onur Beker, Jaejin Cho, Heng Yu, Elfar Adalsteinsson, and Berkin Bilgic. Scan-specific artifact reduction in k-space (SPARK) neural networks synergize with physics-based reconstruction to accelerate MRI. Magnetic Resonance in Medicine, 87(2):764–780, 2021. doi: 10.1002/mrm.29036.
Soumick Chatterjee, Chompunuch Sarasaen, Alessandro Sciarra, Mario Breitkopf, Steffen Oeltze-Jafra, Andreas Nürnberger, and Oliver Speck. Going beyond the image space: undersampled MRI reconstruction directly in the k-space using a complex valued residual neural network. In 2021 ISMRM & SMRT Annual Meeting & Exhibition, page 1757, 2021.
Elizabeth K. Cole, Joseph Y. Cheng, John M. Pauly, and Shreyas S. Vasanawala. Analysis of deep complex-valued convolutional neural networks for MRI reconstruction. arXiv, 2020. doi: 10.48550/ARXIV.2004.01738.
Mark Griswold, Peter Jakob, Robin Heidemann, Mathias Nittka, Vladimir Jellus, Jianmin Wang, Berthold Kiefer, and Axel Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magnetic Resonance in Medicine, 47:1202–1210, 2002. doi: 10.1002/mrm.10171.
Poornima Jain, Chakka Sai Pradeep, and Neelam Sinha. The complex-valued PD-net for MRI reconstruction of knee images. In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pages 2093–2096, 2022. doi: 10.1109/EMBC48229.2022.9872016.
Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep Subramanian, Joao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher J Pal. Deep complex networks. In International Conference on Learning Representations, 2018.
Bhavya Vasudeva, Puneesh Deora, Saumik Bhattacharya, and Pyari Mohan Pradhan. Co-VeGAN: Complex-valued generative adversarial network for compressive sensing MR image reconstruction. arXiv, abs/2002.10523, 2020.
Jure Zbontar, Florian Knoll, Anuroop Sriram, Matthew Muckley, Mary Bruno, Aaron Defazio, Marc Parente, Krzysztof Geras, Joe Katsnelson, Hersh Chandarana, Zizhao Zhang, Michal Drozdzal, Adriana Romero, Michael Rabbat, Pascal Vincent, James Pinkerton, Duo Wang, Nafissa Yakubova, Erich Owens, and Tullie Murrell. fastMRI: An open dataset and benchmarks for accelerated MRI. 2018.
Chi Zhang, Seyed Amir Hossein Hosseini, Steen Moeller, Sebastian Weingärtner, Kamil Ugurbil, and Mehmet Akcakaya. Scan-specific residual convolutional neural networks for fast MRI using residual RAKI. In 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pages 1476–1480, 2019. doi: 10.1109/IEEECONF44664.2019.9048706.
 |
UT7nTUxpaJZ | Medical Imaging with Deep Learning 2023

High Frequency Structural MRI Signal Conditioned MRA Synthesis with Denoising Diffusion Probabilistic Model

Haoyu Lan (haoyulan@usc.edu), Kirsten M. Lynch (Kirsten.Lynch@loni.usc.edu), Arthur W. Toga (toga@usc.edu), Jeiran Choupan (choupan@usc.edu)
Laboratory of Neuro Imaging, USC Mark and Mary Stevens Neuroimaging and Informatics Institute, USC Keck School of Medicine, University of Southern California, Los Angeles, California, USA

Abstract

Magnetic resonance angiography (MRA) allows for the non-invasive visualization of vasculature in the human body and is widely used in hospitals to identify aneurysms and the location of a stroke. Generating MRA from the commonly available T1-weighted (T1w) MRI modality would broaden the possibilities for studying vasculature, because T1w is acquired in most neuroimaging datasets while MRA is not. In this work, we propose a method using the statistical generative model called the denoising diffusion probabilistic model (DDPM) to tackle the MRA synthesis task. Our experiments show that by diffusing the high frequency signal, which explains the major signal difference between MRA and T1w, the DDPM can successfully synthesize MRA with good quality. The proposed method also conditions the score-matching estimation on the high frequency signal of the T1w modality, which enables accurate one-to-one synthesis between MRA and T1w.

Keywords: MRA, diffusion probabilistic model, high frequency signal, synthesis, vasculature

1. Introduction

Magnetic resonance angiography (MRA) is one of the commonly used neuroimaging modalities to visualize cerebrovascular anatomy and vessel abnormalities. As a non-invasive technique, MRA is a safe alternative to traditional angiography methods and can aid in the diagnosis and treatment planning of vascular conditions such as stroke and aneurysms. Additionally, recent evidence has implicated the cerebrovascular system in many other neurological conditions, such as Alzheimer's disease, and it is a crucial component of the waste clearance system. Therefore, improved visualization of brain vasculature can provide critical insight into overall brain health. However, compared to the common T1-weighted (T1w) magnetic resonance imaging (MRI) modality, MRA is generally not acquired in large public datasets, which hinders the study of vasculature in large populations. Statistical generative models like generative adversarial networks (Olut et al., 2018) have shown the power to handle the neuroimaging modality synthesis task. In this work, we propose to use the recent state-of-the-art generative model, the denoising diffusion probabilistic model (DDPM) (Ho et al., 2020), to synthesize MRA from the T1-weighted modality by diffusing only the high frequency signal in the imaging space.

2. Methods

We used MRA and T1w images acquired from 18 healthy subjects in the TubeTK dataset (Aylward and Bullitt, 2002) (https://public.kitware.com/Wiki/TubeTK/Data) for model training and testing (acquisition parameters – MRA: 128 axial slices, 448 x 448 matrix size, 0.5 x 0.5 x 0.8 mm3 voxel size; T1w: 170 axial slices, 256 x 256 matrix size, 1 x 1 x 1 mm3 voxel size). Each T1w image was spatially aligned and resampled to the corresponding MRA image in native space using FreeSurfer (Fischl, 2012), so that the two images are properly registered.
Both MRA and T1w modalities were normalized using min-max normalization, with maximum intensity thresholds of 1000 for MRA and 700 for T1w. Of the 2124 total paired axial slices available from the MRA and T1w images of all subjects, 1700 randomly selected slices were used for training and validation and 424 slices were used for testing.

For the same subject, T1w MRI and MRA share similar low frequency signals representative of gross anatomical features, such as brain shape and volume (Figure 1A). Most of the signal differences between T1w and MRA can be explained by the high frequency signal (Figure 1A). Inspired by DDPM (Ho et al., 2020) and the latent conditional diffusion model (Rombach et al., 2022), we propose to tackle the MRA synthesis problem by diffusing the high frequency signal through a diffusion process conditioned on the high frequency T1w signal (Figure 1B). We designed the diffusion process with 100 steps, with the variance of the high frequency Gaussian noise ranging between 0.001 and 0.2. The score-matching model is implemented following the original DDPM (Ho et al., 2020).

Figure 1: Low frequency and high frequency signal reconstructed modalities and the proposed diffusion process.

3. Results and discussion

Qualitative assessment of the synthetic MRA is shown in Figure 2. By limiting the diffusion process to the high frequency signal space and conditioning the score-matching estimation on the high frequency signal of the T1w, the proposed method synthesizes MRA from the corresponding T1w modality with only a few denoising diffusion steps and with high synthesis quality.

In this work, we aim to synthesize the MRA modality using the high frequency structural T1w MRI as the condition. The recent and successful DDPM generative model has the advantage of stable training compared to GAN models, and guaranteed convergence with small-variance noise at each diffusion step. As the results in Figure 2 show, by focusing the diffusion process on the high frequency signal, the diffusion model can generate MRA with accurate vessel distribution and morphology for a given T1w modality. Thorough model evaluation and comparison will be our next step; we aim to unlock the potential of generative model applications in large-scale neuroimaging datasets and to have neuroimaging research on neurological disease benefit from this application.

Figure 2: Qualitative assessment of MRA synthesis in axial view.

References

Stephen R Aylward and Elizabeth Bullitt. Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction. IEEE Transactions on Medical Imaging, 21(2):61–75, 2002.

Bruce Fischl. FreeSurfer. NeuroImage, 62(2):774–781, 2012.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.

Sahin Olut, Yusuf H Sahin, Ugur Demir, and Gozde Unal. Generative adversarial training for MRA image synthesis using multi-contrast MRI. In PRedictive Intelligence in MEdicine: First International Workshop, PRIME 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings 1, pages 147–154. Springer, 2018.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
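A minimal sketch of one possible reading of the forward diffusion described above, in which only the high-frequency component of the MRA is noised. A Gaussian blur is assumed as the low/high-frequency split, and the 100-step schedule spans the stated variance range 0.001–0.2; the paper does not pin down either detail, so both are assumptions.

```python
import torch
import torch.nn.functional as F

def lowpass(x, k=9, sigma=2.0):
    """Gaussian blur as an assumed low-frequency extractor for (B,1,H,W) slices."""
    ax = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    g2 = torch.outer(g, g)
    g2 = (g2 / g2.sum()).view(1, 1, k, k).to(x)
    return F.conv2d(x, g2, padding=k // 2)

T = 100                                  # diffusion steps, as in the paper
betas = torch.linspace(0.001, 0.2, T)    # HF noise variance range from the paper
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(mra, t, noise=None):
    """Forward process: keep the low-frequency signal, diffuse only the HF part."""
    lo = lowpass(mra)
    hi = mra - lo
    noise = torch.randn_like(hi) if noise is None else noise
    a = alphas_bar[t].view(-1, 1, 1, 1)
    return lo + a.sqrt() * hi + (1 - a).sqrt() * noise, noise

# The score network would then predict this noise from the noisy slice, the
# step t, and the conditioning input highpass(t1w) = t1w - lowpass(t1w).
```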
BNL83dNAiE | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023 submission

Radiomics using disentangled latent features from deep representation learning in soft-tissue sarcoma

Timothy Sum Hon Mun (1, timothy.sumhonmun@icr.ac.uk), Amani Arthur (1), Imogen Thrussell (1), Jessica Winfield (1,2), Dow-Mu Koh (1,2), Paul Huang (1), Christina Messiou (1,2), Simon J Doran (1), Matthew D Blackledge (1)
(1) Institute of Cancer Research, London, United Kingdom
(2) Royal Marsden NHS Foundation Trust, London, United Kingdom

Abstract

Detecting the response of soft-tissue sarcomas (STS) to radiotherapy is difficult due to the intratumoral heterogeneity of the disease. STS is a group of diseases with more than 70 recognized subtypes, each with distinctive histological and clinico-pathological characteristics. Apparent diffusion coefficient (ADC) mapping provides a quantitative measure of the magnitude of water diffusion in biological tissues, which can provide insight into tissue microstructure. We developed an unsupervised deep representation learning pipeline that learns disentangled and interpretable radiomic features from the ADC maps of patients, and assessed the learnt latent features for baseline test-retest repeatability as well as for outcome prediction in a pilot cohort.

Keywords: MRI, Diffusion-Weighted Imaging, Radiomics, Deep Learning, Representation Learning

1. Introduction

Soft-tissue sarcomas (STS) demonstrate intratumoral heterogeneity, making it difficult to successfully monitor response to treatment using conventional size-based criteria. Radiomics (Gillies et al., 2016) can offer opportunities to find novel biomarkers of treatment response by quantifying the level of intratumoral heterogeneity within tumours via the measurement of "hand-crafted" features that aim to represent tumour image statistics, shape, and texture.

A potential disadvantage of these features is that they are not necessarily data-driven and thus may miss characteristics within the image that could be important for demonstrating tumour response. Deep learning (Sum Hon Mun et al., 2022) has also been shown to be able to extract interesting features, but such features can be hard to interpret, especially if they are extracted directly from the layers of convolutional neural networks (CNNs).

In this work, we explore the use of generative models known as variational autoencoders (VAEs) (Kingma and Welling, 2013), which learn a mapping between the original image and a latent feature representation that can accurately reconstruct the image. We apply this approach to maps of apparent diffusion coefficient (ADC) derived from diffusion-weighted MR imaging of a pilot cohort of patients with retroperitoneal soft-tissue sarcoma (STS). To determine the potential sensitivity of the derived latent features to post-therapeutic change, we investigate their test-retest repeatability. We subsequently evaluate their predictive potential in a multivariate regression model of recurrence-free survival when combined with patient demographic features.
2. Data and Methods

Dataset description: Baseline and repeat-baseline scans for 22 patients were acquired using axial diffusion-weighted imaging (DWI) (b = 50, 600, 900 s/mm2); ADC maps were extracted from these images using a least-squares monoexponential fit. A single radiologist outlined tumour regions of interest (ROIs) on T2-weighted images, which were subsequently transferred to the ADC maps. Augmentation techniques including rotation, translation, scaling, and flipping were performed independently for each slice to generate 2313 images in total from the baseline scans, split into 2082 training and 231 validation images. Input data consisted of two channels: (i) the complete ADC map slice, and (ii) the ADC map slice masked by the tumour ROI. The second channel allowed the network to focus on the features that represent the tumour region, whilst the first provided information about the tissue surrounding the tumour. Both channels were resized to 64 x 64 using bilinear interpolation.

Model: Our VAE model architecture consisted of a 3-layer 2D convolutional encoder/decoder with a kernel size of three and the following channels: [16, 32, 64] (encoder) → 7 (latent features) → [64, 32, 16, 2] (decoder). We used the beta-VAE (Higgins et al., 2017) variant to tune the emphasis on the Kullback-Leibler divergence loss of the features ($\beta = 0.5$), with the following parameters: learning rate = 0.0001, 1000 epochs, Adam optimizer. After training, we evaluate the encoded features derived from the middle tumour slice of each patient (containing the largest tumour cross-section). As our goal is to derive a set of useful features, we train the model on the first baseline scans of all patients to capture as much information as possible given our small cohort.

Feature analysis: Hierarchical agglomerative clustering on the pairwise Pearson correlation (r) between all features extracted from the first baseline scan identified linearly independent feature subgroups (features were grouped as linearly dependent where $r^2 > 0.5$). The intraclass correlation coefficient (ICC) was used to assess the repeatability of features (ICC = 1 indicating perfect repeatability, ICC < 0.5 indicating poor repeatability). Clinical features were combined with the VAE features after normalization, and time to tumour recurrence was modelled using multivariate Cox proportional hazards models with an elastic-net penalty (L1 ratio = 0.9, alpha = 0.17). The model was trained using 2-fold validation.

3. Results

Feature analysis: Bland-Altman (BA) plots in Figure 1a demonstrate excellent test-retest repeatability for all features (ICC = 0.86 – 0.98). These plots also demonstrate no systematic bias or outliers in the features generated from the second baseline measurements, potentially providing further evidence that the VAE is not overtraining. Furthermore, the correlation heatmap for baseline features in Figure 1b indicates no clear correlations between any of the 7 features, suggesting that the VAE successfully enforces this important characteristic in the derived features.

Recurrence-free survival analysis: A feature importance plot is provided in Figure 1c, summarising the coefficients of the features in descending order; features 2 and 6 from the VAE rank highly alongside other clinical factors. To further demonstrate the predictive power of feature 6, we show survival curves for four patient groups (Figure 1d), obtained by discretizing feature 6 into four groups of equal patient size.
It is evident that the final bin (high values of feature 6) is associated with a higher risk of recurrence.

Figure 1: (a) Bland-Altman plot, (b) correlation heatmap, (c) feature importance, (d) Kaplan-Meier curve.

References

Robert J Gillies, Paul E Kinahan, and Hedvig Hricak. Radiomics: images are more than pictures, they are data. Radiology, 278(2):563–577, 2016.

Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv, 1312.6114, 2013.

Timothy Sum Hon Mun, Imogen Thrussell, Jessica Winfield, Amani Arthur, David J Collins, Dow-Mu Koh, Paul Huang, Simon J Doran, Christian Messiou, and Matthew D Blackledge. Test-retest repeatability of data-driven radiomic features derived from a deep-learning model: Diffusion-weighted MRI of soft-tissue sarcoma. ISMRM, 2022.

Acknowledgments

This work was supported by the International Accelerator Award funded by Cancer Research UK [C56167/A29363], Associazione Italiana per la Ricerca sul Cancro [AIRC - 24297] and Fundación Científica – Asociación Española Contra el Cáncer [Foundation AECC - GEACC19007MA]. We acknowledge Cancer Research UK and Engineering and Physical Sciences Research Council support to the Cancer Imaging Centre at the Institute of Cancer Research and Royal Marsden Hospital, in association with Medical Research Council and Department of Health grants C1060/A10334 and C1060/A16464, and National Health Service funding to the National Institute for Health Research Biomedical Research Centre, the Clinical Research Facility in Imaging and the Cancer Research Network.

This report is independent research funded by the National Institute for Health Research. The views expressed in this publication are those of the author(s) and not necessarily those of the National Health Service, the National Institute for Health Research or the Department of Health. We also acknowledge the support of the Alan Turing Institute's Enrichment Scheme.
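A minimal PyTorch sketch of the beta-VAE configuration from Section 2 above (two-channel 64 x 64 input, channels [16, 32, 64] → 7 latent features → [64, 32, 16, 2], β = 0.5). The strides and the reconstruction term are assumptions, as the paper does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    """2-channel 64x64 ADC input -> 7 disentangled latent features."""
    def __init__(self, z_dim=7):
        super().__init__()
        self.enc = nn.Sequential(                       # channels [16, 32, 64]
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, z_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, z_dim)
        self.fc_dec = nn.Linear(z_dim, 64 * 8 * 8)
        self.dec = nn.Sequential(                       # channels [64, 32, 16, 2]
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.enc(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        xhat = self.dec(self.fc_dec(z).view(-1, 64, 8, 8))
        return xhat, mu, logvar

def beta_vae_loss(x, xhat, mu, logvar, beta=0.5):
    recon = F.mse_loss(xhat, x, reduction="sum")               # assumed recon term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kld
```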
WvReNPBoB9F | Medical Imaging with Deep Learning 2023 Short Paper – MIDL 2023 submission

Temporal Monte Carlo Dropout for Robust Uncertainty Quantification: Application to Point-of-Care Ultrasound-guided Nerve Blocks

Nishanth Thumbavanam Arun (1), Leonard Weiss (2), Andrew Schoenling (2), Marek Radomski (2), Frank Guyette (2), Napoleon Roux (3), Brittany Daley (3), Michael J Morris (3), Howie Choset (1), John Galeotti (1)
(1) Carnegie Mellon University, Pittsburgh, PA
(2) University of Pittsburgh Medical Center, Pittsburgh, PA
(3) Brooke Army Medical Center, Fort Sam Houston, TX

Abstract

Accurate needle placement during nerve block procedures is essential for safe and effective anesthesia and pain management. However, tracking needles and nerves in an austere setting using point-of-care ultrasound (POCUS) can be challenging due to the complexity of the surrounding anatomy, the lack of real-time feedback, and limited image quality. In this paper, we propose a method for segmenting these structures and estimating pixelwise uncertainty using a novel approach: temporal Monte Carlo dropout. We demonstrate the effectiveness of our approach in POCUS with a stable probe, where it provides robust uncertainty estimates in challenging imaging scenarios while simultaneously tracking the needle accurately. Our method obtains an 84% similarity score relative to uncertainty estimates obtained from standard Monte Carlo dropout, with an 8x decrease in computational complexity and no loss in segmentation performance. Importantly, it can be easily integrated into existing POCUS workflows on portable devices and has the potential to benefit medical practitioners and patients alike.

Keywords: POCUS AI, Bayesian Inference, Anatomic Segmentation, Needle, Nerve Block

1. Introduction

Point-of-care ultrasound (POCUS) has emerged as a powerful tool for rapid, accurate, and portable diagnosis and treatment, particularly in scenarios where traditional imaging modalities are unavailable or impractical. POCUS can improve patient outcomes, reduce costs, and increase efficiency in the healthcare system (Kuo et al., 2020; Magalhães et al., 2020; Smallwood and Dachsel, 2018). Its provision for real-time visualization makes it an ideal tool for guiding nerve block procedures. Historically, nerve block needle placement has been guided by methods that are invasive, imprecise, and costly due to the need for specialized equipment and personnel (Hadzic et al., 2003; Choquet et al., 2012).

Ensuring the safety and efficacy of deep learning models in such ultrasound-guided procedures demands precise uncertainty estimation. Traditional Bayesian uncertainty estimation methods, such as Markov chain Monte Carlo sampling (Van Ravenzwaaij et al., 2018), ensemble methods (Vrugt and Robinson, 2007), and Monte Carlo dropout (Camarasa et al., 2020), are computationally expensive, limiting real-time feedback. Our temporal Monte Carlo dropout method overcomes this by sampling once per frame with varying dropout configurations across frames, maintaining reliable uncertainty estimates while reducing complexity and without compromising segmentation performance.
2. Methods

The work was performed under IRB approvals from all investigators' home institutions. The peripheral nerve block dataset, comprising recorded adductor canal (AC) block procedures, was collected at the Brooke Army Medical Center (BAMC) using a Butterfly ultrasound probe and anonymized for our access and use. We examined 15 patient recordings in which needle placement was within 0.5 cm of the nerve and 16 patient recordings in which needle placement was beyond 1 cm from the nerve. We include six additional anonymized femoral nerve block clips (labelled Negative) from the University of Pittsburgh Medical Center (UPMC) Department of Emergency Medicine database.

We employ a Bayesian 3D U-Net encoder-decoder architecture for segmentation, using temporal volumes as inputs (Kendall and Gal, 2017). The model produces two outputs, the predictive mean $\hat{\mu}$ and variance $\hat{\sigma}^2$. We train the model with a stochastic cross-entropy loss accounting for aleatoric uncertainty (Kendall and Gal, 2017). The predicted output for a frame $x_i$ is given by $[\hat{\mu}, \hat{\sigma}^2_{aleatoric}] = f_\theta(x_i)$, where $f_\theta$ denotes the model parameterized by the weights $\theta$. To obtain the epistemic uncertainty maps, we use our novel temporal Monte Carlo dropout, which performs inference on each frame separately. The uncertainty estimate for each pixel is obtained as the variance of $\hat{\mu}$ across $M$ frames, each run with a different dropout configuration: $\sigma^2 = \frac{1}{M}\sum_{m=1}^{M}(\hat{\mu}_m - \bar{\mu})^2$. The model was trained with the Adam optimizer (Kingma and Ba, 2014), a learning rate of 1e-4, and early stopping (Yao et al., 2007).

3. Results and Discussion

We compare the uncertainty maps generated using temporal MC dropout across $M$ frames with nontemporal MC dropout using $N$ samples per frame, where nontemporal MC dropout refers to the standard way of computing MC dropout, $\sigma^2(x_i) = \frac{1}{N}\sum_{n=1}^{N}(\hat{\mu}_n - \bar{\mu})^2$, with $N$ MC samples per frame.

Figure 1: (a) Example input frames (top) with their corresponding segmentations (bottom; red - needle, green - nerve, blue - vessel) and (b) needle (top) and nerve (bottom) uncertainty maps under different MC configurations.

We use the Structural SIMilarity index (SSIM) (Hore and Ziou, 2010) to compare the uncertainty maps, as SSIM has been shown to capture structural information accurately and to correlate well with human perception (Abdar et al., 2021). We show several visualizations of the segmentation and uncertainty maps under different configurations in Figure 1 and the SSIM comparisons in Figure 2. Our results suggest a strong correlation between nontemporal Monte Carlo (MC) sampling with N = 8 and temporal MC sampling with M = 8. This finding indicates that our approach is effective in generating structurally similar uncertainty maps while reducing the computational burden of running the model multiple times on a given image. We validate that sampling once per frame maintains segmentation performance by comparing our method with popular U-Net variants using needle tip error and needle/nerve detection as segmentation metrics, in Table 1.
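A minimal sketch of the temporal variance computation, shown per-frame for clarity (the paper feeds temporal volumes to a 3D U-Net); keeping only the dropout layers stochastic at inference is one common way to realize a different dropout configuration per frame. Names are illustrative.

```python
import torch
import torch.nn as nn

def enable_dropout(model: nn.Module) -> None:
    """Keep dropout stochastic at inference while the rest stays in eval mode."""
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()

@torch.no_grad()
def temporal_mc_dropout(model: nn.Module, frames: torch.Tensor):
    """frames: (M, C, H, W) consecutive frames from a stable probe.
    One stochastic forward pass per frame (instead of N passes per frame);
    the pixelwise variance of the predictions across the M frames serves as
    the epistemic uncertainty map."""
    model.eval()
    enable_dropout(model)
    # each call samples a fresh dropout mask, i.e. a new configuration per frame
    mus = torch.stack([model(f.unsqueeze(0)).softmax(1) for f in frames])
    mu_bar = mus.mean(0)                     # mean prediction
    sigma2 = ((mus - mu_bar) ** 2).mean(0)   # variance across frames
    return mu_bar.squeeze(0), sigma2.squeeze(0)
```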
Our method outperforms most alternatives and nearly matches the nontemporal variant with 8 MC samples per frame, while reducing the computational burden by a factor of 8.

Figure 2: Mean ± std SSIM scores over the test set comparing (a) needle and (b) nerve uncertainty maps generated with different values of N (nontemporal Monte Carlo dropout samples) and M (temporal Monte Carlo samples across frames) (NT - nontemporal, T - temporal).

Table 1: Segmentation performance
  Model                            Tip Error (cm)   Needle Detection   Nerve Detection
  Bayesian 3D UNet (ours)          0.23±0.18        100%               66%
  Bayesian 3D UNet (nontemporal)   0.22±0.17        100%               66%
  3D UNet                          0.29±0.18        99%                45%
  Bayesian 2D UNet                 0.32±0.26        88%                61%
  2D UNet                          0.31±0.21        97%                49%
  UNet+LSTM                        0.32±0.18        87%                50%

4. Conclusion

We presented a novel method to extract uncertainty maps with significant computational advantages over alternatives, and showed that it does not compromise output quality or model accuracy. Further work can extend this algorithm to settings with moving probes and to other imaging modalities.

Acknowledgments

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112190075.

References

Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76:243–297, 2021.

Robin Camarasa, Daniel Bos, Jeroen Hendrikse, Paul Nederkoorn, Eline Kooi, Aad van der Lugt, and Marleen de Bruijne. Quantitative comparison of Monte-Carlo dropout uncertainty measures for multi-class segmentation. In Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis: Second International Workshop, UNSURE 2020, and Third International Workshop, GRAIL 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, Proceedings 2, pages 32–41. Springer, 2020.

Olivier Choquet, Didier Morau, Philippe Biboulet, and Xavier Capdevila. Where should the tip of the needle be located in ultrasound-guided peripheral nerve blocks? Current Opinion in Anaesthesiology, 25:596–602, 2012.

Admir Hadzic, Jerry Vloka, Nihad Hadzic, Daniel M Thys, and Alan C Santos. Nerve stimulators used for peripheral nerve blocks vary in their electrical characteristics. The Journal of the American Society of Anesthesiologists, 98(4):969–974, 2003.

Alain Hore and Djemel Ziou. Image quality metrics: PSNR vs. SSIM. In 2010 20th International Conference on Pattern Recognition, pages 2366–2369. IEEE, 2010.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision?, 2017. URL https://arxiv.org/abs/1703.04977.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Frederick H. Kuo, Holger M. Baumann, Pablo Pérez D'Empaire, and Yi Deng. Role of point-of-care ultrasound in the early stages of trauma care. Current Anesthesiology Reports, 10:69–79, 2020.

Luís Magalhães, Sara Martins, and Ramon Nogué. The role of point-of-care ultrasound in the diagnosis and management of necrotizing soft tissue infections. The Ultrasound Journal, 12, 2020.

Nicholas Smallwood and Martin Dachsel. Point-of-care ultrasound (POCUS): unnecessary gadgetry or evidence-based medicine?
Clinical Medicine, 18(3):219–224, 2018.

Don van Ravenzwaaij, Pete Cassey, and Scott D Brown. A simple introduction to Markov chain Monte Carlo sampling. Psychonomic Bulletin & Review, 25(1):143–154, 2018.

Jasper A Vrugt and Bruce A Robinson. Treatment of uncertainty using ensemble methods: Comparison of sequential data assimilation and Bayesian model averaging. Water Resources Research, 43(1), 2007.

Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289–315, 2007.
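The SSIM comparison between temporal and nontemporal uncertainty maps reported in the entry above can be reproduced with scikit-image; a minimal sketch (array names illustrative):

```python
import numpy as np
from skimage.metrics import structural_similarity

def uncertainty_ssim(u_temporal: np.ndarray, u_nontemporal: np.ndarray) -> float:
    """SSIM between two pixelwise uncertainty maps, as used for the ~84%
    similarity score between temporal (M=8) and nontemporal (N=8) MC dropout."""
    lo = min(u_temporal.min(), u_nontemporal.min())
    hi = max(u_temporal.max(), u_nontemporal.max())
    return structural_similarity(u_temporal, u_nontemporal, data_range=hi - lo)

# usage on dummy maps
a = np.random.rand(256, 256).astype(np.float32)
b = a + 0.01 * np.random.rand(256, 256).astype(np.float32)
print(uncertainty_ssim(a, b))
```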
tfEylAl8vf | Medical Imaging with Deep Learning 2023

FFCL: Forward-Forward Contrastive Learning for Improved Medical Image Classification

Md. Atik Ahamed (atikahamed@uky.edu), Jin Chen (chen.jin@uky.edu), Abdullah-Al-Zubaer Imran (aimran@uky.edu)
University of Kentucky, Lexington, KY, USA

Abstract

Medical image classification is one of the most important tasks for computer-aided diagnosis. Deep learning models, particularly convolutional neural networks, have been successfully used for disease classification from medical images, facilitated by automated feature learning. However, the diverse imaging modalities and clinical pathologies make it challenging to construct generalized and robust classifiers. To improve model performance, we propose a novel pretraining approach, namely Forward-Forward Contrastive Learning (FFCL), which leverages the forward-forward algorithm in a contrastive learning framework, both locally and globally. Our experimental results on a chest X-ray dataset indicate that the proposed FFCL achieves superior performance (a 3.69% accuracy improvement over an ImageNet-pretrained ResNet-18) over existing pretraining models on the pneumonia classification task. Moreover, extensive ablation experiments support the particular local-then-global contrastive pretraining design of FFCL.

Keywords: CNN, Forward-Forward Algorithm, Back-propagation, Chest X-ray, Pneumonia

1. Introduction

The imperative for automated disease diagnosis is contingent upon the accurate classification of medical images. In this context, deep convolutional neural networks (CNNs) have proven remarkably effective at medical image classification, thereby significantly contributing to the advancement of automated diagnosis. Nevertheless, CNNs exhibit limitations in terms of generalizability and the ability to capture fine details within input images. To address these limitations, we propose a multistage pretraining method with back-propagation, which effectively improves the performance of medical image classification and enhances model generalizability. According to the forward-forward algorithm (FFA) of Hinton (2022), there is no convincing evidence that our brain stores gradients and learns via a back-propagation mechanism. Moreover, the majority of contemporary disease classification tasks are carried out by training state-of-the-art deep learning models with back-propagation. These models are typically trained from scratch on medical images or fine-tuned from ImageNet-pretrained models (Deng et al., 2009). However, ImageNet does not represent the characteristics of images in the medical imaging domain well, resulting in suboptimal model generalizability. In this project, we leverage supervised contrastive learning (Khosla et al., 2020) as a pretraining strategy, instead of directly utilizing back-propagation with weights pretrained on the ImageNet dataset. Existing studies perform contrastive learning using only the final output of the model, which does not capture fine image details. In this work, we propose a multistage pretraining strategy, called FFCL, applied before performing back-propagation. We pretrain our backbone model in a supervised contrastive representation learning manner, locally for each layer and globally for the target model, to capture fine details of the input image. Our pretraining strategy is inspired by the forward-forward algorithm (FFA); however, our solution does not require manual fine-tuning of any thresholds, which helps the model capture local information automatically and reduces manual effort. To the best of our knowledge, this is the first work leveraging contrastive learning in a forward-forward mechanism in medical imaging. Notably, the proposed FFCL can be extended further to train a model end-to-end without requiring any manual intervention in between.

Figure 1: The FFCL model and the multistage contrastive pretraining strategy. $L_{C_i}$ and $L_C$ represent the local and global contrastive losses, respectively.
$B_i$, $i \in \{1, \dots, n\}$, represents a block of the target model.

2. Methods

FFCL comprises two stages of pretraining before regular back-propagation is performed for the downstream classification task. Figure 1 shows the FFCL framework, which comprises two pretraining stages (local and global contrastive representation learning) and the final back-propagation-based image classification. In the first stage, we perform contrastive learning locally, based on a modified forward-forward algorithm (FFA) (Hinton, 2022). Unlike FFA, we leverage supervised contrastive learning for local updates at each layer of the target model, without requiring the tuning of any thresholds. In this stage, FFCL randomly takes two images $x_1, x_2 \in X_{train}$ sampled from the training set $X_{train}$ and produces embeddings $E^i_{x_1}, E^i_{x_2}$ from each block $B_i$ followed by a ReLU (Agarap, 2018) activation layer. We perform local updates using the cosine embedding loss $L_{C_i}$ for each block $B_i$. After performing local updates for each block, the first-stage pretrained model is used for global contrastive learning in the second stage. As in local contrastive learning, global contrastive learning takes two random images as input and maps them to the final embedding space ($E_{x_1}, E_{x_2}$). The same cosine embedding loss is used for global contrastive learning (see Eq. (1)). In the third stage, the latest pretrained model is leveraged to perform the actual downstream classification task with regular back-propagation. For both the baseline and FFCL, we use the binary cross-entropy loss for the downstream classification task. FFCL is elegant in the sense that all training stages are performed in a fully automated manner, without requiring any hyper-parameter tuning in between.

$L_c(E_{x_1}, E_{x_2}, C_{x_1}, C_{x_2}) = \begin{cases} 1 - \frac{E_{x_1} \cdot E_{x_2}}{\lVert E_{x_1} \rVert_2 \lVert E_{x_2} \rVert_2}, & \text{if } C_{x_1} = C_{x_2} \\ \max\!\left(0,\ \frac{E_{x_1} \cdot E_{x_2}}{\lVert E_{x_1} \rVert_2 \lVert E_{x_2} \rVert_2}\right), & \text{if } C_{x_1} \neq C_{x_2} \end{cases} \quad (1)$

where $(E_{x_1}, C_{x_1})$ and $(E_{x_2}, C_{x_2})$ denote the embedding and class label of the corresponding input images $x_1$ and $x_2$.

Table 1: Pneumonia classification performance with the ResNet-18 and ResNet-34 backbone networks. We compare FFCL against regular back-propagation (RBP), with random and ImageNet-pretrained weight initializations.
The ablation study is reported on FFCL using ResNet-18.

  Backbone    Approach   Contrastive    Initialization   Accuracy   F1      Precision   Recall   AUC
  ResNet-18   RBP        –              ImageNet         76.76      69.84   85.94       69.10    92.90
              RBP        –              random           73.08      63.14   84.95       64.10    89.15
              FFCL       Local→Global   ImageNet         72.28      61.60   84.64       63.03    84.21
              FFCL       Local→Global   random           80.45      75.69   87.70       74.02    93.33
              FFCL       Global→Local   random           74.04      65.38   83.52       65.64    91.23
              FFCL       Global only    ImageNet         69.71      56.38   83.68       59.62    87.88
              FFCL       Local only     ImageNet         74.36      65.52   85.45       65.81    93.09
              FFCL       Global only    random           76.92      70.22   85.54       69.40    92.25
              FFCL       Local only     random           71.63      60.53   83.58       62.26    89.00
  ResNet-34   RBP        –              ImageNet         78.53      72.71   86.77       71.45    94.09
              FFCL       Local→Global   random           78.85      73.32   86.51       71.97    94.43

3. Experiments and Results

We perform binary classification (pneumonia vs. normal) from chest X-ray images, employing the ResNet-18 and ResNet-34 backbones (He et al., 2016). We evaluated the proposed method using the pediatric chest X-ray dataset of Kermany et al. (2018). The dataset contains a total of 5856 chest X-ray images, split into train (5232) and test (624) sets. For our experiments, we further separated 262 images from the train set as a validation set, which was used for selecting the best models. All images were resized to 224 x 224 x 3 and 0–1 normalized (when training from scratch) or ImageNet normalized (when training from ImageNet weights) before being passed to the models. The models were trained with a cosine annealing scheduler (initial learning rate of 0.0001) using the Adam optimizer. We used a batch size of 10 and trained all models for 100 epochs on the downstream classification task.

Table 1 compares the classification performance of our proposed approach against the baselines. For ResNet-18 local only from scratch, one sample was always from the normal class during forward-forward pretraining. For simplicity, we refer to regular back-propagation as RBP. Except for accuracy and ROC-AUC, all reported metrics are macro-averaged. Table 1 demonstrates superior performance over the baseline RBP, with an improvement of 3.69% in terms of accuracy. Moreover, our proposed FFCL method outperforms state-of-the-art pneumonia classification methods: AUC of 77% (Jaiswal et al., 2021), AUC of 78.4% (Seyyed-Kalantari et al., 2020) and AUC of 75% (Liu et al., 2019).

4. Conclusions

We proposed a novel multistage contrastive pretraining strategy (FFCL) to enhance disease detection with state-of-the-art deep learning models. Our proposed FFCL-based pretraining fully exploits deep learning models' learnability by performing local and global updates. Our extensive experimentation, along with an ablation study, confirmed the superiority of FFCL over regular back-propagation training for pneumonia classification. Our ongoing efforts include evaluating on larger-scale datasets of varying diseases as well as additional medical image analysis tasks.

References

Abien Fred Agarap. Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

Geoffrey Hinton.
The forward-forward algorithm: Some preliminary investigations. arXiv preprint arXiv:2212.13345, 2022.

Ajay Jaiswal, Tianhao Li, Cyprian Zander, Yan Han, Justin F Rousseau, Yifan Peng, and Ying Ding. SCALP - supervised contrastive learning for cardiopulmonary disease classification and localization in chest X-rays using patient metadata. In 2021 IEEE International Conference on Data Mining (ICDM), pages 1132–1137. IEEE, 2021.

Daniel Kermany, Kang Zhang, and Michael Goldbaum. Labeled optical coherence tomography (OCT) and chest X-ray images for classification (2018). Mendeley Data, v2, https://doi.org/10.17632/rscbjbr9sj, 2018.

Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. Advances in Neural Information Processing Systems, 33:18661–18673, 2020.

Jingyu Liu, Gangming Zhao, Yu Fei, Ming Zhang, Yizhou Wang, and Yizhou Yu. Align, attend and locate: Chest X-ray diagnosis via contrast induced attention network with limited supervision. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10632–10641, 2019.

Laleh Seyyed-Kalantari, Guanxiong Liu, Matthew McDermott, Irene Y Chen, and Marzyeh Ghassemi. CheXclusion: Fairness gaps in deep chest X-ray classifiers. In BIOCOMPUTING 2021: Proceedings of the Pacific Symposium, pages 232–243. World Scientific, 2020.
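Eq. (1) of the entry above coincides with the standard cosine embedding loss with zero margin; a minimal PyTorch sketch, including the equivalent built-in for cross-checking (names illustrative):

```python
import torch
import torch.nn.functional as F

def ffcl_contrastive_loss(e1, e2, c1, c2):
    """Eq. (1): pull embeddings of same-class pairs together (cosine -> 1),
    push different-class pairs apart (cosine clipped at 0)."""
    cos = F.cosine_similarity(e1, e2, dim=1)
    same = (c1 == c2).float()
    return (same * (1 - cos) + (1 - same) * cos.clamp(min=0)).mean()

# equivalent built-in: target is +1 for same-class pairs, -1 otherwise
loss_fn = torch.nn.CosineEmbeddingLoss(margin=0.0)
e1, e2 = torch.randn(4, 128), torch.randn(4, 128)
c1, c2 = torch.randint(0, 2, (4,)), torch.randint(0, 2, (4,))
target = torch.where(c1 == c2, 1.0, -1.0)
assert torch.allclose(loss_fn(e1, e2, target),
                      ffcl_contrastive_loss(e1, e2, c1, c2))
```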
bFb3V8ALx4W | Medical Imaging with Deep Learning 2023

Visualizing chest X-ray dataset biases using GANs

Hao Liang (hl106@rice.edu), Kangqi Ni (kn22@rice.edu), Guha Balakrishnan (guha@rice.edu)
Department of Electrical and Computer Engineering, Rice University, USA

Abstract

Recent work demonstrates that images from various chest X-ray datasets contain visual features that are strongly correlated with protected demographic attributes like race and gender. This finding raises issues of fairness, since some of these factors may be used by downstream algorithms for clinical predictions. In this work, we propose a framework, using generative adversarial networks (GANs), to visualize which features differ most between X-rays belonging to two demographic subgroups.

Keywords: Chest X-rays, fairness, bias, explainability, generative adversarial networks (GANs)

1. Introduction

Recent studies have demonstrated that patient bio-information like age, race, and gender is predictable from chest X-ray (CXR) images alone using deep learning models (Gichoya et al., 2022; Karargyris et al., 2019; Duffy et al., 2022). For example, in the "Reading Race" study, deep classifiers trained to predict race achieve 0.99 AUROC on several CXR datasets (Gichoya et al., 2022). This finding raises the question: "What visual cues discriminate different races?" Answering such a question can help mitigate potentially biased behavior of downstream algorithms that make decisions using this data. In this work, we propose a framework to visually explain the principal differences between demographic subgroups in a medical imaging dataset. We first train an unconditional generative adversarial network (GAN) (Goodfellow et al., 2020; Liang et al., 2020; Lin et al., 2022) on the given image dataset. Next, we project the images onto the (trained) GAN's latent space and compute a direction in the latent space that differentiates a pair of classes (e.g., "Black" vs. "White" race groups). We traverse the latent space along that direction to produce image sequences that depict the main morphological and appearance changes when moving from one class to the other.

There are related works that focus on visualizing subgroup differences associated with clinical attributes. One such study uses autoencoders (Cohen et al., 2021), which often produce blurry samples that do not clearly capture structural information. Others train conditional versions of GANs (Singla et al., 2023; Dravid et al., 2022), an expensive process since the GAN must be trained from scratch for each attribute of interest. In contrast to all these works, we demonstrate that deep generative models may be a useful tool for the medical imaging community to understand the biases within a medical imaging dataset.

2. Method

Our method consists of several components, visualized in Figure 1 and described below.

Generator training: We train an unconditional StyleGAN2 generator (Karras et al., 2020a) $G(\cdot): \mathbb{R}^d \to \mathbb{R}^{H \times W \times 1}$, following the default training procedure introduced in that paper.

Figure 1: Framework of our proposed method. (a) We train a GAN on an image dataset, and a binary classifier on the images and labels for a demographic prediction task (e.g., White vs. Black race). (b) We project a subset of images onto the trained GAN's latent space. To ensure the projected images are reasonably reconstructed, we only keep projected images whose labels (predicted by the attribute classifier trained in (a)) agree with their original labels.
We also fit an SVM hyperplane to separate the two classes in the latent space. Finally, we visualize the differences between the classes by starting at a latent code corresponding to a random image and traversing along the normal direction of the SVM hyperplane, generating a sequence of images that shows the transformation.

Here, $d$ is the dimension of the "latent space" of the generator, and $H$ and $W$ are the height and width of the generated CXR. In our experiments, we trained $G(\cdot)$ on CheXpert (Irvin et al., 2019), a large public dataset containing 224,316 CXRs. We only used frontal views, yielding 164,548 CXRs. The training procedure takes roughly 24 hours on two Nvidia A100 GPUs.

Attribute classifier training: We train a separate deep attribute classifier $C(\cdot): \mathbb{R}^{H \times W \times 1} \to \mathbb{R}$ for each per-image binary attribute provided in the dataset. For multi-class labels such as race, we train a separate binary classifier for each pair of races.

Image projection/SVM training: Next, we follow the process introduced in (Karras et al., 2020b) to project a subset of CXR images $\{X_i\}_{i=1}^N$ onto $G$'s latent space, yielding latent codes $\{z_i\}_{i=1}^N$. We only retain those projected images whose labels (predicted by $C$) are the same as the original labels $\{L_i\}_{i=1}^N$, i.e., $C(G(z_i)) = L_i$. We then train a linear SVM to predict $L_i$ from $z_i$.

Image sequence generation: The normal vector $v$ of the trained SVM's hyperplane identifies the direction that best differentiates the two classes. We use this fact to generate image sequences depicting the principal perceptual changes needed to convert a CXR belonging to one demographic class into the other. In particular, we select the latent vector corresponding to a random dataset CXR and move towards the opposite class in latent space along the direction of $v$. We concatenate the images generated by intermediate latent codes along this traversal to produce a sequence.

Figure 2: Sample visualization results. The left column corresponds to the projected initial image and the last three columns show images generated at different traversal distances in the latent space. The red text indicates the output probabilities predicted by the attribute classifier for each class. For example, the top left [0.98, 0.01] indicates the CXR has a 98% probability of being White and a 1% probability of being Black. We also use red boxes to highlight the areas that vary the most visually. For White/Black, the shoulder bone and right lung structures change shape, and the lungs become more opaque. For Asian/White, the entire chest shape changes and grows larger. These visualizations also explain why the Reading Race study (Gichoya et al., 2022) did not find race prediction to change significantly when blocking local regions. The proposed method applied to Cardiomegaly enlarges the heart, in agreement with the known effect of that disease.

3. Results and discussion

We demonstrate our framework on CheXpert with race as the target attribute. We also validate our approach on the clinical attribute Cardiomegaly, which induces a known physiological change (an enlarged heart). Sample results are shown and explained in Figure 2.

Conclusion: Our results show that an unconditional generative adversarial network can be a useful tool for visualizing differences between demographic groups of a CXR dataset. Our framework is fast and flexible, and can be applied to any binary attribute labels in the dataset.
Future work includes analyzing generated sequences to thoroughly investigate demographic differences, and comparing results across different generative models.

References

Joseph Paul Cohen, Rupert Brooks, Sovann En, Evan Zucker, Anuj Pareek, Matthew P Lungren, and Akshay Chaudhari. Gifsplanation via latent shift: a simple autoencoder approach to counterfactual generation for chest X-rays. In Medical Imaging with Deep Learning, pages 74–104. PMLR, 2021.

Amil Dravid, Florian Schiffers, Boqing Gong, and Aggelos K Katsaggelos. medXGAN: Visual explanations for medical classifiers through a generative latent space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2936–2945, 2022.

Grant Duffy, Shoa L Clarke, Matthew Christensen, Bryan He, Neal Yuan, Susan Cheng, and David Ouyang. Confounders mediate AI prediction of demographics in medical imaging. npj Digital Medicine, 5(1):188, 2022.

Judy Wawira Gichoya, Imon Banerjee, Ananth Reddy Bhimireddy, John L Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, et al. AI recognition of patient race in medical imaging: a modelling study. The Lancet Digital Health, 4(6):e406–e414, 2022.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.

Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Marklund, Behzad Haghgoo, Robyn Ball, Katie Shpanskaya, et al. CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 590–597, 2019.

Alexandros Karargyris, Satyananda Kashyap, Joy T Wu, Arjun Sharma, Mehdi Moradi, and Tanveer Syeda-Mahmood. Age prediction using a large chest X-ray dataset. In Medical Imaging 2019: Computer-Aided Diagnosis, volume 10950, pages 468–476. SPIE, 2019.

Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. Advances in Neural Information Processing Systems, 33:12104–12114, 2020a.

Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110–8119, 2020b.

Hao Liang, Lulan Yu, Guikang Xu, Bhiksha Raj, and Rita Singh. Controlled autoencoders to generate faces from voices. In Advances in Visual Computing: 15th International Symposium, ISVC 2020, San Diego, CA, USA, October 5–7, 2020, Proceedings, Part I, pages 476–487. Springer, 2020.

Zinan Lin, Hao Liang, Giulia Fanti, and Vyas Sekar. RareGAN: Generating samples for rare classes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 7506–7515, 2022.

Sumedha Singla, Motahhare Eslami, Brian Pollack, Stephen Wallace, and Kayhan Batmanghelich. Explaining the black-box smoothly—a counterfactual approach. Medical Image Analysis, 84:102721, 2023.
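A minimal sketch of the image-sequence generation step described in Section 2 of the entry above, assuming a pretrained generator G and already-projected latent codes are available; the hyperplane comes from scikit-learn's LinearSVC, and the step count and traversal distance are illustrative.

```python
import numpy as np
import torch
from sklearn.svm import LinearSVC

def traversal_sequence(G, z_codes, labels, z_start, steps=4, max_dist=3.0):
    """Fit a linear SVM in the GAN latent space and walk a starting latent code
    along the (unit) hyperplane normal to morph one class toward the other."""
    svm = LinearSVC().fit(z_codes, labels)
    v = svm.coef_[0] / np.linalg.norm(svm.coef_[0])
    if svm.decision_function(z_start[None])[0] > 0:  # step toward opposite class
        v = -v
    frames = []
    with torch.no_grad():
        for t in np.linspace(0.0, max_dist, steps):
            z = torch.as_tensor(z_start + t * v, dtype=torch.float32).unsqueeze(0)
            frames.append(G(z))                      # one CXR per traversal step
    return torch.cat(frames)                         # (steps, 1, H, W)
```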
OaG7pYqbs7 | Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

Assessing Deep Learning Methodologies for Automatic Segmentation of the Velopharyngeal Mechanism

Jiebei Liu (1, mcu2xn@virginia.edu), Don Brown (1,2, deb@virginia.edu), Stephen Baek (2, baek@virginia.edu), Kazlin Mason (3, kazlin.mason@virginia.edu)
(1) Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, US
(2) Data Science Institute, University of Virginia, Charlottesville, VA, US
(3) Department of Human Services, University of Virginia, Charlottesville, VA, US

Editors: Under Review for MIDL 2023

Abstract

Velopharyngeal dysfunction (VPD) results in speech, resonance, and swallowing difficulties due to inadequate separation of the oral and nasal cavities by the velopharyngeal musculature. Diagnosing and treating VPD often involves multidisciplinary evaluation and specialized imaging techniques like videofluoroscopy or nasendoscopy. However, recent MRI applications have enabled non-invasive visualization of the vocal tract and velopharyngeal mechanism, providing insight into its shape, size, movement, and position. Obtaining this data currently requires manual techniques, and analyses of 3D MRI data are time-consuming and not yet clinically feasible. This article explores the feasibility of 3D medical image deep learning methods for segmenting the soft palate, levator muscle, pharyngeal wall, and adenoids in the velopharyngeal region, overcoming current limitations and contributing to the future clinical translation of this assessment methodology.

Keywords: Semantic Segmentation, Velopharyngeal Dysfunction, Velopharyngeal Imaging

1. Introduction

The velopharyngeal mechanism is a crucial and complex structure responsible for separating the oral and nasal cavities during speech and swallowing. Its dysfunction (VPD) can be caused by various factors such as cleft palate and other craniofacial conditions resulting in insufficient palatal tissue, aberrant insertion of the levator veli palatini (LVP) muscle, or a congenitally deep nasopharynx, among others (Woo, 2012). Cleft palate or submucous cleft palate is one of the leading causes of VPD, affecting around 1.5 per 1000 live births (Allam et al., 2014). Even after surgical intervention to repair a cleft, VPD can still persist in approximately 20-30% of patients (Witt et al., 1998; Sullivan et al., 2011) and impair their speech and resonance function.

Measuring the size, shape, and position of the structures involved in achieving velopharyngeal closure is challenging due to the complexity of this region, particularly given the limitations of standard imaging methods such as nasendoscopy and videofluoroscopy. These specialized imaging techniques can be invasive, cause discomfort, and expose children to ionizing radiation. However, accurate assessment is essential for diagnosing and treating VPD. Therefore, non-invasive methods that accurately evaluate velopharyngeal function are needed in clinical practice. MRI is a versatile and non-invasive imaging technique increasingly used in speech assessment for patients with VPD (Mason and Perry, 2017; Mason, 2022) and has been suggested as a potential tool.
However, utilization of MRI data, particularly 3D data analysis, is time-consuming and not yet clinically feasible. Recently, deep learning-based methods have been developed to segment the entire vocal tract and articulators in MRI (Ruthven et al., 2021; Erattakulangara and Lingala, 2020). However, these methods mainly focus on 2D MRI and have not been trained to address the underlying velopharyngeal musculature, such as the LVP muscle, velopharyngeal structures, and the nasopharyngeal airway.

This pilot study aims to explore the feasibility of applying 3D medical image deep learning methods to segment key velopharyngeal structures and muscles. The proposed method may provide non-invasive anatomical image support to improve assessment of velopharyngeal anatomy, reducing the need for invasive assessment procedures and improving patient comfort, especially for children.

2. Dataset and Method

We conducted our pilot experiment using 50 T1-weighted whole-head 3D MRI scans of children with normal anatomy, collected from the University of Virginia Health System. Segmented annotations for six specific velopharyngeal structures were completed: adenoids, lateral pharyngeal wall (LPW), levator veli palatini (LVP), posterior pharyngeal wall (PPW), pterygoid raphe (PR), and soft palate. Figure 1(a) displays a 3D representation of the annotations. The models were trained on a 32 GB NVIDIA V100 GPU for 200 epochs, using DiceCELoss as the loss function. 3D U-Net (Çiçek et al., 2016), nnU-Net (Isensee et al., 2020), Swin UNETR (Hatamizadeh et al., 2022), and 3D UX-Net (Lee et al., 2023) models were assessed for accuracy on the automated segmentation task. All experiments were conducted using five-fold cross-validation with an 80:20 split.

3. Results

Table 1: Dice score comparison
  Model        Adenoids   LPW      LVP      PPW      PR       Soft Palate   Avg
  3D U-Net     0.6718     0.5815   0.3970   0.6181   0.3843   0.7071        0.5600
  nnU-Net      0.7589     0.6374   0.5152   0.7135   0.5025   0.8253        0.6588
  Swin UNETR   0.7901     0.6424   0.5592   0.7213   0.5205   0.8541        0.6813
  3D UX-Net    0.8218     0.6482   0.5638   0.7194   0.5261   0.8344        0.6856

Table 1 offers a Dice-score comparison of state-of-the-art (SOTA) transformer and convolutional neural network (ConvNet) models for volumetric medical image segmentation: 3D U-Net, nnU-Net, Swin UNETR, and 3D UX-Net. Figure 1(b) shows qualitative tissue segmentations, displaying, from top to bottom, the middle slice of the coronal, axial, and midsagittal planes. Each structure is delineated by a unique color for clarity: adenoids (dark blue), LPW (light blue), LVP (green), PPW (yellow), PR (pink), and soft palate (red). By implementing minor modifications and fine-tuning, these models yielded promising results when applied to velopharyngeal data. In particular, segmentation of the soft palate and adenoids exceeded a Dice score of 0.80, demonstrating their efficacy for this specific application.

Figure 1: (a) 3D annotation and (b) qualitative tissue segmentations.

4. Conclusions & Outlook

This study used SOTA transformer and ConvNet deep learning models to segment velopharyngeal tissues. Our findings demonstrate that 3D segmentation models have the potential to aid in the anatomical analysis of velopharyngeal anatomy. Using a limited dataset of only 50 T1-weighted MRIs, the existing models achieved promising results, particularly for automatic segmentation of the soft palate and adenoids.
With additional data sources, we expect the models to improve further and achieve high accuracy for automated segmentation of the velopharyngeal region.

Acknowledgments

This work was supported in part by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number KL2TR003016 (Mason). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

E Allam, L Windsor, and C Stone. Cleft lip and palate: etiology, epidemiology, preventive and intervention strategies. Anat Physiol, 4(3):1–6, 2014.

Subin Erattakulangara and Sajan Goud Lingala. Airway segmentation in speech MRI using the U-net architecture. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pages 1887–1890, 2020. doi: 10.1109/ISBI45749.2020.9098536.

Ali Hatamizadeh, Vishwesh Nath, Yucheng Tang, Dong Yang, Holger Roth, and Daguang Xu. Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images, 2022.

Fabian Isensee, Paul F. Jaeger, Peter M. Full, Philipp Vollmuth, and Klaus Maier-Hein. nnU-Net for brain tumor segmentation. arXiv, abs/2011.00848, 2020.

Ho Hin Lee, Shunxing Bao, Yuankai Huo, and Bennett A. Landman. 3D UX-Net: A large kernel volumetric ConvNet modernizing hierarchical transformer for medical image segmentation. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=wsZsjOSytRA.

Kazlin Mason and Jamie Perry. The use of magnetic resonance imaging (MRI) for the study of the velopharynx. Perspectives of the ASHA Special Interest Groups, 2:35, 2017. doi: 10.1044/persp2.SIG5.35.

Kazlin N. Mason. Magnetic resonance imaging for assessing velopharyngeal function: Current applications, barriers, and potential for future clinical translation in the United States. The Cleft Palate-Craniofacial Journal, 0(0):10556656221123916, 2022. doi: 10.1177/10556656221123916. URL https://doi.org/10.1177/10556656221123916. PMID: 36039513.

Matthieu Ruthven, Marc E Miquel, and Andrew P King. Deep-learning-based segmentation of the vocal tract and articulators in real-time magnetic resonance images of speech. Computer Methods and Programs in Biomedicine, 198:105814, 2021.

Stephen R Sullivan, Sivabalan Vasudavan, Eileen M Marrinan, and John B Mulliken. Submucous cleft palate and velopharyngeal insufficiency: comparison of speech outcomes using three operative techniques by one surgeon. The Cleft Palate-Craniofacial Journal, 48(5):561–570, 2011.

Peter D Witt, John C Wahlen, Jeffrey L Marsh, Lynn Marty Grames, and Thomas K Pilgram. The effect of surgeon experience on velopharyngeal functional outcome following palatoplasty: is there a learning curve? Plastic and Reconstructive Surgery, 102(5):1375–1384, 1998.

Albert S Woo. Velopharyngeal dysfunction. In Seminars in Plastic Surgery, volume 26, pages 170–177. Thieme Medical Publishers, 2012.

Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, and Olaf Ronneberger. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, 2016.
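The segmentation experiments in the entry above optimize a combined Dice + cross-entropy objective (DiceCELoss; MONAI's monai.losses.DiceCELoss is a likely production implementation). A self-contained sketch of the same objective for a 7-class problem (six structures plus background — an assumed label layout):

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-5):
    """Combined Dice + cross-entropy loss for multi-class 3D segmentation.
    logits: (B, K, D, H, W) raw scores; target: (B, D, H, W) integer labels."""
    num_classes = logits.shape[1]
    ce = F.cross_entropy(logits, target)
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)                       # reduce over batch + spatial dims
    inter = (probs * onehot).sum(dims)
    denom = probs.sum(dims) + onehot.sum(dims)
    dice = 1.0 - ((2 * inter + eps) / (denom + eps)).mean()
    return dice + ce

# usage on a dummy batch: 7 classes (6 structures + background)
logits = torch.randn(1, 7, 16, 64, 64)
target = torch.randint(0, 7, (1, 16, 64, 64))
print(dice_ce_loss(logits, target))
```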
l_Zxaj0PZE | Medical Imaging with Deep Learning 2023

Make nnUNets Small Again

Mattias P. Heinrich heinrich@imi.uni-luebeck.de
Institute of Medical Informatics, University of Luebeck, Germany
Jannis Hagenah jannis.hagenah@eng.ox.ac.uk
Department of Engineering Science, University of Oxford, UK

Abstract
Automatic high-quality segmentations have become ubiquitous in numerous downstream tasks of medical image analysis, e.g. shape-based pathology classification or semantically guided image registration. Public frameworks for 3D U-Nets provide numerous pre-trained models for nearly all anatomies in CT scans. Yet, this great generalisation comes at the cost of very heavy networks with millions of parameters and trillions of floating point operations for every single model in even larger ensembles. We present a novel combination of two orthogonal approaches to lower the computational (and environmental) burden of U-Nets, namely partial convolution and structural re-parameterization, which tackle these intertwined challenges while keeping real-world latency small.
Keywords: 3D semantic segmentation, model distillation, efficient convolutions

1. Introduction

When designing convolutional network architectures, the recent trend has moved away from the classic VGG style (Simonyan and Zisserman, 2014) that uses few plain 3×3 spatial convolutions with large channel sizes and carries a large parameter count. Instead, depth-separable or group convolutions are frequently seen (Sandler et al., 2018; Ma et al., 2018) in an effort to balance model size, multiply-add operations (MADs) and accuracy. In fact, the recent ConvNeXt architecture (Liu et al., 2022) demonstrated that replacing VGG-style or ResNet-like convolutional blocks can outperform vision transformers with a very low MADs count. It achieves this with a clever design of the convolutional block, using a large-kernel depth-separable convolution followed by an inverted bottleneck with 1×1 pointwise convolutions, together with micro-design improvements.

Unfortunately, as shown in (Chen et al., 2023; Vasu et al., 2022), the number of raw operations is poorly correlated with latency (runtime) due to the much lower efficiency of modern parallel processors, which are optimised for dense 3×3 convolutions with large channel sizes to reach impressive theoretical FLOPS (floating point operations per second)¹ of more than 10 trillion (10 TFLOPS). Our experimental validation confirms that, in an effort to lower the MADs, the GPU utilisation (and hence efficiency) also massively drops, leading to diminishing returns or, in the worst case, even longer runtimes for smaller models. A default configuration of the nnUNet (Isensee et al., 2021) typically requires >30 million parameters and >1 trillion MADs to segment a single 3D patch. Nevertheless, the training and inference algorithms are considered to be relatively efficient, due to the aforementioned high GPU utilisation of modern Nvidia GPUs.

1. Note that we purposefully use MADs and FLOPS to differentiate between the computational work load and the hardware GPU capabilities at 100% utilisation.

©2023 CC-BY 4.0, M.P. Heinrich & J. Hagenah.
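The poor correlation between raw operation counts and measured runtime is easy to probe empirically. Below is a minimal, self-contained PyTorch timing sketch (ours, not from the paper; the channel size and input shape are illustrative) comparing a dense 3×3×3 convolution against a depthwise variant with roughly 160× fewer multiply-adds.

```python
import torch
import torch.nn as nn

def gpu_latency_ms(module, x, warmup=10, iters=50):
    """Median CUDA latency of module(x) in milliseconds, timed with CUDA events."""
    module, x = module.cuda().eval(), x.cuda()
    times = []
    with torch.no_grad():
        for i in range(warmup + iters):
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            start.record()
            module(x)
            end.record()
            torch.cuda.synchronize()
            if i >= warmup:
                times.append(start.elapsed_time(end))
    return sorted(times)[len(times) // 2]

C = 160
x = torch.randn(1, C, 64, 64, 64)
dense = nn.Conv3d(C, C, 3, padding=1)                # ~27*C^2 multiply-adds per voxel
depthwise = nn.Conv3d(C, C, 3, padding=1, groups=C)  # ~27*C  multiply-adds per voxel
print("dense:    ", gpu_latency_ms(dense, x), "ms")
print("depthwise:", gpu_latency_ms(depthwise, x), "ms")
```

On typical hardware the depthwise convolution is nowhere near 160× faster, because GPU utilisation collapses for grouped kernels; this is precisely the MADs-versus-FLOPS gap the footnote above distinguishes.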
[Figure 1 depicts the block designs of ConvNeXt, nnUNet, nnUNet-half, ours (train) and ours fused (inference), annotated with parameter counts, multiply-add operations, measured runtimes and GPU efficiency for 5 layers with input size 2 x 320* x 64 x 64 x 64 (*or 160), alongside a bubble chart of runtime versus multiply-add operations.]

Figure 1: Concept of the proposed FasterFusion block and measured inference runtimes of various choices for (grouped) spatial and pointwise convolution operators. Our concept yields a high GPU utilisation of 54%, close to that of dense 3×3×3 convolutions, whereas ConvNeXt fails to convert a lower operation count into inference speed.

Contribution: In this work, we present a novel combination of two orthogonal approaches that tackle the computational burden and make nnUNets small again. Our model employs a new variant of T-shaped spatial convolutions that act only on a part of the channels individually, together with full-depth pointwise convolutions. Different to (Chen et al., 2023), we also incorporate a novel version of re-parameterisation, as popularised by RepVGG (Ding et al., 2021), that enables the fusion of the inverted bottleneck FasterFusion block. Combined, these contributions lead to 3-4× smaller model sizes and 2× faster inference times, while matching the accuracy and stable training convergence of full-sized models.
2. Methods

Building upon T-shaped spatial convolutions (Chen et al., 2023), which perform the spatial 3×3×3 kernels only on a part (in our work a quarter) of the channels and use pointwise operators for the remainder, we aim to find a good balance between reducing parameters, keeping a reasonable number of computations for training, and a method that yields the fastest speed at inference. Our approach uses an inverted bottleneck with an intermediate doubled channel size to limit peak memory when using the same number of input and output channels as the blocks within the nnUNet (Isensee et al., 2021). In contrast to (Chen et al., 2023), we place a BatchNorm and no non-linearity between the first and second convolution in our block. This enables us to apply re-parameterisation after training, completely fuse all three consecutive layers at inference, and reduce the parameters by 2.9×. Our method can be used as a drop-in replacement in any 2D or 3D convolutional network; yet, in this first proof-of-concept, we restrict ourselves to the popular 3D semantic segmentation architecture of the nnUNet.

Fig. 1 demonstrates the disproportionate efficiency of plain 3×3×3 spatial convolutions with large channel sizes in comparison to depth-separable, groupwise and pointwise convolutions (compare nnUNet with ConvNeXt). Details on the implementation of training and fusion, which only requires us to perform a number of matrix multiplications on the respective weights once after training, are found in our open source code: https://github.com/mattiaspaul/makennunetsmallagain. Intuitively, a larger number of trainable parameters and floating point operations will ease training, whereas the fusion of blocks (re-parameterisation) increases efficiency for inference and substantially reduces the size of the models to be stored.
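The exact layer configuration is given in the authors' repository; the sketch below is a hypothetical PyTorch rendering of the ideas described above (a partial 3×3×3 convolution acting on a quarter of the channels, an inverted pointwise bottleneck with a BatchNorm and no non-linearity between the two 1×1×1 convolutions, and a post-training fusion of the linear expand-BN-project chain into a single pointwise convolution). Module names and channel choices are illustrative assumptions, not the paper's verbatim implementation.

```python
import torch
import torch.nn as nn

class PartialConv3d(nn.Module):
    """T-shaped spatial conv: 3x3x3 on the first quarter of channels, identity
    on the rest (channels is assumed divisible by `fraction`)."""
    def __init__(self, channels, fraction=4):
        super().__init__()
        self.c_spatial = channels // fraction
        self.conv = nn.Conv3d(self.c_spatial, self.c_spatial, 3, padding=1, bias=False)

    def forward(self, x):
        a, b = x[:, :self.c_spatial], x[:, self.c_spatial:]
        return torch.cat([self.conv(a), b], dim=1)

class FasterFusionBlock(nn.Module):
    """Partial spatial conv -> 1x1x1 expand (2x channels) -> BatchNorm (no
    non-linearity) -> 1x1x1 project; LeakyReLU only at the end. The
    expand-BN-project chain is purely linear at inference and can be fused."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = PartialConv3d(channels)
        self.expand = nn.Conv3d(channels, 2 * channels, 1, bias=False)
        self.bn = nn.BatchNorm3d(2 * channels)
        self.project = nn.Conv3d(2 * channels, channels, 1, bias=False)
        self.act = nn.LeakyReLU(0.01, inplace=True)

    def forward(self, x):
        return self.act(self.project(self.bn(self.expand(self.spatial(x)))))

@torch.no_grad()
def fuse_pointwise(expand, bn, project):
    """Fold conv1x1 -> BN -> conv1x1 into a single conv1x1: inference-time BN is a
    per-channel affine map, so the composition is W2 diag(s) W1 x + W2 t."""
    w1 = expand.weight.flatten(1)                        # (2C, C)
    w2 = project.weight.flatten(1)                       # (C, 2C)
    s = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # (2C,)
    t = bn.bias - bn.running_mean * s                    # (2C,)
    w = w2 @ (s.unsqueeze(1) * w1)                       # (C, C)
    fused = nn.Conv3d(w.shape[1], w.shape[0], 1)
    fused.weight.copy_(w.view(*w.shape, 1, 1, 1))
    fused.bias.copy_(w2 @ t)
    return fused
```

After training, the `expand`/`bn`/`project` triple would be replaced by the single fused convolution, shrinking the stored model while leaving the inference output unchanged up to floating point rounding.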
Method              Dice Avg (Std)   Percentiles 25/50/75   Parameters
baseline nnUNet     89.7 (16.8)      92/95/97               30,600k
half nnUNet         74.9 (26.0)      68/88/94               7,656k
ConvNeXt            88.2 (14.4)      88/93/95               4,084k
ours FasterFusion   90.5 (15.6)      93/95/97               5,834k

Figure 2: Left: validation Dice over epochs shows more robust training of our method compared to the other low-parameter models. Middle: quantitative results (table above) demonstrate improved quality and 81% fewer parameters (at inference). Right: visual segmentation examples show that an nnUNet with halved channel sizes yields some inaccurate vertebrae.

3. Experimental results and Conclusion

We evaluate all method variants on the VerSe19 vertebrae multi-label segmentation task within the nnUNet framework, re-orienting all patients into a prone pose with heads up and disabling mirror augmentation, but otherwise using default parameters. All models were trained on a single RTX A4000 with 16 GB for 150 epochs on 180 training scans and evaluated on 22 test cases showing on average 10 out of 25 vertebrae. Note that we replaced the default nnUNet layers with our FasterFusion blocks only when the channel size was 160 or above, since we found that for small kernels the model size and runtime improvements were negligible. Fig. 2 highlights the fact that, while using 5× fewer parameters, our approach matches the quality of full-sized nnUNets. Our model excels with the highest average Dice of 90.5% and does not suffer from the slow or unstable training progress of the half-sized nnUNet and ConvNeXt variants, respectively.

Conclusion: Our work and its experimental findings indicate that applying T-shaped convolutions (in which 3×3×3 kernels only act partially on the input channel width) together with pointwise operators within a specifically designed inverted bottleneck, combined with re-parameterisation, offers an exciting new strategy for better balancing the model size, training effort and computational burden of inference of deep segmentation networks in medical imaging and beyond.

Acknowledgments

I would like to thank Alex Bigalke for careful proof-reading.

References

Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, and S-H Gary Chan. Run, don't walk: Chasing higher FLOPS for faster neural networks. arXiv preprint arXiv:2303.03667, 2023.

Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13733–13742, 2021.

Fabian Isensee, Paul F Jaeger, Simon AA Kohl, Jens Petersen, and Klaus H Maier-Hein. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature Methods, 18(2):203–211, 2021.

Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11976–11986, 2022.

Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), pages 116–131, 2018.

Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel, and Anurag Ranjan. An improved one millisecond mobile backbone. arXiv preprint arXiv:2206.04040, 2022. |
kvpAErerdkc | Medical Imaging with Deep Learning

Uncertainty for Proximal Femur Fractures Classification

Mayar Lotfy* mayar.mostafa@tum.de
Chair of Computer Aided Medical Procedures, Technical University of Munich, Germany
Selina Frenner* selina.frenner@gmail.com
Marc Beirer marc.beirer@sbk-vs.de
Peter Biberthaler peter.biberthaler@tum.de
TUM School of Medicine, Technical University of Munich, Germany
Shadi Albarqouni shadi.albarqouni@ukbonn.de
Clinic for Diagnostic and Interventional Radiology, University Hospital Bonn, Germany
Helmholtz AI, Helmholtz Munich, Germany

Abstract
Deep learning methods have, over the past years, provided high-performance solutions for medical applications. Yet, robustness and quality control are still required for clinical applicability. In this work, the uncertainty of proximal femur fracture classification is modeled. We introduce a reliability measure to our predictive model using the Monte Carlo Dropout approach. We performed an extensive quantitative and qualitative analysis to validate the results. We further exposed the results to expert physicians in order to get feedback on the model's performance and uncertainty measures. The results demonstrate a positive correlation between misclassifications of the model's predictions and high uncertainty scores. Additionally, the uncertainty measures mimic the actual radiologists' uncertainty for challenging examples, reflected in intra- and inter-expert variability.
Keywords: Deep Learning, Uncertainty, Quality Control, Radiology, Proximal femur fractures.

1. Introduction

In the realm of medical diagnosis, deep learning has emerged as a powerful tool for pattern recognition, showcasing remarkable advancements in recent years. Studies have shown that integrating automated AI models with emergency medicine clinicians significantly enhances their ability to accurately detect fractures, ultimately improving patient outcomes (Lindsey et al., 2018). Among the most prevalent fractures globally are proximal femur fractures, where prompt diagnosis and treatment are crucial for patient well-being and even survival (Schroeder et al., 2022). However, accurate diagnosis heavily relies on the experience of medical professionals (Plant et al., 2015). To further enhance diagnostic accuracy, computer-aided diagnosis (CAD) systems hold immense potential for reducing errors, optimizing treatment costs, and saving time in future medical practice (Gale et al., 2017). Nevertheless, to ensure the successful integration of such systems into clinical routines, evaluating not only the overall performance but also the reliability of individual diagnoses is essential. This study addresses this need by introducing Monte Carlo dropout (MCDO) layers (Kendall and Gal, 2017), a state-of-the-art approximation of Bayesian neural networks, as a quality control measure. The incorporation of these layers provides a reliable uncertainty score for automated proximal femur fracture classification, following the AO fracture classification system.

* Contributed equally
©CC-BY 4.0, M. Lotfy, S. Frenner, M. Beirer, P. Biberthaler & S. Albarqouni.
2. Methodology

Given a dataset consisting of 1347 X-ray images and their corresponding labels y ∈ C, where C represents one of three label sets for different classification scenarios, C ∈ {C1, C2, C3}, with C1 = {Fracture, Normal} for fracture detection, C2 = {A, B, Normal} for the three-class scenario, and C3 = {A1, A2, A3, B1, B2, B3} for the six-class scenario, our objective is to develop a femur fracture classification model that assigns each image a class label along with an uncertainty score. The database used in this study was collected from 672 patients by the trauma surgery department of Klinikum rechts der Isar. The ground truth labels were provided by three different experts, and all images were verified by a senior radiologist. Additional annotations were obtained from three independent experts for the test set, with each expert providing two independent readings conducted at different times, accounting for variations in shifts and lighting conditions. To assess the uncertainty scores qualitatively, an independent radiologist re-evaluated 30% of the test set.

3. Experiments and Results

The focus of our work is not to outperform the state of the art, but rather to capture the uncertainty while achieving comparable performance. The experiments were designed to analyze the performance of MCDO under 1) stochastic (with MCDO) vs. deterministic networks, 2) different loss functions, namely Cross Entropy (CE), Weighted Cross Entropy (WCE), and Focal Loss (FL) (Lin et al., 2017), and 3) different network architectures.

Implementation. The ResNet model was adopted from (Jiménez-Sánchez et al., 2018), where MCDO was introduced only at the last dense layer, treating the rest as a deterministic network. For the stochastic DenseNet model, dropout layers were introduced at each convolutional layer and in the transition blocks, with the same hyper-parameters adopted from (Huang et al., 2017). In addition, 5-fold cross-validation was conducted for the DenseNet models. To evaluate the performance of our models, we compute the confusion matrix, the F1-score and the macro-average F1-score for each classification scenario to account for class imbalance.
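As a concrete illustration of the inference procedure (a generic sketch of the standard MC dropout recipe of Kendall and Gal (2017), not the authors' implementation), uncertainty can be estimated by keeping the dropout layers active at test time, averaging the softmax outputs of T stochastic forward passes, and scoring each prediction with, for example, the predictive entropy:

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, T=25):
    """Monte Carlo dropout inference: T stochastic forward passes with dropout
    kept active, returning predicted classes and a per-sample uncertainty."""
    model.eval()
    # Re-enable only the dropout layers (BatchNorm etc. stay in eval mode).
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    p_mean = probs.mean(dim=0)                               # (B, num_classes)
    entropy = -(p_mean * torch.log(p_mean + 1e-12)).sum(1)   # (B,) uncertainty score
    return p_mean.argmax(dim=1), entropy
```

A high entropy flags a prediction the model is unsure about; the analysis below compares such scores against misclassifications and against expert disagreement.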
Results. In general, models with MCDO layers achieved F1-score performance comparable to their own baseline models. To analyze and compare the performance of MCDO under different settings, namely deterministic (ResNet) and stochastic (DenseNet) models, against the individual readings of the three experts and the majority consensus, we visualize the receiver operating characteristic (ROC) curves in Fig. 2. Overall, our stochastic DenseNet models performed similarly to the experts' readings. Further, the coherence between the uncertainty scores and misclassification was qualitatively measured. The results demonstrated that the misclassified images mostly occur in the highly uncertain region. This is most apparent for fracture detection and the three-class scenario. The six-class scenario shows a Gaussian distribution of the uncertainty scores with almost no coherence with misclassification; this scenario is the most challenging of the three, which is aligned with the reported F1-scores. This confirms two key outcomes. First, the uncertainty score is a reliable measure for detecting mistakes in the model's predictions and a valid robustness quality control. Second, the model's performance is reflected by how well and coherently the uncertainty is modeled, i.e. ResNet+ vs. DenseNet+ (cf. Fig. 1).

[Figure 1 panels, from left to right: "Image is clear" (App. No, Frac. No; Read1 B/B/B, Read2 B/B/B; GT A, Pred. B); "Overlapping soft tissue artefacts as disturbing factor" (App. Yes, Frac. Yes; Read1 N/B/N, Read2 N/N/N; GT B, Pred. B); "Image is taken after operation; healed fracture with sclerotic transformations after screws removal" (App. Yes, Frac. No; Read1 B/N/B, Read2 N/B/B; GT N, Pred. B).]

Figure 1: Qualitative assessment: In the 3-class scenario, the assessment reveals the following (from left to right): Low Uncertainty Misclassified, High Uncertainty Correctly Classified, and High Uncertainty Misclassified cases.

Lastly, the coherence between the calculated uncertainty scores on the test images and the uncertainty of the radiologists' annotations was validated. In a like manner, these scores were compared with the inter- and intra-observer variability of the three independent experts, who each provided two distinct readings. The radiologist was asked to provide comments on the image and the fracture, specifying whether the classification was an easy or challenging task, and whether the difficulty came from the fracture complexity (i.e. cognitive) or from the appearance (i.e. perceptual). To this end, we expect high uncertainty to be reflected in appearance and fracture difficulty, as well as in the disagreement and the inter-/intra-expert variability among the three experts.

Conclusion

This paper compared the performance of various networks in classifying proximal femur fractures while incorporating an uncertainty score for quality control. The results demonstrated a strong correlation between misclassification and uncertainty scores, with the stochastic DenseNet implementation exhibiting the highest alignment with misclassified cases. This implies that a high uncertainty score indicates a higher risk of prediction errors. Moreover, qualitative analysis showed that the uncertainty measures closely mirrored the actual uncertainty experienced by radiologists when dealing with challenging and complex cases, as evident from intra- and inter-expert variability. These findings have important implications in both scientific research and clinical applications. In research, they can be used to enhance the training of computer-aided diagnosis (CAD) systems by identifying errors and addressing difficulties, particularly in complex classifications. They can also serve as a crucial element in facilitating the clinical implementation of deep learning models by providing clinicians with a quantitative measure of quality for CAD predictions. Future work should focus on enhancing the robustness of the models and expanding the analysis to different datasets, including other anatomical regions of the human body.

Figure 2: Clinical experts vs. our model. Comparison of the different architecture models and the clinical experts for the 2-, 3- and 6-class scenarios, respectively, from left to right.

References

William Gale, Luke Oakden-Rayner, Gustavo Carneiro, Andrew P Bradley, and Lyle J Palmer. Detecting hip fractures with radiologist-level performance using deep neural networks. arXiv preprint arXiv:1711.06504, 2017.

Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.

Amelia Jiménez-Sánchez, Anees Kazi, Shadi Albarqouni, Sonja Kirchhoff, Alexandra Sträter, Peter Biberthaler, Diana Mateus, and Nassir Navab. Weakly-supervised localization and classification of proximal femur fractures. arXiv preprint arXiv:1809.10692, 2018.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems, pages 5574–5584, 2017.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980–2988, 2017.

Robert Lindsey, Aaron Daluiski, Sumit Chopra, Alexander Lachapelle, Michael Mozer, Serge Sicular, Douglas Hanel, Michael Gardner, Anurag Gupta, Robert Hotchkiss, et al. Deep neural network improves fracture detection by clinicians. Proceedings of the National Academy of Sciences, 115(45):11591–11596, 2018.

CE Plant, C Hickson, H Hedley, NR Parsons, and ML Costa. Is it time to revisit the AO classification of fractures of the distal radius? Inter- and intra-observer reliability of the AO classification. The Bone & Joint Journal, 97(6):818–823, 2015.

Jeremy D Schroeder, Sean P Turner, and Emily Buck. Hip fractures: Diagnosis and management. American Family Physician, 106(6):675–683, 2022.