
Multimodal Procedural Planning via Dual Text-Image Prompting

Embodied agents have achieved prominent performance in following human instructions to complete tasks. However, the potential of providing instructions informed by texts and images to assist humans in completing tasks remains underexplored. To uncover this capability, we present the multimodal procedural planning (MPP) task, in which models are given a high-level goal and generate plans of paired text-image steps, providing more complementary and informative guidance than unimodal plans. The key challenges of MPP are to ensure the informativeness, temporal coherence, and accuracy of plans across modalities. To tackle this, we propose Text-Image Prompting (TIP), a dual-modality prompting method that jointly leverages the zero-shot reasoning ability of large language models (LLMs) and the compelling text-to-image generation ability of diffusion-based models. TIP improves the interaction between the two modalities using a Text-to-Image Bridge and an Image-to-Text Bridge, allowing LLMs to guide textually grounded image plan generation and, in turn, leveraging descriptions of the image plans to ground the textual plan. To address the lack of relevant datasets, we collect WIKIPLAN and RECIPEPLAN as a testbed for MPP. Our results show compelling human preferences and automatic scores against unimodal and multimodal baselines on WIKIPLAN and RECIPEPLAN in terms of informativeness, temporal coherence, and plan accuracy. Our code and data: https://github.com/YujieLu10/MPP.
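
As a rough illustration of the dual-prompting loop the abstract describes, here is a minimal Python sketch. The three helpers (`llm_generate`, `text_to_image`, `caption_image`) are hypothetical stand-ins for an LLM, a diffusion model, and an image captioner, and the loop structure is an assumption about how the two bridges interact, not the authors' actual implementation.

```python
# Minimal sketch of a dual text-image prompting loop.
# The helpers below are hypothetical placeholders, not the TIP codebase's API.

def llm_generate(prompt: str) -> str:
    # Placeholder: would call a large language model.
    raise NotImplementedError

def text_to_image(prompt: str):
    # Placeholder: would call a diffusion-based text-to-image model.
    raise NotImplementedError

def caption_image(image) -> str:
    # Placeholder: would call an image-captioning model.
    raise NotImplementedError

def multimodal_procedural_plan(goal: str, num_steps: int = 5):
    """Generate paired (text, image) steps for a high-level goal."""
    plan = []
    for i in range(1, num_steps + 1):
        previous = [step["text"] for step in plan]
        # The LLM drafts the next textual step given the goal and prior steps.
        step_text = llm_generate(
            f"Goal: {goal}\nPrevious steps: {previous}\nWrite step {i} of the plan."
        )
        # Text-to-Image Bridge: the LLM grounds image generation by rewriting
        # the step as a concise visual scene description.
        image_prompt = llm_generate(
            f"Rewrite this step as a concise visual scene description: {step_text}"
        )
        step_image = text_to_image(image_prompt)
        # Image-to-Text Bridge: a caption of the generated image is fed back
        # so the textual step can be revised to stay consistent with it.
        caption = caption_image(step_image)
        step_text = llm_generate(
            f"Revise the step '{step_text}' to be consistent with an image showing: {caption}"
        )
        plan.append({"text": step_text, "image": step_image})
    return plan
```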

A Molecular Ferroelectric Thin Film of Imidazolium Perchlorate on Silicon

Molecular ferroelectric materials have attracted widespread attention due to their abundant chemical diversity, structural tunability, low synthesis temperature, and high flexibility. However, integrating molecular ferroelectrics with Si remains challenging, and a fundamental understanding of the ferroelectric switching process is still lacking. Herein, we have successfully synthesized imidazolium perchlorate (ImClO4) single crystals and a series of high-quality, highly oriented thin films on a Si substrate. A high inverse piezoelectric coefficient (55.7 pm/V) is demonstrated for the thin films. Two types of domain bands, a few microns in size, can be observed: the type-I band tilts ~60° with respect to the horizontal axis, while the type-II band is perpendicular to it. Most of the domain walls (DWs) in the two bands are 180° DWs, while some 109° DWs can also be observed. Interestingly, the DWs in the type-I band are curved, charged domain walls, whereas the 180° DWs in the type-II band are straight, uncharged domain walls. After applying +20 V for 5 s through a PFM tip, the 180° DWs in the type-I band first shrink, then disconnect from the band boundary, forming a needle-like domain with a size of ~100 nm. The needle-like domain extends toward the band boundary after an inverse bias (-20 V) is applied, and expands along the band boundary after touching it. For the type-II domain band, the 180° DWs are more mobile than the 109° DWs, displacing ~500 nm after applying +20 V; the displacement is much shorter after a negative bias of the same duration, starting from the positively poled sample. We hope this work spurs further interest in the on-chip design of molecular-ferroelectric-based electronic devices.

CapS-Adapter: Caption-based MultiModal Adapter in Zero-Shot Classification

Recent advances in vision-language foundation models, such as CLIP, have demonstrated significant strides in zero-shot classification. However, the extensive parameterization of models like CLIP necessitates a resource-intensive fine-tuning process. In response, TIP-Adapter and SuS-X have introduced training-free methods aimed at bolstering the efficacy of downstream tasks. While these approaches incorporate support sets to maintain data distribution consistency between the knowledge cache and test sets, they often fall short in terms of generalization on the test set, particularly when faced with test data exhibiting substantial distributional variations. In this work, we present CapS-Adapter, an innovative method that employs a caption-based support set, effectively harnessing both image and caption features to exceed existing state-of-the-art techniques in training-free scenarios. CapS-Adapter adeptly constructs support sets that closely mirror target distributions, utilizing instance-level distribution features extracted from multimodal large models. By leveraging CLIP's single- and cross-modal strengths, CapS-Adapter enhances predictive accuracy through the use of multimodal support sets. Our method achieves outstanding zero-shot classification results across 19 benchmark datasets, improving accuracy by 2.19% over the previous leading method. Our contributions are substantiated through extensive validation on multiple benchmark datasets, demonstrating superior performance and robust generalization capabilities. Our code is made publicly available at https://github.com/WLuLi/CapS-Adapter.
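
To make the training-free, cache-based idea concrete, the following is a rough NumPy sketch of a TIP-Adapter-style cache augmented with caption features, in the spirit of what the abstract describes. The feature fusion (simple averaging of image and caption features) and the hyperparameters alpha and beta are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def caps_style_predict(test_img_feats, support_img_feats, support_cap_feats,
                       support_labels, clip_text_feats, alpha=1.0, beta=5.5):
    """Training-free, cache-based prediction with a caption-augmented support set.

    All features are assumed to be L2-normalized:
      test_img_feats    : (N, D) CLIP image features of the test images
      support_img_feats : (M, D) CLIP image features of the support set
      support_cap_feats : (M, D) CLIP text features of the support captions
      support_labels    : (M, C) one-hot labels of the support set
      clip_text_feats   : (C, D) CLIP text features of the class prompts
    """
    # Zero-shot CLIP logits: cosine similarity between test images and class prompts.
    zero_shot_logits = test_img_feats @ clip_text_feats.T

    # Cache keys fuse image and caption features of each support sample
    # (simple averaging here -- an illustrative choice, not the paper's exact fusion).
    keys = support_img_feats + support_cap_feats
    keys /= np.linalg.norm(keys, axis=1, keepdims=True)

    # Affinity of each test image to the cache keys, sharpened as in
    # TIP-Adapter-style caches, then mapped to class scores via the labels.
    affinity = np.exp(-beta * (1.0 - test_img_feats @ keys.T))
    cache_logits = affinity @ support_labels

    # Blend the zero-shot logits with the cache-based logits.
    return zero_shot_logits + alpha * cache_logits
```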