💃 Introducing the first LLM-based motion understanding model: MotionLLM

Community Article Published June 26, 2024

As this is a blog post, we only want to highlight the key features and insights of MotionLLM and discuss its applications. As a research work, we would like to emphasize what we consider most essential.

Q1: Do previous methods really fall short?

A1: Yes. We answer this question in two parts.

  1. The most competitive baseline for video understanding is Video-LLaVA, which we acknowledge as good work. However, it does not follow instructions well at inference time. In particular, it tends to describe environmental information when asked about human behaviors.

  2. For the motion modality, the gap toward “good” understanding is even larger. The largest existing model (MotionGPT, from Jiang et al.) has not been scaled to the billion-parameter level. Moreover, it lacks strong generalization ability, especially for reasoning.

These observations motivated us to develop MotionLLM. Many peers in the motion community have asked me, “Why can't motion-related models be scaled to the billion-parameter level?” Today, MotionLLM does exactly that.

Q2: Why are the supported modalities motion and video?

A2: First, motion data is less redundant, free of distracting context, and much more privacy-preserving. This makes it easier for deep models to compress. However, motion data is less grounded in the environment, especially for “non-physical” performance examples. Fortunately, grounding is exactly what video can provide. That is our basic motivation.

Q3: What is our research target?

A3: We aim to close two loops.

  1. Building a closed loop of motion→text→motion. If we collect many motions (from mocap or videos) and caption them, it becomes easy to scale up the data. This is quite useful for text-to-motion generation (a toy sketch follows this list).

  2. Human(oid)-in-the-loop comprehension. An agent that can see the world can close an action-reaction loop. We believe this is the foundation of human(oid)-environment interaction.
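As a toy illustration of the first loop, here is a minimal sketch in Python. The helpers `caption_motion` and `train_text_to_motion` are hypothetical placeholders for illustration, not functions from our released code.

```python
# Toy sketch of the motion -> text -> motion loop. `caption_motion` (e.g. a
# captioner such as MotionLLM) and `train_text_to_motion` are hypothetical
# placeholders, not part of any released codebase.

def bootstrap_text_to_motion(raw_motions, caption_motion, train_text_to_motion):
    # Step 1 (motion -> text): automatically caption every motion clip.
    paired_data = [(caption_motion(motion), motion) for motion in raw_motions]
    # Step 2 (text -> motion): use the auto-captioned pairs as training data
    # for a text-to-motion generator, closing the loop.
    return train_text_to_motion(paired_data)
```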


We analyzed these points when starting the project. I discussed with Shunlin why previous methods failed, and we summarize the issues as follows.

The most inspiring model, MotionGPT, cannot thoroughly bridge the gap between motion and language. Motion is not simply another foreign language; more adaptation is needed.

Data is also to blame. There is only limited data available for describing human behaviors.

For the first issue, our technical solution is quite simple. Motivated by LLaVA, we bridge the modality gap via a linear projection layer. However, unlike Video-LLaVA and LLaVA, the modality gap between motion and video is larger, so the two modalities do not share a projection layer. Note that motion and video still share knowledge in the LLM backbone, where they help each other. To make full use of “motion-text-video” triplet data, we include all of it in training and find that joint training indeed yields better results. This also helps answer the second question (Q2) above. A minimal sketch of this two-projector design is shown below.
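The following PyTorch sketch only illustrates the idea of separate motion/video projectors feeding a shared LLM; the module names, feature dimensions, and the `inputs_embeds` interface are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class MotionLLMSketch(nn.Module):
    """Minimal sketch (not the official code): separate linear projectors
    map motion and video features into a shared LLM embedding space."""

    def __init__(self, llm, motion_dim=512, video_dim=1024, llm_dim=4096):
        super().__init__()
        self.llm = llm  # a language model backbone (assumed HF-style interface)
        # Separate projection layers: motion and video do NOT share weights,
        # because the motion-video gap is larger than the video-text gap.
        self.motion_proj = nn.Linear(motion_dim, llm_dim)
        self.video_proj = nn.Linear(video_dim, llm_dim)

    def forward(self, text_embeds, motion_feats=None, video_feats=None):
        # Project each available modality into the LLM token-embedding space
        # and prepend the resulting tokens to the text embeddings.
        tokens = [text_embeds]
        if motion_feats is not None:
            tokens.insert(0, self.motion_proj(motion_feats))
        if video_feats is not None:
            tokens.insert(0, self.video_proj(video_feats))
        inputs = torch.cat(tokens, dim=1)  # (batch, seq_len, llm_dim)
        return self.llm(inputs_embeds=inputs)
```

Both projected modalities end up as ordinary token embeddings, so the LLM backbone is where the shared knowledge between motion and video lives.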

For the second issue, we paid tens of thousands of US dollars to annotate data, including captions and QA pairs. I believe in “no dirty work, no perfect result.” We think this data will be quite useful to the community.

As for results, I will not dwell on the SOTA numbers here; instead, I want to highlight some applications of MotionLLM.

  • MotionLLM enables basic motion/behavior understanding.

  • MotionLLM can serve as a fitness coach, especially for people with disabilities.

  • MotionLLM can act as a commentator for large events.

  • MotionLLM can act as an agent that controls a humanoid and performs actions.

… …

For details, please visit our homepage (https://lhchen.top/MotionLLM/).

By Ling-Hao Chen and Shunlin Lu. Credit also goes to our other co-authors.

📜 Citation

@article{chen2024motionllm,
  title={MotionLLM: Understanding Human Behaviors from Human Motions and Videos},
  author={Chen, Ling-Hao and Lu, Shunlin and Zeng, Ailing and Zhang, Hao and Wang, Benyou and Zhang, Ruimao and Zhang, Lei},
  journal={arXiv preprint arXiv:2405.20340},
  year={2024}
}