Nanyang Technological University¹, ByteDance², National University of Singapore³
*Part of this work was done during an internship at ByteDance.
Despite recent progress, video generative models still struggle to animate static images into videos that portray delicate human actions, particularly for uncommon or novel actions with limited training data. In this paper, we explore the task of learning to animate images to portray delicate human actions from a small number of videos -- 16 or fewer -- which is highly valuable for real-world applications such as video and movie production. Learning motion patterns that generalize and transition smoothly from user-provided reference images is highly challenging in this few-shot setting. We propose FLASH (Few-shot Learning to Animate and Steer Humans), which learns generalizable motion patterns by forcing the model to reconstruct a video using the motion features and cross-frame correspondences of another video that shows the same motion but a different appearance. This encourages transferable motion learning and mitigates overfitting to the limited training data. In addition, FLASH extends the decoder with additional layers that propagate details from the reference image to the generated frames, improving transition smoothness. Human judges overwhelmingly favor FLASH, with 65.78% of 488 responses preferring FLASH over the baselines.
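To make the cross-video reconstruction idea above concrete, here is a minimal, hypothetical training-step sketch in PyTorch style. It is not the actual FLASH implementation: the module names (motion_encoder, appearance_encoder, generator) and the plain MSE reconstruction loss are illustrative assumptions standing in for the paper's components.

import torch.nn.functional as F

def motion_transfer_step(video_a, video_b, motion_encoder, appearance_encoder,
                         generator, optimizer):
    # video_a, video_b: tensors of shape (batch, frames, channels, height, width).
    # video_b shows the same action as video_a but with a different appearance.
    reference_frame = video_a[:, 0]                  # appearance comes from A's first frame
    motion_feats = motion_encoder(video_b)           # motion is borrowed from B
    appearance_feats = appearance_encoder(reference_frame)

    # The generator must reconstruct A from A's appearance and B's motion, so the
    # motion features it relies on are discouraged from encoding appearance details.
    generated = generator(appearance_feats, motion_feats)

    loss = F.mse_loss(generated, video_a)            # simple reconstruction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Pairing each training video with another clip of the same action but a different subject is what pushes the learned motion representation to transfer across appearances instead of memorizing the few training examples.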
@article{li2025learning,
  title={Learning to Animate Images from A Few Videos to Portray Delicate Human Actions},
  author={Li, Haoxin and Yu, Yingchen and Wu, Qilong and Zhang, Hanwang and Bai, Song and Li, Boyang},
  journal={arXiv preprint arXiv:2503.00276},
  year={2025}
}