Below are introduction videos generated by our method, in which synthesized avatars present and explain our work. The driving audio is synthesized by IndexTTS2. Click the tabs below to switch between examples.
Existing talking avatar methods typically adopt an image-to-video pipeline conditioned on a static reference image drawn from the same scene as the target generation. This restricted, single-view perspective lacks sufficient temporal and expression cues, limiting the ability to synthesize high-fidelity talking avatars in customized backgrounds. To address this, we introduce Talking Avatar generation from Video Reference (TAVR), a novel framework that shifts the paradigm by leveraging cross-scene video inputs. To process these extended temporal contexts efficiently and to bridge cross-scene domain gaps, TAVR integrates a token selection module with a three-stage training scheme. Specifically, same-scene video pretraining establishes foundational appearance copying; cross-scene reference fine-tuning then adapts the model to references from different scenes; finally, task-specific reinforcement learning aligns the generated outputs with human-centric rewards to maximize identity similarity. To systematically evaluate cross-scene robustness, we construct a new benchmark comprising 158 carefully curated cross-scene video pairs. Extensive experiments show that TAVR benefits from flexible inference-time video referencing and consistently surpasses existing baselines both quantitatively and qualitatively.
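To make the three-stage scheme concrete, here is a minimal sketch of the training schedule in plain Python. The `model.step` / `model.sample` / `model.reinforce` interface and all function names are our own illustrative assumptions, not the released TAVR code.

```python
# Illustrative outline of the three-stage training scheme described above.
# The `model` interface (step/sample/reinforce) is a hypothetical stand-in.

def pretrain_same_scene(model, same_scene_videos):
    """Stage 1: same-scene video pretraining establishes appearance copying."""
    for batch in same_scene_videos:
        model.step(batch)  # denoising objective; reference and target share a scene

def finetune_cross_scene(model, cross_scene_pairs):
    """Stage 2: cross-scene reference fine-tuning bridges the domain gap."""
    for batch in cross_scene_pairs:
        model.step(batch)  # reference video now comes from a different scene

def align_with_rewards(model, cross_scene_pairs, identity_reward):
    """Stage 3: task-specific RL aligns outputs with human-centric rewards."""
    for batch in cross_scene_pairs:
        video = model.sample(batch)                     # roll out a generation
        model.reinforce(video, identity_reward(video))  # e.g., identity similarity
```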
Overview of the TAVR framework. Our framework generates high-fidelity talking avatars with customized backgrounds by integrating cross-scene video references. Visual inputs, comprising the video reference and the masked target background, are encoded into latents by the VAE and then passed through a Token Selection module that reduces computational redundancy. These tokens, together with an optional motion latent for longer video synthesis, are concatenated with the noisy target latent and forwarded through adapted Transformer blocks. Within each block, a Reference Self-Attention module extends standard self-attention to jointly process target and reference features. Two cross-attention modules then inject guidance from the text prompt and the audio signals. Notably, the audio module also attends the reference stream to the corresponding reference audio, providing explicit audio-visual cues that guide the network to accurately locate and extract the intrinsic speaking dynamics from the reference tokens.
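As a rough illustration of one adapted Transformer block, the PyTorch sketch below chains the three attention stages in the order described above. The module names, dimensions, and the exact residual/normalization layout are assumptions on our part, not the released implementation.

```python
import torch
import torch.nn as nn

class AdaptedBlock(nn.Module):
    """Illustrative adapted block: joint target/reference self-attention,
    then text and audio cross-attention. Layout and sizes are placeholders."""

    def __init__(self, dim: int = 1024, heads: int = 16):
        super().__init__()
        self.ref_self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(5)])
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, target, reference, text_ctx, audio_ctx, ref_audio_ctx):
        n = target.size(1)
        # Reference Self-Attention: self-attention over the concatenation of
        # target and reference tokens, so target tokens can attend directly
        # to appearance and motion cues in the reference stream.
        x = torch.cat([target, reference], dim=1)
        h = self.norms[0](x)
        x = x + self.ref_self_attn(h, h, h, need_weights=False)[0]
        target, reference = x[:, :n], x[:, n:]

        # Text guidance via cross-attention on the target stream.
        h = self.norms[1](target)
        target = target + self.text_cross_attn(h, text_ctx, text_ctx,
                                               need_weights=False)[0]

        # Audio guidance: the target stream attends to the driving audio, and
        # the reference stream attends to its own reference audio, supplying
        # explicit audio-visual cues for locating speaking dynamics.
        h = self.norms[2](target)
        target = target + self.audio_cross_attn(h, audio_ctx, audio_ctx,
                                                need_weights=False)[0]
        h = self.norms[3](reference)
        reference = reference + self.audio_cross_attn(h, ref_audio_ctx, ref_audio_ctx,
                                                      need_weights=False)[0]

        target = target + self.mlp(self.norms[4](target))
        return target, reference

# Shape check with dummy tensors (token counts are arbitrary):
block = AdaptedBlock()
tgt, ref = block(torch.randn(1, 256, 1024),  # noisy target latent tokens
                 torch.randn(1, 128, 1024),  # selected reference tokens
                 torch.randn(1, 77, 1024),   # text prompt embeddings
                 torch.randn(1, 50, 1024),   # driving audio features
                 torch.randn(1, 50, 1024))   # reference audio features
```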
We compare TAVR against state-of-the-art talking avatar generation methods: StableAvatar, EchoMimicV3, OmniAvatar, HuMo, and LongCat-Video-Avatar. As most baselines are designed for same-scene image referencing, we adapt them for cross-scene evaluation using two distinct testing protocols.
Protocol 1 (direct reference): the raw, unedited cross-scene reference image is provided directly to each baseline as its identity reference; our method (TAVR) takes a reference video instead.
Protocol 2 (edited reference): the cross-scene reference image is first contextually adapted to the target scene using Qwen-Image-Edit, and the edited image is then fed to each baseline. Both protocols are sketched below.
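For clarity, here is a minimal sketch of the two protocols in plain Python. The `baseline.generate` call, the `edit_image` callable, and all argument names are hypothetical stand-ins; Qwen-Image-Edit is abstracted behind `edit_image` rather than invoked through any specific API.

```python
# Illustrative versions of the two baseline testing protocols.

def protocol_direct(baseline, ref_image, driving_audio, target_bg):
    # Protocol 1: feed the unedited cross-scene reference image directly.
    return baseline.generate(reference=ref_image,
                             audio=driving_audio,
                             background=target_bg)

def protocol_edited(baseline, ref_image, driving_audio, target_bg, edit_image):
    # Protocol 2: first adapt the reference into the target scene with an
    # image editor (Qwen-Image-Edit in our experiments), then run the baseline.
    adapted = edit_image(ref_image,
                         prompt="place the subject in the target scene")
    return baseline.generate(reference=adapted,
                             audio=driving_audio,
                             background=target_bg)
```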
Additional qualitative results on our benchmark demonstrating generalization across diverse identities, scenes, and motion patterns.