Stratified Avatar Generation from Sparse Observations

CVPR 2024
1 Wuhan University   2 Pennsylvania State University   3 University of Southern California   4 Ant Group  
Stratified avatar generation from sparse observations. Given sparse sensory observations of body motion, i.e., the 6-DoF poses of the head and hands marked by RGB axes in (a), our method leverages a disentangled body representation (b) to reconstruct the upper body conditioned on the sparse observations (c), and then the lower body conditioned on the upper-body reconstruction (d), yielding the full-body reconstruction (e).

Video

Abstract

Estimating 3D full-body avatars from AR/VR devices is essential for creating immersive experiences in AR/VR applications. The task is challenging because head-mounted devices capture only sparse observations, from the head and hands. Predicting the full-body avatar, particularly the lower body, from these sparse observations is therefore difficult. In this paper, we draw inspiration from an inherent property of the kinematic tree defined in the Skinned Multi-Person Linear (SMPL) model: the upper body and lower body share only one common ancestor node, which opens the possibility of decoupled reconstruction. We propose a stratified approach that splits the conventional full-body reconstruction pipeline into two stages, reconstructing the upper body first and then the lower body conditioned on the result of the first stage. To implement this straightforward idea, we leverage a latent diffusion model as a powerful probabilistic generator and train it to follow the latent distribution of the decoupled motions learned by a VQ-VAE encoder-decoder. Extensive experiments on the AMASS mocap dataset demonstrate state-of-the-art performance in full-body motion reconstruction.
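The decoupling argument rests on the structure of the standard 24-joint SMPL kinematic tree, where the pelvis (joint 0) is the only node shared by the upper- and lower-body chains. The sketch below (illustrative, not the authors' code) makes this concrete with the standard SMPL parent array and a small lowest-common-ancestor check:

```python
# Standard 24-joint SMPL kinematic tree: parents[j] is the parent of joint j
# (-1 marks the root, i.e., the pelvis at index 0).
SMPL_PARENTS = [-1, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8,
                9, 9, 9, 12, 13, 14, 16, 17, 18, 19, 20, 21]

LOWER_BODY = {1, 2, 4, 5, 7, 8, 10, 11}          # hips, knees, ankles, feet
UPPER_BODY = set(range(24)) - LOWER_BODY - {0}   # spine, neck, head, arms

def common_ancestor(j, k, parents=SMPL_PARENTS):
    """Lowest common ancestor of joints j and k in the kinematic tree."""
    ancestors = set()
    while j != -1:          # collect the chain from j up to the root
        ancestors.add(j)
        j = parents[j]
    while k not in ancestors:
        k = parents[k]      # walk up from k until the chains meet
    return k

# Every upper-body joint and every lower-body joint meet only at the pelvis,
# so the two halves can be reconstructed in separate, conditioned stages.
assert all(common_ancestor(u, l) == 0
           for u in UPPER_BODY for l in LOWER_BODY)
```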

Approach

The proposed SAGE Net consists of two main components. (a) A disentangled VQ-VAE for learning discrete human motion latents. For visualization only, we pad the lower body with zero rotations in the Upper VQ-VAE, and vice versa for the Lower VQ-VAE; consequently, the lower body stays stationary in the Upper VQ-VAE visualizations, while the upper body remains in a T-pose in the Lower VQ-VAE visualizations. (b) A stratified diffusion model that captures the conditional distribution of the latent space for upper- and lower-body motion. It infers the upper- and lower-body latents sequentially, modeling the correlation between the two. A dedicated full-body decoder applied to the concatenated upper and lower latents then produces the full-body motion.
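The stratified inference flow described above can be sketched as follows. This is a minimal data-flow illustration only: the stage functions stand in for the trained latent diffusion models and the VQ-VAE decoder, and are replaced here by fixed random linear maps with made-up dimensions (latent width `D`, an 18-dim sparse observation for three 6-DoF devices).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # illustrative latent width, not the paper's value

# Stand-ins for the trained networks: real SAGE uses latent diffusion models
# and a VQ-VAE decoder; here each stage is a fixed random linear map so only
# the stratified data flow is demonstrated.
W_upper = rng.standard_normal((D, 18))        # sparse obs: 3 devices x 6-DoF
W_lower = rng.standard_normal((D, D))
W_decode = rng.standard_normal((24 * 3, 2 * D))

def infer_upper(sparse_obs):
    """Stage 1: upper-body latent conditioned on head/hand 6-DoF poses."""
    return np.tanh(W_upper @ sparse_obs)

def infer_lower(z_upper):
    """Stage 2: lower-body latent conditioned on the upper-body latent."""
    return np.tanh(W_lower @ z_upper)

def decode_full_body(z_upper, z_lower):
    """Full-body decoder on the concatenated latents -> 24 joint rotations."""
    return (W_decode @ np.concatenate([z_upper, z_lower])).reshape(24, 3)

sparse_obs = rng.standard_normal(18)  # 6-DoF poses of the head and two hands
z_u = infer_upper(sparse_obs)
z_l = infer_lower(z_u)                # lower body never sees the raw input
pose = decode_full_body(z_u, z_l)
assert pose.shape == (24, 3)
```

Note the key design choice the sketch preserves: the lower-body stage conditions only on the upper-body latent, not directly on the sparse observations.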


Results

Qualitative results on the AMASS Dataset.



We demonstrate the generalization of SAGE Net to real VR inputs from a Quest 2 headset.

Challenging Case

From left to right: AvatarJLM, SAGE (ours), reference.

BibTeX

@inproceedings{feng2023sage,
  author    = {Feng, Han and Ma, Wenchao and Gao, Quankai and Zheng, Xianwei and Xue, Nan and Xu, Huijuan},
  title     = {Stratified Avatar Generation from Sparse Observations},
  booktitle = {CVPR},
  year      = {2024},
}