

PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations

Yang Zheng, Qingqing Zhao, Guandao Yang, Wang Yifan, Donglai Xiang, Florian Dubost, Dmitry Lagun, Thabo Beeler, Federico Tombari, Leonidas Guibas, Gordon Wetzstein

Tags: Splats, SMPL-X, Texture, Relight, Clothing, Deformation, ECCV 2024
Code: y-zheng18/PhysAvatar (Jupyter Notebook, 138 stars, 7 forks)

Official repo for PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations, ECCV 2024

Abstract

Modeling and rendering photorealistic avatars is of crucial importance in many applications. Existing methods that build a 3D avatar from visual observations, however, struggle to reconstruct clothed humans. We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human from multi-view video data along with the physical parameters of the fabric of their clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for spatio-temporal mesh tracking as well as a physically based inverse renderer to estimate the intrinsic material properties. PhysAvatar integrates a physics simulator to estimate the physical parameters of the garments using gradient-based optimization in a principled manner. These novel capabilities enable PhysAvatar to create high-quality novel-view renderings of avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data. This marks a significant advancement towards modeling photorealistic digital humans using physically based inverse rendering with physics in the loop.
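The inverse-physics step the abstract describes, estimating garment material parameters by gradient-based optimization through a simulator, can be sketched in miniature. This is not the paper's pipeline (PhysAvatar optimizes cloth parameters over the tracked garment mesh); here a single damped spring stands in for the fabric, the "observations" stand in for tracked mesh positions, and the gradient of the trajectory loss with respect to the stiffness parameter is taken by finite differences rather than by differentiating the simulator:

```python
def simulate(k, steps=100, dt=0.01, mass=1.0, damping=10.0, gravity=-9.8):
    """Semi-implicit Euler rollout of a damped spring under gravity.

    Stand-in for a cloth simulator; `k` plays the role of a fabric
    material parameter (e.g., stretch stiffness).
    """
    x, v = 0.0, 0.0
    traj = []
    for _ in range(steps):
        f = -k * x - damping * v + mass * gravity
        v += (f / mass) * dt  # update velocity first (symplectic Euler)
        x += v * dt
        traj.append(x)
    return traj

def loss(k, observed):
    """Squared error between simulated and observed trajectories."""
    return sum((a - b) ** 2 for a, b in zip(simulate(k), observed))

# Synthetic "observations" from a ground-truth stiffness, standing in
# for the positions recovered by spatio-temporal mesh tracking.
k_true = 50.0
observed = simulate(k_true)

# Gradient descent on the stiffness, with a central finite-difference
# gradient in place of a differentiable simulator.
k, lr, eps = 40.0, 100.0, 1e-4
for _ in range(300):
    g = (loss(k + eps, observed) - loss(k - eps, observed)) / (2 * eps)
    k -= lr * g

print(k)  # should land near k_true
```

In the actual method the same loop runs over a full cloth simulation, with the loss measured against the tracked garment geometry, but the structure, simulate, compare, backpropagate to material parameters, is the same.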
Paper

Approach

Figure: Paper teaser.
Figure: Method overview.

Results

Data

Comparisons

This article is Part 35 of the series "Papers Published @ 2024".