

SHERF: Generalizable Human NeRF from a Single Image

Shoukang Hu, Fangzhou Hong, Liang Pan, Haiyi Mei, Lei Yang, Ziwei Liu

NeRF
SMPL
Generalized
Monocular
ICCV 2023
skhu101/SHERF

Code for our ICCV'2023 paper “SHERF: Generalizable Human NeRF from a Single Image”


Abstract

Existing Human NeRF methods for reconstructing 3D humans typically rely on multiple 2D images from multi-view cameras or monocular videos captured from fixed camera views. However, in real-world scenarios, human images are often captured from random camera angles, presenting challenges for high-quality 3D human reconstruction. In this paper, we propose SHERF, the first generalizable Human NeRF model for recovering animatable 3D humans from a single input image. SHERF extracts and encodes 3D human representations in canonical space, enabling rendering and animation from free views and poses. To achieve high-fidelity novel view and pose synthesis, the encoded 3D human representations should capture both global appearance and local fine-grained textures. To this end, we propose a bank of 3D-aware hierarchical features, including global, point-level, and pixel-aligned features, to facilitate informative encoding. Global features enhance the information extracted from the single input image and complement the information missing from the partial 2D observation. Point-level features provide strong clues of 3D human structure, while pixel-aligned features preserve more fine-grained details. To effectively integrate the 3D-aware hierarchical feature bank, we design a feature fusion transformer. Extensive experiments on THuman, RenderPeople, ZJU_MoCap, and HuMMan datasets demonstrate that SHERF achieves state-of-the-art performance, with better generalizability for novel view and pose synthesis.
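The abstract describes fusing three feature types (global, point-level, pixel-aligned) with a feature fusion transformer. The toy sketch below illustrates the general idea with single-head self-attention over three feature tokens; it is not the paper's implementation, and the function name, dimensions, and random weights are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(global_f, point_f, pixel_f, seed=0):
    """Toy stand-in for a feature fusion transformer.

    The three 3D-aware features become three tokens; single-head
    self-attention mixes them, and the mean-pooled result serves as
    the fused conditioning feature. Weights are random here purely
    for illustration (hypothetical, not SHERF's trained weights).
    """
    rng = np.random.default_rng(seed)
    d = global_f.shape[-1]
    tokens = np.stack([global_f, point_f, pixel_f])   # (3, d)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))              # (3, 3) attention weights
    return (attn @ V).mean(axis=0)                    # (d,) fused feature
```

In the actual model, the fused feature would condition a NeRF MLP queried at each 3D point; here the pooling simply yields one vector per query point.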
Paper

Approach

SHERF overview.

Results

Data

Comparisons

Papers Published @ 2023 - This article is part of a series.
Part 18: This Article