UniSH: Unifying Scene and Human Reconstruction in a Feed-Forward Pass

Mengfei Li1, Peng Li1, Zheng Zhang2, Jiahao Lu1, Chengfeng Zhao1, Wei Xue1,
Qifeng Liu1, Sida Peng3, Wenxiao Zhang1, Wenhan Luo1, Yuan Liu1†, Yike Guo1†
1The Hong Kong University of Science and Technology
2Beijing University of Posts and Telecommunications, 3Zhejiang University
†Corresponding authors.
Teaser: Given a monocular video as input, UniSH jointly reconstructs the scene and the human in a single forward pass, enabling effective estimation of scene geometry, camera parameters, and SMPL parameters.

Abstract

We present UniSH, a unified feed-forward framework for joint metric-scale 3D scene and human reconstruction. A key challenge in this domain is the scarcity of large-scale annotated real-world data, which forces a reliance on synthetic datasets. This reliance introduces a significant sim-to-real domain gap, leading to poor generalization, low-fidelity human geometry, and inaccurate scene-human alignment on in-the-wild videos.

To address this, we propose a training paradigm that effectively leverages unlabeled in-the-wild data. Our framework bridges strong but disparate priors from scene reconstruction and human mesh recovery (HMR), and is trained with two core components: (1) a robust distillation strategy that refines human surfaces by transferring high-frequency detail from an expert depth model, and (2) a two-stage supervision scheme that first learns coarse localization on synthetic data, then fine-tunes on real data by directly optimizing the geometric correspondence between the SMPL mesh and the human point cloud. This enables our feed-forward model to jointly recover high-fidelity scene geometry, human point clouds, camera parameters, and coherent, metric-scale SMPL bodies in a single forward pass. Extensive experiments demonstrate that our model achieves state-of-the-art performance on human-centric scene reconstruction and delivers highly competitive results on global human motion estimation, comparing favorably against both optimization-based frameworks and HMR-only methods.
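
To make the two training components concrete, the following PyTorch-style sketch illustrates plausible forms of the two losses: a gradient-matching distillation term inside the human region, and a one-sided Chamfer term that pulls SMPL vertices toward the predicted human point cloud. The function names, the gradient-based formulation, and the Chamfer distance are our illustrative assumptions, not the paper's exact objectives.

import torch

def distillation_loss(pred_depth, expert_depth, human_mask):
    # Hypothetical sketch: transfer high-frequency detail from an expert
    # depth model by matching first-order depth gradients inside the human
    # mask (the paper's exact distillation objective is not given here).
    # pred_depth, expert_depth, human_mask: (..., H, W) tensors.
    dx_p = pred_depth[..., :, 1:] - pred_depth[..., :, :-1]
    dy_p = pred_depth[..., 1:, :] - pred_depth[..., :-1, :]
    dx_e = expert_depth[..., :, 1:] - expert_depth[..., :, :-1]
    dy_e = expert_depth[..., 1:, :] - expert_depth[..., :-1, :]
    mx = human_mask[..., :, 1:]
    my = human_mask[..., 1:, :]
    loss_x = ((dx_p - dx_e).abs() * mx).sum() / mx.sum().clamp(min=1)
    loss_y = ((dy_p - dy_e).abs() * my).sum() / my.sum().clamp(min=1)
    return loss_x + loss_y

def correspondence_loss(smpl_vertices, human_points):
    # Hypothetical sketch of the stage-two objective: a one-sided Chamfer
    # term pulling the SMPL surface toward the predicted human point cloud.
    # smpl_vertices: (V, 3); human_points: (N, 3); both in metric scene space.
    dists = torch.cdist(human_points, smpl_vertices)  # (N, V) pairwise distances
    return dists.min(dim=1).values.mean()

Under this reading, the synthetic stage would supervise pose and translation directly, while the real-data stage would rely on correspondence_loss against the human points produced by the model's own scene branch.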

Method

Framework overview: the network architecture of UniSH. UniSH takes a monocular video as input. The video frames are processed by the Reconstruction Branch to predict per-frame camera extrinsics E, confidence maps C, and pointmaps P; camera intrinsics K are derived from the pointmaps. Human crops from the video are fed into the Human Body Branch along with K to estimate global SMPL shape parameters β and per-frame pose parameters θᵢ. Features from both branches are processed by AlignNet to predict the global scene scale s and per-frame SMPL translations tᵢ for coherent scene and human alignment. The subscripts (e.g., in θ₁, θ₂, θ₃ and t₁, t₂, t₃) denote frame-specific parameters.
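
For readers who think in code, here is a minimal PyTorch-style sketch of this data flow. All module interfaces and tensor shapes are illustrative assumptions inferred from the caption above, and intrinsics_from_pointmaps is a hypothetical helper standing in for the caption's "K derived from the pointmaps" step; none of this is the released implementation.

import torch
import torch.nn as nn

def intrinsics_from_pointmaps(P):
    # Hypothetical helper: fit a per-frame pinhole focal length to a
    # camera-frame pointmap P of shape (T, H, W, 3), assuming the principal
    # point sits at the image center (a DUSt3R-style recipe; the paper does
    # not spell out its exact derivation of K).
    T, H, W, _ = P.shape
    u, v = torch.meshgrid(torch.arange(W), torch.arange(H), indexing="xy")
    u = (u - W / 2).to(P)  # pixel offsets from the assumed principal point
    v = (v - H / 2).to(P)
    x, y, z = P.unbind(dim=-1)
    # Least-squares fit of u*z ≈ f*x and v*z ≈ f*y over all pixels.
    num = (u * z * x + v * z * y).flatten(1).sum(dim=1)
    den = (x * x + y * y).flatten(1).sum(dim=1).clamp(min=1e-8)
    f = num / den
    K = torch.zeros(T, 3, 3, dtype=P.dtype, device=P.device)
    K[:, 0, 0] = f
    K[:, 1, 1] = f
    K[:, 0, 2] = W / 2
    K[:, 1, 2] = H / 2
    K[:, 2, 2] = 1.0
    return K

class UniSH(nn.Module):
    # Illustrative wiring of the three components in the caption; the actual
    # submodule architectures are not specified here.
    def __init__(self, recon_branch, human_branch, align_net):
        super().__init__()
        self.recon_branch = recon_branch  # video frames -> E, C, P, features
        self.human_branch = human_branch  # human crops + K -> beta, theta, features
        self.align_net = align_net        # fused features -> s, per-frame t

    def forward(self, frames, human_crops):
        # frames: (T, 3, H, W) monocular video; human_crops: (T, 3, h, w).
        E, C, P, scene_feat = self.recon_branch(frames)
        K = intrinsics_from_pointmaps(P)
        beta, theta, human_feat = self.human_branch(human_crops, K)
        s, t = self.align_net(scene_feat, human_feat)
        return dict(extrinsics=E, confidence=C, pointmaps=P, intrinsics=K,
                    betas=beta, thetas=theta, scene_scale=s, translations=t)

A typical call would be out = model(frames, crops), after which the SMPL bodies can be placed in the scaled scene using scene_scale and the per-frame translations; again, this sketches the caption's data flow rather than the authors' code.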



BibTeX

@misc{li2026unishunifyingscenehuman,
      title={UniSH: Unifying Scene and Human Reconstruction in a Feed-Forward Pass}, 
      author={Mengfei Li and Peng Li and Zheng Zhang and Jiahao Lu and Chengfeng Zhao and Wei Xue and Qifeng Liu and Sida Peng and Wenxiao Zhang and Wenhan Luo and Yuan Liu and Yike Guo},
      year={2026},
      eprint={2601.01222},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.01222}, 
}