Geometric Consistency Refinement for Single Image Novel View Synthesis via Test-Time Adaptation of Diffusion Models

Computer Vision Group, Chalmers University of Technology  

CVPRW EDGE25

Given a reference image and a relative pose, the single image novel view synthesis task is to generate an image of the scene from the target pose. The estimated pose for an image generated by the diffusion-based method ZeroNVS is shown in red, and our refined estimate in green. The reference pose is shown in blue and the target pose in black. As can be seen, the relative pose estimated from the image generated by the diffusion model can differ significantly from the target pose. Our method refines such images to better align with the target pose.

Abstract

Diffusion models for single image novel view synthesis (NVS) can generate highly realistic and plausible images, but their geometric consistency with the given relative poses is limited. The generated images often show significant errors with respect to the epipolar constraints induced by the target pose. In this paper we address this issue by proposing a methodology to improve the geometric correctness of images generated by a diffusion model for single image NVS. We formulate a loss function based on image matching and epipolar constraints, and optimize the starting noise of a diffusion sampling process such that the generated image is both realistic and fulfills the geometric constraints derived from the given target pose. Our method requires no training data or fine-tuning of the diffusion models, and we show that it can be applied to multiple state-of-the-art models for single image NVS. We evaluate the method on the MegaScenes dataset and show that geometric consistency is improved over the baseline models while the quality of the generated images is retained.

Our method for geometric consistency refinement (GC-Ref) modifies generated images such that matching points between the reference image and the generated image lie close to their corresponding epipolar lines. We show an example of a reference image together with a warp to the target pose, obtained via monocular depth estimation, that the generated image should align with. Considering matching points between the reference image and the generated image, we see that after our refinement the points lie closer to their epipolar lines. This is also visible in the histograms, which show the distributions of distances between matching points and their corresponding epipolar lines before and after refinement.
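As a concrete illustration of this criterion, here is a minimal PyTorch sketch (ours, not code from the paper) of the point-to-epipolar-line distance, assuming known camera intrinsics K_ref and K_tgt and a relative pose (R, t); all function and variable names are illustrative.

import torch

def skew(t):
    # Skew-symmetric matrix [t]_x such that [t]_x @ v = cross(t, v).
    S = torch.zeros(3, 3)
    S[0, 1], S[0, 2] = -t[2], t[1]
    S[1, 0], S[1, 2] = t[2], -t[0]
    S[2, 0], S[2, 1] = -t[1], t[0]
    return S

def fundamental_from_pose(K_ref, K_tgt, R, t):
    # Fundamental matrix mapping reference-image points to epipolar
    # lines in the target image, given the relative pose (R, t).
    E = skew(t) @ R  # essential matrix
    return torch.linalg.inv(K_tgt).T @ E @ torch.linalg.inv(K_ref)

def epipolar_distances(pts_ref, pts_gen, F):
    # Distance from each point in the generated image to the epipolar
    # line of its match in the reference image. Points are (N, 2).
    ones = torch.ones(pts_ref.shape[0], 1)
    x_ref = torch.cat([pts_ref, ones], dim=1)  # homogeneous, (N, 3)
    x_gen = torch.cat([pts_gen, ones], dim=1)
    lines = x_ref @ F.T                        # epipolar lines l = F x_ref
    return (x_gen * lines).sum(dim=1).abs() / lines[:, :2].norm(dim=1)

For geometrically consistent images these distances should be close to zero, which is what the histograms above visualize.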

Method Overview

Our method for geometric consistency refinement iteratively refines an image generated by a diffusion model for single image novel view synthesis so that it better fulfills geometric constraints. It builds on the fact that if the images are geometrically consistent given the target pose, then all matching points between the reference image and the generated image must lie on their corresponding epipolar lines. We optimize this criterion explicitly: we compute matching points between the reference image and the generated image with a differentiable matcher, and use the epipolar distances as a loss function whose gradient is used to optimize the starting noise of the diffusion process.
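The following is a minimal sketch of such a test-time refinement loop, reusing epipolar_distances from the sketch above. The callables sample_image (differentiable diffusion sampling from a starting noise) and match (a differentiable matcher returning corresponding points), as well as the parameter values, are hypothetical placeholders and not the actual implementation or API of the paper or of any specific model.

import torch

def refine_starting_noise(z_init, ref_image, F, sample_image, match,
                          steps=50, lr=0.01):
    # Optimize the starting noise z so that matches between the reference
    # image and the image generated from z have small epipolar distances.
    # `sample_image` and `match` are hypothetical differentiable callables.
    z = z_init.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        generated = sample_image(z)                     # image from noise z
        pts_ref, pts_gen = match(ref_image, generated)  # (N, 2) matched points
        loss = epipolar_distances(pts_ref, pts_gen, F).mean()
        optimizer.zero_grad()
        loss.backward()   # gradient of the epipolar loss w.r.t. z
        optimizer.step()
    return sample_image(z.detach())

Note that only the starting noise is optimized; the diffusion sampler itself is left unchanged, which is why the refined images remain realistic while better satisfying the epipolar constraints.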

BibTeX


@inproceedings{gc-ref,
  title={Geometric Consistency Refinement for Single Image Novel View Synthesis via Test-Time Adaptation of Diffusion Models},
  author={Bengtson, Josef and Nilsson, David and Kahl, Fredrik},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2025}
}