GTC Silicon Valley 2019, Session S9576: Extreme View Synthesis: Novel View Generation Under Large Camera Motion
Orazio Gallo (NVIDIA)
We'll discuss a deep-learning approach that takes a few images of a scene as input and synthesizes new views as seen from virtual cameras. This can be used to generate camera fly-by videos, or simply to render the scene from a new location. Despite recent progress in novel view synthesis, the quality of the resulting images degrades quickly when the virtual camera moves significantly away from the input cameras, because depth uncertainty and disocclusions grow with the camera motion. We'll describe how we cast this problem as one of depth probability estimation for the novel view, image synthesis, and conditional image refinement. We'll also cover traditional and deep-learning-based depth estimation, issues with warping-based novel view synthesis methods, and how depth information can be used to refine the quality of synthesized images.
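To make the warping step concrete, below is a minimal NumPy sketch of depth-based backward warping, the geometric core that warping-based view synthesis methods build on. The function name, the single-depth-per-pixel simplification, and the nearest-neighbor sampling are illustrative assumptions for this sketch, not the talk's actual method, which instead estimates a depth probability volume for the novel view and refines the warped result with a network.

```python
import numpy as np

def backward_warp(src_img, novel_depth, K, T_novel_to_src):
    """Warp a source image into a novel view given per-pixel depth.

    src_img:        (H, W, 3) source image
    novel_depth:    (H, W) depth of each novel-view pixel
    K:              (3, 3) shared pinhole intrinsics
    T_novel_to_src: (4, 4) rigid transform from the novel to the source camera
    (Illustrative sketch; names and the single-depth assumption are ours.)
    """
    H, W = novel_depth.shape

    # Novel-view pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project each novel-view pixel to 3D using its depth.
    rays = np.linalg.inv(K) @ pix
    pts = rays * novel_depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])

    # Transform the points into the source camera frame and project them.
    pts_src = (T_novel_to_src @ pts_h)[:3]
    proj = K @ pts_src
    uv = proj[:2] / np.clip(proj[2:], 1e-6, None)

    # Nearest-neighbor sampling of the source image (bilinear in practice).
    us = np.round(uv[0]).astype(int)
    vs = np.round(uv[1]).astype(int)
    valid = (us >= 0) & (us < W) & (vs >= 0) & (vs < H) & (pts_src[2] > 0)

    out = np.zeros_like(src_img)
    flat = out.reshape(-1, 3)
    flat[valid] = src_img[vs[valid], us[valid]]
    return out  # disoccluded pixels stay empty; a refinement network fills them
```

The sketch also makes the talk's two failure modes visible: any error in novel_depth shifts where pixels land (depth uncertainty), and regions visible in the novel view but not in the source image stay empty (disocclusions), which is what motivates the conditional refinement stage.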