Depth-guided Disocclusion Inpainting for Novel View Synthesis

Authors: 
T. Rittler
M. Nezveda
F. Seitner
M. Gelautz
Type: 
Talk with proceedings
Proceedings: 
Proceedings of the OAGM & ARW Joint Workshop Vision, Automation and Robotics
Publisher: 
Verlag der Technischen Universität Graz
Pages: 
160 - 164
Year: 
2017
ISBN: 
ISBN: 978-3-85125-524-9
Abstract: 
The generation of novel views is a crucial processing step in 3D content generation, since it gives control over the amount of depth impression on (auto-)stereoscopic devices and enables free-viewpoint video viewing. A critical problem in novel view generation is the occurrence of disocclusions caused by a change in the viewing direction. Thus, areas in the novel views may become visible that were either covered by foreground objects or were located outside the borders in the original views. In this paper, we propose a depth-guided inpainting approach which relies on efficient patch matching to complete disocclusions along foreground objects and close to the image borders. Our method adapts its patch sizes depending on the disocclusion sizes and incorporates the depth information by focusing on the background scene content for patch selection. A subjective evaluation based on a user study demonstrates the effectiveness of the proposed approach in terms of quality of the 3D viewing experience.
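The abstract describes a depth-guided, patch-based completion of disocclusions: candidate patches are restricted to background scene content by means of the depth map, and the best match fills the hole. A minimal sketch of this idea in Python (all names, parameters, and the median-depth background heuristic are illustrative assumptions, not the paper's actual method) could look like:

```python
import numpy as np

def inpaint_disocclusion(img, depth, mask, psz=3):
    """Sketch of depth-guided exemplar inpainting: fill each masked
    (disoccluded) pixel with the centre of the best-matching fully-known
    *background* patch. Assumes larger depth values mean farther content;
    a fixed patch size is used instead of the paper's adaptive sizes."""
    h, w = img.shape
    r = psz // 2
    out = img.astype(np.float64).copy()
    known = ~mask
    # Background = at least as far as the median known depth (heuristic).
    bg_thresh = np.median(depth[known])
    # Candidate source patches: fully known and centred on background.
    cands = [(y, x)
             for y in range(r, h - r)
             for x in range(r, w - r)
             if known[y-r:y+r+1, x-r:x+r+1].all()
             and depth[y, x] >= bg_thresh]
    # Greedy fill order: hole pixels with the most known neighbours first.
    holes = [(y, x) for y in range(r, h - r)
             for x in range(r, w - r) if mask[y, x]]
    while holes:
        holes.sort(key=lambda p:
                   -known[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1].sum())
        y, x = holes.pop(0)
        kwin = known[y-r:y+r+1, x-r:x+r+1]
        twin = out[y-r:y+r+1, x-r:x+r+1]
        best, best_cost = None, np.inf
        for cy, cx in cands:
            swin = out[cy-r:cy+r+1, cx-r:cx+r+1]
            # SSD over the already-known pixels of the target patch.
            cost = ((swin - twin)[kwin] ** 2).sum()
            if cost < best_cost:
                best, best_cost = (cy, cx), cost
        out[y, x] = out[best]
        known[y, x] = True
    return out
```

The depth constraint is what prevents the common artifact of foreground colors "bleeding" into the disoccluded region, which by construction should show background content.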
TU Focus: 
Information and Communication Technology
Reference: 

T. Rittler, M. Nezveda, F. Seitner, M. Gelautz:
"Depth-guided Disocclusion Inpainting for Novel View Synthesis";
Talk: OAGM & ARW Joint Workshop 2017, Wien; 10.05.2017 - 12.05.2017; in: "Proceedings of the OAGM & ARW Joint Workshop Vision, Automation and Robotics", Verlag der Technischen Universität Graz, (2017), ISBN: 978-3-85125-524-9; pp. 160 - 164.

Additional Information

Last changed: 
16.02.2018 09:55:59
Accepted: 
Accepted
TU Id: 
259941
Invited: 
Department Focus: 
Media Informatics and Visual Computing
Author List: 
T. Rittler, M. Nezveda, F. Seitner, M. Gelautz
Abstract German: