Journal of Graphics ›› 2021, Vol. 42 ›› Issue (3): 446-453. DOI: 10.11996/JG.j.2095-302X.2021030446

• Image Processing and Computer Vision •

Research on depth prediction algorithm based on multi-task model 

  

  1. School of Computer Science and Technology, Dalian University of Technology, Dalian Liaoning 116024, China
  • Online: 2021-06-30  Published: 2021-06-29
  • Supported by:
    National Natural Science Foundation of China (91748104, 61172007, 61632006, U1811463, U1908214, 61751203); National Key Research and Development Program (2018AAA0102003) 

Abstract: Image depth prediction is an active research topic in computer vision and robotics, and constructing a depth image is an important prerequisite for 3D reconstruction. Traditional methods either annotate depth manually from fixed reference points or predict depth via binocular positioning based on the camera's position. Such methods are time-consuming and labor-intensive, and are constrained by factors such as camera placement, positioning method, and distribution probability; the resulting difficulty in guaranteeing high accuracy hampers downstream tasks that rely on the predicted depth map, such as 3D reconstruction. Introducing a deep learning method built on multi-task modules can effectively address this problem. For scene images, a monocular depth-prediction network based on a multi-task model was proposed, which simultaneously trains and learns three tasks: depth prediction, semantic segmentation, and surface normal estimation. The network consists of a common feature extraction module and a multi-task feature fusion module, which preserve the independence of each task's features while extracting shared features, and maintain depth-prediction accuracy while improving the structure of each task's output.
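The shared-encoder/per-task-head structure described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual network: the linear "feature extractor", the three heads, the class count, and the loss weights are all hypothetical stand-ins for the common feature extraction module and the multi-task feature fusion module, chosen only to show how one forward pass can serve depth prediction, semantic segmentation, and surface normal estimation under a single joint objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the common feature extraction module: one linear
# map followed by ReLU (a real network would use a CNN encoder).
W_shared = rng.normal(size=(64, 128))

def shared_features(x):
    """Map inputs to a common feature space shared by all tasks."""
    return np.maximum(0.0, x @ W_shared)

# Hypothetical task-specific heads standing in for the per-task
# branches of the multi-task feature fusion module.
W_depth  = rng.normal(size=(128, 1))    # scalar depth per sample
W_seg    = rng.normal(size=(128, 10))   # 10 semantic classes (assumed)
W_normal = rng.normal(size=(128, 3))    # 3-D surface normal vector

def forward(x):
    f = shared_features(x)
    depth  = f @ W_depth
    seg    = f @ W_seg
    normal = f @ W_normal
    # Predicted normals are normalized to unit length.
    normal = normal / np.linalg.norm(normal, axis=-1, keepdims=True)
    return depth, seg, normal

def joint_loss(depth, seg, normal, depth_gt, seg_gt, normal_gt,
               w=(1.0, 0.5, 0.5)):
    """Joint objective: weighted sum of the three task losses."""
    l_depth = np.mean((depth - depth_gt) ** 2)          # regression (MSE)
    e = np.exp(seg - seg.max(axis=-1, keepdims=True))   # stable softmax
    p = e / e.sum(axis=-1, keepdims=True)
    l_seg = -np.mean(np.log(p[np.arange(len(seg_gt)), seg_gt] + 1e-9))
    l_normal = np.mean(1.0 - np.sum(normal * normal_gt, axis=-1))
    return w[0] * l_depth + w[1] * l_seg + w[2] * l_normal

x = rng.normal(size=(4, 64))
depth, seg, normal = forward(x)
print(depth.shape, seg.shape, normal.shape)  # (4, 1) (4, 10) (4, 3)
```

Because all three heads read the same shared features, gradients from every task flow back through `W_shared` during training, which is the mechanism by which the auxiliary tasks can regularize and improve depth prediction.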

Key words: computer vision, monocular depth prediction, multi-task model, semantic segmentation, surface normal estimation 

CLC Number: