
Journal of Graphics ›› 2020, Vol. 41 ›› Issue (6): 922-929.DOI: 10.11996/JG.j.2095-302X.2020060922


FANET: light field depth estimation with multi-channel information fusion 

  

  1. (School of Computer and Information, Hefei University of Technology, Hefei, Anhui 230009, China) 
  • Online: 2020-12-31    Published: 2021-01-08
  • Supported by:
    General Program of the National Natural Science Foundation of China (61876057, 61971177) 

Abstract: Compared with traditional two-dimensional images, light field images, which capture both the spatial and angular information of a scene in a single shot, contain richer information and offer clear advantages for depth estimation. To obtain high-quality scene depth from light field images, a feature assigning network (FANET), whose structure efficiently fuses multi-channel information, was designed for depth estimation based on the multi-angle representation of the light field. Building on a manually selected set of specific views, convolution kernels of different sizes were employed to cope with different baseline changes. In addition, a feature fusion module was constructed to exploit the multi-input nature of light field data, and a dual-channel network structure was used to integrate information from earlier and later layers, improving the learning efficiency and performance of the network. Experimental results on the new HCI dataset show that the network converges faster on the training set, achieves accurate depth estimation in non-Lambertian scenes, and on average outperforms other state-of-the-art methods in terms of the MSE metric.
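The sketch below is not the authors' released code; it is a minimal PyTorch illustration of the two ideas named in the abstract: per-view-stack branches with different kernel sizes to handle different baselines, and a dual-channel fusion stage that merges current features with features carried forward from earlier layers. All module names, channel counts, and the assumption of two 9-view stacks (e.g. the central row and column of a 9×9 light field) are hypothetical.

```python
# Minimal sketch of multi-scale view branches plus dual-channel feature fusion.
# Not the paper's implementation; layer sizes and the 9-view stacks are assumed.
import torch
import torch.nn as nn


class ViewBranch(nn.Module):
    """Extract features from one stack of views with a branch-specific kernel size."""
    def __init__(self, in_views, kernel_size, feat_ch=32):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv2d(in_views, feat_ch, kernel_size, padding=pad),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, kernel_size, padding=pad),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class FusionBlock(nn.Module):
    """Dual-channel fusion: one path refines the current features, the other
    carries the incoming (earlier-layer) features forward unchanged; a 1x1
    convolution merges the two."""
    def __init__(self, ch):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.merge = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        return self.merge(torch.cat([self.refine(x), x], dim=1))


class FANetSketch(nn.Module):
    """Toy end-to-end model: two view stacks -> multi-scale branches ->
    concatenation -> fusion blocks -> per-pixel disparity/depth map."""
    def __init__(self, views_per_stack=9, feat_ch=32, n_fusion=3):
        super().__init__()
        # A larger kernel for the stack whose views span a wider baseline (assumed).
        self.branch_h = ViewBranch(views_per_stack, kernel_size=3, feat_ch=feat_ch)
        self.branch_v = ViewBranch(views_per_stack, kernel_size=5, feat_ch=feat_ch)
        self.fuse_in = nn.Conv2d(2 * feat_ch, feat_ch, 1)
        self.fusion = nn.Sequential(*[FusionBlock(feat_ch) for _ in range(n_fusion)])
        self.head = nn.Conv2d(feat_ch, 1, 3, padding=1)

    def forward(self, stack_h, stack_v):
        f = torch.cat([self.branch_h(stack_h), self.branch_v(stack_v)], dim=1)
        f = self.fusion(self.fuse_in(f))
        return self.head(f)


if __name__ == "__main__":
    # Two 9-view stacks of 64x64 sub-aperture images (batch size 1).
    h = torch.randn(1, 9, 64, 64)
    v = torch.randn(1, 9, 64, 64)
    depth = FANetSketch()(h, v)
    print(depth.shape)  # torch.Size([1, 1, 64, 64])
```

In this reading, the skip path inside each FusionBlock plays the role of the "dual-channel" integration of front- and back-layer information described in the abstract, while the differing kernel sizes of the two branches stand in for the baseline-dependent convolutions.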

Key words: light field, depth estimation, convolutional neural network, feature fusion, attention, multi-view 
