
Journal of Graphics

• Computer Vision •

  • Supported by: National High-tech Research and Development Program of China (863 Program) (2012AA011903)

Interactive Dense 3D Reconstruction System of Small Scenes

  1. China Institute of Geological Environment Monitoring, Beijing 100081, China;  2. College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 210016, China
  • Online: 2019-04-30  Published: 2019-05-10



Abstract: In recent years, advances in GPU technology and the maturation of parallel algorithms have made real-time 3D reconstruction feasible. This paper presents an interactive dense 3D reconstruction system for small scenes that accurately estimates the camera's instantaneous pose using advanced mobile tracking technology. An improved multi-view depth-generation algorithm is proposed that computes scene depth in real time under GPU acceleration. Sub-pixel semi-global matching cost aggregation in the improved algorithm raises the accuracy of multi-view stereo matching, and combining it with a global optimization method yields accurate scene depth. Each depth map is converted into a distance field, and real-time depth fusion is achieved with a globally optimized histogram-compression fusion algorithm and a parallel primal-dual algorithm. Experimental results demonstrate the feasibility of the reconstruction system and the correctness of its algorithms.
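The fusion stage described in the abstract (converting each depth map into a truncated signed distance field and fusing it into a volume) can be illustrated with a minimal sketch. The sketch below is not the paper's GPU pipeline, which uses histogram compression and a parallel primal-dual solver; it is a plain weighted running-average TSDF update in NumPy, and every name in it (`fuse_depth_map`, the volume layout, the intrinsics convention) is a hypothetical illustration.

```python
import numpy as np

def fuse_depth_map(tsdf, weights, depth, K, cam_pose, voxel_size, trunc):
    """Fuse one depth map into a TSDF volume by weighted running average.

    tsdf, weights : (X, Y, Z) arrays holding the volume state (modified in place)
    depth         : (H, W) depth image in metres
    K             : 3x3 camera intrinsics
    cam_pose      : 4x4 world-to-camera transform
    voxel_size    : edge length of one voxel in metres
    trunc         : truncation distance of the signed distance field
    """
    H, W = depth.shape
    # World coordinates of every voxel centre (volume anchored at the origin).
    xs, ys, zs = np.meshgrid(*[np.arange(n) for n in tsdf.shape], indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame and project into the image.
    pts_h = np.c_[pts, np.ones(len(pts))]
    cam = (cam_pose @ pts_h.T).T[:, :3]
    z = cam[:, 2]
    uv = (K @ cam.T).T
    z_safe = np.maximum(z, 1e-9)  # avoid division by zero for z <= 0
    u = np.round(uv[:, 0] / z_safe).astype(np.int64)
    v = np.round(uv[:, 1] / z_safe).astype(np.int64)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    valid &= d > 0
    # Truncated signed distance, normalised to [-1, 1]: positive in front
    # of the observed surface, negative behind it.
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= sdf > -1.0  # discard voxels far behind the surface
    idx = np.where(valid)[0]
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    # Weighted running average: t <- (w*t + sdf) / (w + 1), then w <- w + 1.
    flat_t[idx] = (flat_w[idx] * flat_t[idx] + sdf[idx]) / (flat_w[idx] + 1.0)
    flat_w[idx] += 1.0
```

Because each update is an independent per-voxel average, the loop over voxels parallelises trivially, which is what makes the GPU formulation in the paper attractive.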

Key words: reconstruction, real-time, parallel algorithms, depth map, fusion