
Journal of Graphics ›› 2025, Vol. 46 ›› Issue (6): 1267-1273.DOI: 10.11996/JG.j.2095-302X.2025061267

• Image Processing and Computer Vision •

Frequency-aware hypergraph fusion for event-based semantic segmentation

YU Nannan1,2, MENG Zhengyu1,2, FANG Youjiang1,2, SUN Chuanyu1,2, YIN Xuefeng1,2, ZHANG Qiang1,2, WEI Xiaopeng1,2, YANG Xin1,2

  1. School of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning 116024, China
  2. Key Laboratory of Social Computing and Cognitive Intelligence of Ministry of Education, Dalian, Liaoning 116024, China
  • Received:2024-10-09 Accepted:2025-04-16 Online:2025-12-30 Published:2025-12-27
  • Contact: YANG Xin
  • About author:

    YU Nannan (1993-), PhD candidate. Her main research interest covers event-based computer vision. E-mail: 12009059@mail.dlut.edu.cn

  • Supported by:
    Science and Technology Innovation 2030 - "New Generation Artificial Intelligence" Major Project (2021ZD12400)

Abstract:

Semantic segmentation, a core task in autonomous driving perception, faces challenges in low-light and high-speed scenarios due to the limitations of conventional cameras. Event cameras, with their microsecond temporal resolution and high dynamic range, effectively mitigate motion blur and cope with extreme lighting conditions. However, their asynchronous, sparse event data lack texture and color information, and the uneven event distributions caused by relative motion between background and objects pose significant difficulties for semantic feature extraction. To address these issues, a multi-frequency hypergraph fusion method for event-based semantic segmentation was proposed. First, a frequency separation module decomposed event frames into multi-scale spatiotemporal features, distinguishing high-frequency motion edges from low-frequency structural information. A dynamic hypergraph construction algorithm then mapped these multi-frequency features to hypergraph nodes, and hypergraph convolution was used to capture long-range dependencies across frequencies. Finally, an attention mechanism adaptively fused the multi-frequency features to enhance inter-class discriminability. Experiments on the Carla-Semantic and DDD17-Semantic datasets demonstrated that the method achieved 88.21% MPA and 82.68% mIoU, outperforming existing methods and validating the effectiveness of the multi-frequency hypergraph model for event-based semantic understanding. This research provides a novel solution for robust environment perception with event cameras, particularly suited to challenging autonomous driving scenarios involving low light and rapid motion.
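The three-stage pipeline in the abstract (frequency separation, hypergraph convolution over frequency-specific nodes, attention-based fusion) can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the box-filter frequency split, the standard normalized hypergraph convolution, and the mean-score attention weights are all hypothetical stand-ins for the modules the abstract names.

```python
import numpy as np

def frequency_separation(event_frame, kernel=3):
    """Split an event frame into low-frequency structure (box blur)
    and high-frequency motion edges (residual). Hypothetical stand-in
    for the paper's frequency separation module."""
    pad = kernel // 2
    padded = np.pad(event_frame, pad, mode="edge")
    low = np.zeros_like(event_frame, dtype=float)
    h, w = event_frame.shape
    for i in range(h):
        for j in range(w):
            low[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    high = event_frame - low  # residual carries the high-frequency edges
    return low, high

def hypergraph_conv(X, H, W_edge=None):
    """One standard normalized hypergraph convolution step:
    node features X (N x d), incidence matrix H (N x E),
    computed as D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X."""
    if W_edge is None:
        W_edge = np.eye(H.shape[1])
    deg_v = np.maximum(H @ W_edge.sum(axis=1), 1e-9)   # weighted node degrees
    deg_e = np.maximum(H.sum(axis=0), 1e-9)            # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(deg_v))
    De = np.diag(1.0 / deg_e)
    return Dv @ H @ W_edge @ De @ H.T @ Dv @ X

def attention_fuse(feats):
    """Softmax-weighted fusion over a list of per-frequency feature maps;
    the mean-activation score is an illustrative choice."""
    scores = np.array([f.mean() for f in feats])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return sum(wi * f for wi, f in zip(w, feats))
```

In this sketch each frequency band would populate its own set of hypergraph nodes, `hypergraph_conv` mixes information along hyperedges that span bands, and `attention_fuse` produces the final fused representation fed to a segmentation head.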

Key words: semantic segmentation, hypergraph, attention mechanism, multi-frequency fusion, event camera
