
Journal of Graphics ›› 2025, Vol. 46 ›› Issue (6): 1304-1315. DOI: 10.11996/JG.j.2095-302X.2025061304

• Image Processing and Computer Vision •

Lightweight blind super-resolution network based on degradation separation

FAN Lexiang, MA Ji, ZHOU Dengwen

  1. School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China
  • Received: 2025-03-04 Accepted: 2025-06-09 Online: 2025-12-30 Published: 2025-12-27
  • Contact: ZHOU Dengwen
  • About author: FAN Lexiang (2001-), master student. Her main research interests cover computer vision and image super-resolution. E-mail: 120232227379@ncepu.edu.cn

Abstract:

The blind image super-resolution (SR) problem is concerned with recovering high-resolution (HR) images from low-resolution (LR) images with unknown degradation patterns. Currently, most existing methods primarily use explicit modeling to estimate blur kernels that characterize the image degradation process. However, real-world image degradation is complex and diverse, so explicit modeling cannot fully cover the many degradation types. Although implicit modeling is more effective in handling complex degradations, its model structures are often complex, with large parameter counts, leading to high computational costs and poor model stability. To address these issues, a lightweight blind SR reconstruction method named BDSSR was proposed, achieving efficient reconstruction through an implicit learning mechanism. The core framework of BDSSR consisted of a degradation factor eliminator (DFE) and a feature-fusion SR (FFSR) network. The DFE separated an image with complex degradations into a clear LR image containing only bicubic down-sampling and non-bicubic degradation features such as noise and blur. Specifically, the clear LR image served as a high-quality input to the SR process, reducing interference from noise and blur; the separated degradation features were fused into the SR network through feature-modulation coefficients that adaptively adjusted the network weights, guiding the model to focus on the fine-grained reconstruction of high-frequency details. The FFSR further employed a multi-scale convolution strategy, enhancing its ability to capture image content through efficient fusion of cross-scale features, thereby generating rich and realistic details and enabling robust modeling of complex degradations within a lightweight architecture. Experimental results demonstrated that BDSSR achieved superior performance on multiple standard datasets. On the Urban100 dataset, for example, at ×2 and ×4 magnification factors, BDSSR improved PSNR by 0.97 dB and 0.47 dB over DASR, respectively, with SSIM gains of 0.0122 and 0.0158. Moreover, its parameter count was only 1.7 M, approximately 30% of that of DASR. This method provided a new theoretical perspective and demonstrated broad application prospects in practical scenarios, contributing novel ideas and tools to the development of blind super-resolution technology.
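To make the described pipeline concrete, the following is a minimal Python/PyTorch sketch of how a DFE-plus-FFSR design of this kind could be wired together: a degradation-eliminator module splits the degraded LR input into a clean LR estimate and a global degradation code, and the SR branch fuses multi-scale convolution features that are re-weighted by modulation coefficients derived from that code. All module names, layer choices, channel sizes, and the exact modulation scheme are assumptions for illustration only; this is not the authors' implementation of BDSSR.

# Hypothetical sketch of a BDSSR-style pipeline as described in the abstract.
# Module names, channel sizes, and the modulation scheme are assumptions.
import torch
import torch.nn as nn


class DegradationFactorEliminator(nn.Module):
    """Splits a degraded LR image into a clean LR estimate and a degradation code."""
    def __init__(self, ch=64, code_dim=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_clean = nn.Conv2d(ch, 3, 3, padding=1)   # LR image with bicubic-only degradation
        self.to_code = nn.Sequential(                    # global descriptor of noise/blur factors
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, code_dim)
        )

    def forward(self, lr):
        f = self.body(lr)
        return self.to_clean(f), self.to_code(f)


class ModulatedMultiScaleBlock(nn.Module):
    """Cross-scale feature fusion whose output is re-weighted by the degradation code."""
    def __init__(self, ch=64, code_dim=64):
        super().__init__()
        self.conv3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv5 = nn.Conv2d(ch, ch, 5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.mod = nn.Linear(code_dim, ch)               # per-channel modulation coefficients

    def forward(self, x, code):
        y = self.fuse(torch.cat([self.conv3(x), self.conv5(x)], dim=1))
        gamma = torch.sigmoid(self.mod(code)).unsqueeze(-1).unsqueeze(-1)
        return x + y * gamma                             # residual path keeps training stable


class FFSR(nn.Module):
    """Feature-fusion SR branch operating on the clean LR estimate."""
    def __init__(self, scale=4, ch=64, code_dim=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.blocks = nn.ModuleList(
            [ModulatedMultiScaleBlock(ch, code_dim) for _ in range(n_blocks)]
        )
        self.tail = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1), nn.PixelShuffle(scale)
        )

    def forward(self, clean_lr, code):
        x = self.head(clean_lr)
        for blk in self.blocks:
            x = blk(x, code)
        return self.tail(x)


class BDSSRSketch(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.dfe = DegradationFactorEliminator()
        self.ffsr = FFSR(scale=scale)

    def forward(self, lr):
        clean_lr, code = self.dfe(lr)      # separate degradation factors from image content
        return self.ffsr(clean_lr, code)   # reconstruct HR guided by the degradation code


if __name__ == "__main__":
    sr = BDSSRSketch(scale=4)(torch.randn(1, 3, 48, 48))
    print(sr.shape)  # torch.Size([1, 3, 192, 192])

In this sketch the degradation code modulates each fusion block channel-wise, which mirrors the abstract's idea of feature-modulation coefficients adaptively adjusting network weights, while the 3×3/5×5 branches stand in for the multi-scale convolution strategy; the real BDSSR may use different operators and scales.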

Key words: blind super-resolution, degradation factor elimination, feature fusion, bicubic downsampling, deep learning
