Real-Time Attentive Dilated U-Net for Extremely Dark Image Enhancement

Authors:

Huang Junjian¹, Ren Hao¹, Liu Shulin², Liu Yong², Lv Chuanlu², Lu Jiawen², Xie Changyong², Lu Hong¹

Affiliations:

1. School of Computer Science, Fudan University, Shanghai, China

2. Naval Medical University, Shanghai, China

Abstract

Images taken under low-light conditions suffer from poor visibility, color distortion, and graininess, all of which degrade image quality and hamper the performance of downstream vision tasks such as object detection and instance segmentation in autonomous driving, making low-light enhancement an indispensable component of high-level vision systems. Low-light enhancement aims to mitigate these issues and has garnered extensive attention and research over several decades. The primary challenge in low-light image enhancement arises from the low signal-to-noise ratio caused by insufficient lighting. This challenge becomes even more pronounced in near-zero lux conditions, where noise overwhelms the available image information. Both traditional image signal processing pipelines and conventional low-light image enhancement methods struggle in such scenarios. Recently, deep neural networks have been used to address this challenge. These networks take unmodified RAW images as input and produce enhanced sRGB images, forming a deep-learning-based image signal processing pipeline. However, most of these networks are computationally expensive and thus far from practical use. In this article, we propose a lightweight model called attentive dilated U-Net (ADU-Net) to tackle this issue. Our model incorporates several innovative designs, including an asymmetric U-shaped architecture, dilated residual modules for feature extraction, and attentive fusion modules for feature fusion. The dilated residual modules provide strong representational capability, whereas the attentive fusion modules effectively leverage low-level texture information and high-level semantic information within the network. Both modules employ a lightweight design yet offer significant performance gains. Extensive experiments demonstrate that our method is highly effective, achieving an excellent balance between image quality and computational complexity: it takes less than 4 ms for a high-definition 4K image on a single GTX 1080Ti GPU while maintaining competitive visual quality. Furthermore, our method exhibits pleasing scalability and generalizability, highlighting its potential for widespread applicability.
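
To make the two building blocks named in the abstract concrete, below is a minimal PyTorch sketch of a dilated residual module and an attentive fusion module. The channel counts, dilation rate, and the squeeze-and-excitation-style channel gate are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the two modules named in the abstract.
# Layer widths, the dilation rate, and the gating design are assumptions.
import torch
import torch.nn as nn


class DilatedResidualModule(nn.Module):
    """Residual block whose convolutions use dilation to enlarge the
    receptive field without adding parameters (assumed dilation rate)."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
        )

    def forward(self, x):
        # Identity shortcut keeps the block lightweight and easy to train.
        return x + self.body(x)


class AttentiveFusionModule(nn.Module):
    """Fuses a low-level skip feature with a high-level decoder feature.
    A channel gate (squeeze-and-excitation style, an assumption) weighs
    texture information against semantic information."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, low, high):
        fused = torch.cat([low, high], dim=1)
        # Reweight the concatenated channels, then project back down.
        return self.project(fused * self.gate(fused))


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)
    print(DilatedResidualModule(32)(x).shape)     # torch.Size([1, 32, 64, 64])
    print(AttentiveFusionModule(32)(x, x).shape)  # torch.Size([1, 32, 64, 64])
```

In a U-shaped network of this kind, dilation widens the encoder's receptive field at no extra parameter cost, while the gated fusion lets the decoder balance low-level texture against high-level semantics, consistent with the lightweight design the abstract emphasizes.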

Funder

Intelligent Enhancement Technology for Night Scene Research Fund

Publisher

Association for Computing Machinery (ACM)
