On-line Access: 2022-12-06
Received: 2022-08-11
Revision Accepted: 2022-11-28
Zhixiong HUANG, Jinjiang LI, Zhen HUA, Linwei FAN. Filter-cluster attention based recursive network for low-light enhancement[J]. Frontiers of Information Technology & Electronic Engineering, 2022.
@article{Huang2022FCA,
title="Filter-cluster attention based recursive network for low-light enhancement",
author="Zhixiong HUANG, Jinjiang LI, Zhen HUA, Linwei FAN",
journal="Frontiers of Information Technology & Electronic Engineering",
pages="",
year="2022",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200344"
}
%0 Journal Article
%T Filter-cluster attention based recursive network for low-light enhancement
%A Zhixiong HUANG
%A Jinjiang LI
%A Zhen HUA
%A Linwei FAN
%J Frontiers of Information Technology & Electronic Engineering
%@ 2095-9184
%D 2022
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2200344
TY - JOUR
T1 - Filter-cluster attention based recursive network for low-light enhancement
A1 - Zhixiong HUANG
A1 - Jinjiang LI
A1 - Zhen HUA
A1 - Linwei FAN
JO - Frontiers of Information Technology & Electronic Engineering
SN - 2095-9184
Y1 - 2022
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200344
ER -
Abstract: Images recorded in low-light environments suffer from poor quality, which limits their use in downstream applications. To improve the visibility of low-light images, this paper proposes a recursive network based on filter-cluster attention (FCA), whose main body consists of three units: difference concern, gate recurrent, and iterative residual. The network performs multi-stage recursive learning on low-light images and thereby extracts deeper feature information. To compute more accurate dependence, we design a novel FCA that focuses on the saliency of feature channels. FCA and self-attention are used to highlight the low-light regions and the important channels of the features. We also design a dense connection pyramid (DenCP) to extract the color features of the inverted low-light image, compensating for the loss of the image's color information. Experimental results on six public datasets show that our method achieves outstanding performance in both subjective and quantitative comparisons.
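The abstract describes FCA only at a high level, so its exact formulation is not reproduced here. As a minimal, hypothetical sketch of the underlying idea, weighting feature channels by a saliency score so that important channels are emphasized, one could write the following; the function name `channel_attention`, the global-average-pooling descriptor, and the softmax weighting are illustrative assumptions, not the paper's method:

```python
import numpy as np

def channel_attention(feat, temperature=1.0):
    """Rescale feature channels by a softmax saliency weight.

    feat: array of shape (C, H, W); each channel's global average
    response serves as a simple saliency descriptor.
    """
    c = feat.shape[0]
    # Global average pooling: one scalar descriptor per channel.
    desc = feat.reshape(c, -1).mean(axis=1)
    # Softmax over channels turns descriptors into attention weights.
    e = np.exp((desc - desc.max()) / temperature)
    weights = e / e.sum()
    # Broadcast each per-channel weight over the spatial dimensions.
    return feat * weights[:, None, None], weights

# Toy demo: 8 random feature channels of size 16x16.
feat = np.random.rand(8, 16, 16).astype(np.float32)
out, w = channel_attention(feat)
print(out.shape, float(w.sum()))
```

A real channel-attention module would typically learn the descriptor-to-weight mapping (e.g. with small fully connected layers) rather than using a fixed softmax, but the rescaling step is the same.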