
CLC number: TP391

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2021-03-28


Citation formats: BibTeX, RefMan, EndNote, GB/T 7714

 ORCID:

Dan ZHANG

https://orcid.org/0000-0002-5033-8128

Lei ZHAO

https://orcid.org/0000-0003-4791-454X


Frontiers of Information Technology & Electronic Engineering  2022 Vol.23 No.2 P.220-233

http://doi.org/10.1631/FITEE.2000353


Dual-constraint burst image denoising method


Author(s):  Dan ZHANG, Lei ZHAO, Duanqing XU, Dongming LU

Affiliation(s):  Network and Media Laboratory, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   cszhd@zju.edu.cn, cszhl@zju.edu.cn, xdq@zju.edu.cn, ldm@zju.edu.cn

Key Words:  Image denoising, Burst image denoising, Deep learning


Dan ZHANG, Lei ZHAO, Duanqing XU, Dongming LU. Dual-constraint burst image denoising method[J]. Frontiers of Information Technology & Electronic Engineering, 2022, 23(2): 220-233.

@article{Zhang2022FITEE,
title="Dual-constraint burst image denoising method",
author="Dan ZHANG and Lei ZHAO and Duanqing XU and Dongming LU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="23",
number="2",
pages="220-233",
year="2022",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2000353"
}

%0 Journal Article
%T Dual-constraint burst image denoising method
%A Dan ZHANG
%A Lei ZHAO
%A Duanqing XU
%A Dongming LU
%J Frontiers of Information Technology & Electronic Engineering
%V 23
%N 2
%P 220-233
%@ 2095-9184
%D 2022
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2000353

TY - JOUR
T1 - Dual-constraint burst image denoising method
A1 - Dan ZHANG
A1 - Lei ZHAO
A1 - Duanqing XU
A1 - Dongming LU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 23
IS - 2
SP - 220
EP - 233
SN - 2095-9184
Y1 - 2022
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2000353
ER -


Abstract: 
Deep learning has proven to be an effective mechanism for computer vision tasks, especially image denoising and burst image denoising. In this paper, we focus on the burst image denoising problem and aim to generate a single clean image from a burst of noisy images. We propose to combine the power of block matching and 3D filtering (BM3D) and a convolutional neural network (CNN) for burst image denoising. In particular, we design a CNN with a divide-and-conquer strategy. First, we employ BM3D to preprocess the noisy burst images. Then, the preprocessed images and the noisy images are fed separately into two parallel CNN branches, which produce somewhat different results. Finally, we use a light CNN block to combine the two outputs. We further improve performance by optimizing the two branches under two different constraints, a signal constraint and a noise constraint, so that one branch maps the clean signal and the other maps the noise distribution. In addition, we adopt block matching in the network to avoid frame misalignment. Experimental results on synthetic and real noisy images show that our algorithm is competitive with other algorithms.
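
The two-branch design described above can be pictured with a short sketch. The following is a minimal PyTorch sketch and not the authors' exact architecture: the layer counts, channel widths, fusion block, the assumption that the first frame is the reference, and the choice of which branch receives the BM3D-prefiltered burst are all illustrative assumptions.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, n_layers=3):
    """Stack of 3x3 conv + ReLU layers; the depth is an arbitrary choice for this sketch."""
    layers, ch = [], in_ch
    for _ in range(n_layers):
        layers += [nn.Conv2d(ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        ch = out_ch
    return nn.Sequential(*layers)


class DualBranchDenoiser(nn.Module):
    """Two parallel CNN branches plus a light fusion block.

    noisy:       burst of noisy RGB frames stacked along the channel axis, shape (B, T*3, H, W)
    prefiltered: the same burst after BM3D preprocessing, same shape
    """

    def __init__(self, burst_channels, mid_ch=64, out_ch=3):
        super().__init__()
        # Branch trained with the "signal" constraint: maps toward the clean image.
        self.signal_branch = nn.Sequential(
            conv_block(burst_channels, mid_ch),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
        )
        # Branch trained with the "noise" constraint: maps toward the noise distribution.
        self.noise_branch = nn.Sequential(
            conv_block(burst_channels, mid_ch),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
        )
        # Light CNN block that merges the two intermediate estimates into one output.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * out_ch, mid_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, noisy, prefiltered):
        clean_est = self.signal_branch(prefiltered)   # estimate of the clean signal
        noise_est = self.noise_branch(noisy)          # estimate of the noise
        ref_frame = noisy[:, :3]                      # assume the first frame is the reference
        denoised_from_noise = ref_frame - noise_est   # reference frame minus predicted noise
        fused = self.fusion(torch.cat([clean_est, denoised_from_noise], dim=1))
        return fused, clean_est, noise_est
```

For an 8-frame RGB burst, for example, the sketch would be instantiated as DualBranchDenoiser(burst_channels=8 * 3) and called with inputs of shape (B, 24, H, W).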

Dual-constraint burst image denoising method

Dan ZHANG, Lei ZHAO, Duanqing XU, Dongming LU
Network and Media Laboratory, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
Abstract: Deep learning has been applied with great success in computer vision and has driven rapid progress in image denoising and burst image denoising. Targeting the burst image denoising problem, this paper proposes a method for recovering a clean image from a burst of noisy frames. The method combines the BM3D (block-matching and 3D filtering) algorithm with a convolutional neural network (CNN) model to accomplish burst denoising, and the CNN model is designed following a divide-and-conquer strategy. First, the noisy burst frames are preprocessed with BM3D. Then, the preprocessed images and the original noisy images are fed into two parallel branches of the CNN model. Finally, a light CNN block fuses the outputs of the two branches into the final image estimate. Unlike previous studies, we assign different constraint functions to the two parallel branches, namely a signal constraint and a noise constraint, to strengthen the model's ability to extract different features. In addition, a block-matching strategy is introduced to address frame misalignment. Experimental results on synthetic and real noisy images show that the proposed algorithm is competitive with other algorithms.
Keywords: Image denoising; Burst image denoising; Deep learning
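To make the dual constraints concrete, the following is a hedged sketch of a training objective that supervises the two branches separately, one against the clean target and one against the noise residual. The L1 losses and the weighting factors are assumptions for illustration and may differ from the paper's exact loss functions.

```python
import torch.nn.functional as F


def dual_constraint_loss(fused, clean_est, noise_est, clean_target, noisy_ref,
                         w_signal=1.0, w_noise=1.0, w_final=1.0):
    """Signal constraint + noise constraint + supervision of the fused output (illustrative)."""
    signal_loss = F.l1_loss(clean_est, clean_target)             # branch that maps the clean signal
    noise_loss = F.l1_loss(noise_est, noisy_ref - clean_target)  # branch that maps the noise
    final_loss = F.l1_loss(fused, clean_target)                  # the combined estimate
    return w_final * final_loss + w_signal * signal_loss + w_noise * noise_loss
```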


Reference

[1]Aharon M, Elad M, Bruckstein A, 2006. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process, 54(11):4311-4322. doi: 10.1109/TSP.2006.881199

[2]Ahn B, Cho NI, 2017. Block-matching convolutional neural network for image denoising. https://arxiv.org/abs/1704.00524

[3]Buades A, Coll B, Morel JM, 2005. A non-local algorithm for image denoising. IEEE Computer Society Conf on Computer Vision and Pattern Recognition, p.60-65. doi: 10.1109/CVPR.2005.38

[4]Burger HC, Schuler CJ, Harmeling S, 2012. Image denoising: can plain neural networks compete with BM3D? IEEE Conf on Computer Vision and Pattern Recognition, p.2392-2399. doi: 10.1109/CVPR.2012.6247952

[5]Chambolle A, 2004. An algorithm for total variation minimization and applications. J Math Imag Vis, 20(1-2):89-97. doi: 10.1023/B:JMIV.0000011325.36760.1e

[6]Dabov K, Foi A, Katkovnik V, et al., 2007. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans Image Process, 16(8):2080-2095. doi: 10.1109/TIP.2007.901238

[7]Divakar N, Babu RV, 2017. Image denoising via CNNs: an adversarial approach. Proc IEEE Conf on Computer Vision and Pattern Recognition Workshops, p.1076-1083. doi: 10.1109/CVPRW.2017.145

[8]Godard C, Matzen K, Uyttendaele M, 2018. Deep burst denoising. Proc European Conf on Computer Vision, p.560-577. doi: 10.1007/978-3-030-01267-0_33

[9]Krull A, Buchholz TO, Jug F, 2019. Noise2Void—learning denoising from single noisy images. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.2124-2132. doi: 10.1109/CVPR.2019.00223

[10]LeCun Y, Bottou L, Bengio Y, et al., 1998. Gradient-based learning applied to document recognition. Proc IEEE, 86(11):2278-2324. doi: 10.1109/5.726791

[11]Lehtinen J, Munkberg J, Hasselgren J, et al., 2018. Noise2Noise: learning image restoration without clean data. https://arxiv.org/abs/1803.04189

[12]Lempitsky V, Vedaldi A, Ulyanov D, 2018. Deep image prior. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.9446-9454. doi: 10.1109/CVPR.2018.00984

[13]Liu ZW, Yuan L, Tang XO, et al., 2014. Fast burst images denoising. ACM Trans Graph, 33(6):Article 232. doi: 10.1145/2661229.2661277

[14]Mao XJ, Shen CH, Yang YB, 2016. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. https://arxiv.org/abs/1603.09056v2

[15]Mildenhall B, Barron JT, Chen JW, et al., 2018. Burst denoising with kernel prediction networks. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.2502-2510. doi: 10.1109/CVPR.2018.00265

[16]Mosseri I, Zontak M, Irani M, 2013. Combining the power of internal and external denoising. IEEE Int Conf on Computational Photography, p.1-9. doi: 10.1109/ICCPhot.2013.6528298

[17]Perona P, Malik J, 1990. Scale-space and edge detection using anisotropic diffusion. IEEE Trans Patt Anal Mach Intell, 12(7):629-639. doi: 10.1109/34.56205

[18]Simonyan K, Zisserman A, 2014. Very deep convolutional networks for large-scale image recognition. https://arxiv.org/abs/1409.1556v4

[19]Tassano M, Delon J, Veit T, 2019. DVDNET: a fast network for deep video denoising. IEEE Int Conf on Image Processing, p.1805-1809. doi: 10.1109/ICIP.2019.8803136

[20]Tomasi C, Manduchi R, 1998. Bilateral filtering for gray and color images. Sixth Int Conf on Computer Vision, p.839-846. doi: 10.1109/ICCV.1998.710815

[21]Vincent P, Larochelle H, Lajoie I, et al., 2010. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. J Mach Learn Res, 11:3371-3408.

[22]Xu J, Zhang L, Zuo WM, et al., 2015. Patch group based nonlocal self-similarity prior learning for image denoising. Proc IEEE Int Conf on Computer Vision, p.244-252. doi: 10.1109/ICCV.2015.36

[23]Yang D, Sun J, 2018. BM3D-Net: a convolutional neural network for transform-domain collaborative filtering. IEEE Signal Process Lett, 25(1):55-59. doi: 10.1109/LSP.2017.2768660

[24]Zhang K, Zuo WM, Chen YJ, et al., 2017. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process, 26(7):3142-3155. doi: 10.1109/TIP.2017.2662206

[25]Zhang K, Zuo WM, Zhang L, 2018. FFDNet: toward a fast and flexible solution for CNN-based image denoising. IEEE Trans Image Process, 27(9):4608-4622. doi: 10.1109/TIP.2018.2839891
