
CLC number: TP39

On-line Access: 2023-12-04

Received: 2022-11-02

Revision Accepted: 2023-12-05

Crosschecked: 2023-04-24


 ORCID:

Nandhini CHOCKALINGAM

https://orcid.org/0000-0003-4767-9682

Brindha MURUGAN

https://orcid.org/0000-0002-3952-0674


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.11 P.1601-1615

http://doi.org/10.1631/FITEE.2200534


A multimodal dense convolution network for blind image quality assessment


Author(s):  Nandhini CHOCKALINGAM, Brindha MURUGAN

Affiliation(s):  Department of Computer Science and Engineering, National Institute of Technology, Tiruchirappalli 620015, India

Corresponding email(s):   cn.nandhini@gmail.com, brindham@nitt.edu

Key Words:  No-reference image quality assessment (NR-IQA), Blind image quality assessment, Multimodal dense convolution network (MDSC-Net), Deep learning, Visual quality, Perceptual quality


Nandhini CHOCKALINGAM, Brindha MURUGAN. A multimodal dense convolution network for blind image quality assessment[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(11): 1601-1615.

@article{title="A multimodal dense convolution network for blind image quality assessment",
author="Nandhini CHOCKALINGAM, Brindha MURUGAN",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="11",
pages="1601-1615",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200534"
}

%0 Journal Article
%T A multimodal dense convolution network for blind image quality assessment
%A Nandhini CHOCKALINGAM
%A Brindha MURUGAN
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 11
%P 1601-1615
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200534

TY - JOUR
T1 - A multimodal dense convolution network for blind image quality assessment
A1 - Nandhini CHOCKALINGAM
A1 - Brindha MURUGAN
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 11
SP - 1601
EP - 1615
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200534
ER -


Abstract: 
Technological advancements continue to expand the communications industry's potential. Images, an important component in strengthening communication, are widely available. Therefore, image quality assessment (IQA) is critical to improving the content delivered to end users. Convolutional neural networks (CNNs) used for IQA face two common challenges. First, these methods often fail to provide the best representation of the image. Second, the models have a large number of parameters, which easily leads to overfitting. To address these issues, a dense convolution network (DSC-Net), a deep learning model with fewer parameters, is proposed for no-reference image quality assessment (NR-IQA). Moreover, the use of multimodal data has been shown to improve the performance of deep learning applications. Accordingly, the multimodal dense convolution network (MDSC-Net) fuses texture features extracted with the gray-level co-occurrence matrix (GLCM) method and spatial features extracted with DSC-Net, and predicts image quality. Results on the benchmark synthetic datasets LIVE, TID2013, and KADID-10k demonstrate that MDSC-Net outperforms state-of-the-art methods on the NR-IQA task.
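As an illustration of the texture branch described in the abstract, GLCM-based features can be computed from a grayscale image as in the minimal NumPy sketch below. The 8-level quantization, horizontal pixel offset, and the particular feature set (contrast, homogeneity, energy) are illustrative assumptions, not the paper's exact GLCM configuration.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix (GLCM) texture features.

    img: 2-D array of gray values in [0, 255].
    Returns (contrast, homogeneity, energy) computed from a
    normalized, symmetric GLCM at the given (row, col) pixel offset.
    """
    # Quantize gray levels so the co-occurrence matrix stays small.
    q = (np.asarray(img, dtype=np.float64) / 256.0 * levels).astype(int)
    dr, dc = offset
    rows, cols = q.shape
    glcm = np.zeros((levels, levels), dtype=np.float64)
    # Count co-occurring gray-level pairs at the chosen offset.
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    glcm += glcm.T        # make the matrix symmetric
    glcm /= glcm.sum()    # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    homogeneity = np.sum(glcm / (1.0 + (i - j) ** 2))
    energy = np.sqrt(np.sum(glcm ** 2))
    return contrast, homogeneity, energy
```

A uniform image yields zero contrast and maximal homogeneity, while a checkerboard yields high contrast, which is the kind of texture statistic the fusion network would consume alongside the CNN's spatial features.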



