A multimodal dense convolution network for blind image quality assessment

Nandhini CHOCKALINGAM, Brindha MURUGAN

Front. Inform. Technol. Electron. Eng., 2023, Vol. 24, Issue 11: 1601-1615. DOI: 10.1631/FITEE.2200534
Original Article


Abstract

Technological advancements continue to expand the communications industry's potential. Images, an important component in strengthening communication, are widely available. Therefore, image quality assessment (IQA) is critical for improving the content delivered to end users. Convolutional neural networks (CNNs) used in IQA face two common challenges: they often fail to learn the best representation of the image, and their large number of parameters makes them prone to overfitting. To address these issues, a dense convolution network (DSC-Net), a deep learning model with fewer parameters, is proposed for no-reference image quality assessment (NR-IQA). Moreover, the use of multimodal data has been shown to improve the performance of deep learning applications. Accordingly, the multimodal dense convolution network (MDSC-Net) fuses texture features extracted with the gray-level co-occurrence matrix (GLCM) method and spatial features extracted with DSC-Net to predict image quality. Experiments on the benchmark synthetic datasets LIVE, TID2013, and KADID-10k demonstrate that MDSC-Net outperforms state-of-the-art methods on the NR-IQA task.
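The paper's GLCM texture branch is not reproduced here; as an illustration only, a minimal NumPy sketch of GLCM-based texture features (two standard Haralick-style statistics, contrast and energy, for a single pixel offset) might look like the following. The function name, quantization scheme, and parameters are our assumptions, not the authors' implementation.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Compute a normalized gray-level co-occurrence matrix (GLCM)
    for one pixel offset and return (contrast, energy).

    img: 2-D uint8 array; values are quantized to `levels` gray levels.
    """
    # Quantize 0..255 gray values down to a small number of levels.
    q = (img.astype(np.uint32) * levels // 256).astype(np.int64)
    dr, dc = offset
    rows, cols = q.shape
    # Pair each pixel with its neighbor at the given (row, col) offset.
    a = q[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    b = q[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    # Count co-occurrences of gray-level pairs, then normalize to a
    # joint probability distribution.
    glcm = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.indices((levels, levels))
    contrast = float(((i - j) ** 2 * glcm).sum())  # local intensity variation
    energy = float((glcm ** 2).sum())              # textural uniformity
    return contrast, energy
```

A perfectly uniform image yields zero contrast and maximal energy (1.0), while a high-frequency pattern such as a checkerboard yields high contrast; features like these, computed over several offsets, would form the texture modality that is fused with the CNN's spatial features.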

Keywords

No-reference image quality assessment (NR-IQA) / Blind image quality assessment / Multimodal dense convolution network (MDSC-Net) / Deep learning / Visual quality / Perceptual quality

Cite this article

Nandhini CHOCKALINGAM, Brindha MURUGAN. A multimodal dense convolution network for blind image quality assessment. Front. Inform. Technol. Electron. Eng, 2023, 24(11): 1601‒1615 https://doi.org/10.1631/FITEE.2200534

RIGHTS & PERMISSIONS

© 2023 Zhejiang University Press