A multimodal dense convolution network for blind image quality assessment
Nandhini CHOCKALINGAM, Brindha MURUGAN
Technological advancements continue to expand the communications industry's potential. Images, an important component in strengthening communication, are widely available, so image quality assessment (IQA) is critical for improving the content delivered to end users. Convolutional neural networks (CNNs) used in IQA face two common challenges. First, these methods often fail to learn an optimal representation of the image. Second, the models have a large number of parameters, which easily leads to overfitting. To address these issues, the dense convolution network (DSC-Net), a deep learning model with fewer parameters, is proposed for no-reference image quality assessment (NR-IQA). Moreover, multimodal data have been shown to improve the performance of deep learning applications. Accordingly, the multimodal dense convolution network (MDSC-Net) fuses texture features extracted with the gray-level co-occurrence matrix (GLCM) method and spatial features extracted with DSC-Net to predict image quality. Results on the benchmark synthetic datasets LIVE, TID2013, and KADID-10k demonstrate that MDSC-Net achieves strong performance relative to state-of-the-art NR-IQA methods.
No-reference image quality assessment (NR-IQA) / Blind image quality assessment / Multimodal dense convolution network (MDSC-Net) / Deep learning / Visual quality / Perceptual quality
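The GLCM texture branch described in the abstract can be illustrated with a minimal sketch. The function below is a hypothetical, self-contained NumPy implementation (not the authors' code): it quantizes a grayscale image, builds a normalized co-occurrence matrix for one pixel offset, and derives four classic Haralick-style statistics (contrast, energy, homogeneity, correlation) of the kind typically concatenated with CNN spatial features before the quality-prediction head. The bin count and offset are illustrative choices.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Normalized GLCM for one offset plus four texture statistics.

    Illustrative sketch: `levels` and the (dx, dy) offset are
    assumptions, not parameters taken from the paper.
    """
    # Quantize 8-bit grayscale values into `levels` gray bins.
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape

    # Count co-occurrences of (pixel, neighbor at offset (dy, dx)).
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1.0
    glcm /= glcm.sum()  # normalize to a joint probability table

    # Haralick-style statistics over the (i, j) gray-level grid.
    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * glcm), np.sum(j * glcm)
    sd_i = np.sqrt(np.sum(((i - mu_i) ** 2) * glcm))
    sd_j = np.sqrt(np.sum(((j - mu_j) ** 2) * glcm))
    correlation = np.sum((i - mu_i) * (j - mu_j) * glcm) / (sd_i * sd_j + 1e-12)

    return np.array([contrast, energy, homogeneity, correlation])
```

In a multimodal pipeline of this shape, the resulting texture vector would simply be concatenated with the flattened spatial features from the CNN branch before the final regression layers.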