Image compressed sensing reconstruction network based on self-attention mechanism
Yuhong LIU , Xiaoyan LIU , Manyin CHEN
Journal of Measurement Science and Instrumentation ›› 2025, Vol. 16 ›› Issue (4) : 537 -546.
For image compressed sensing reconstruction, most algorithms reconstruct image blocks one by one and stack many convolutional layers, which typically suffer from obvious blocking artifacts, high computational complexity, and long reconstruction time. To address these problems, an image compressed sensing reconstruction network based on the self-attention mechanism (SAMNet) was proposed. For compressed sampling, a self-attention convolution was designed to capture richer features, so that the compressed sensing measurements retain more image structure information. For reconstruction, a self-attention mechanism was introduced into the convolutional neural network, and a reconstruction network comprising residual blocks, a bottleneck transformer (BoTNet), and dense blocks was proposed, which strengthens the transfer of image features and dramatically reduces the number of parameters. On the Set5 dataset, at measurement rates of 0.01, 0.04, 0.10, and 0.25, the average peak signal-to-noise ratio (PSNR) of SAMNet is improved by 1.27, 1.23, 0.50, and 0.15 dB, respectively, compared to CSNet+. The running time for reconstructing a 256×256 image is reduced by 0.1473, 0.1789, 0.2310, and 0.2524 s compared to ReconNet. Experimental results show that SAMNet improves the quality of reconstructed images and reduces the reconstruction time.
convolutional neural network / compressed sensing / self-attention mechanism / dense block / image reconstruction
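The core operation behind BoTNet-style blocks like those in the abstract is scaled dot-product self-attention applied over the spatial positions of a feature map. The following is a minimal NumPy sketch of that operation only; the function name, projection matrices, and dimensions are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a flattened feature map.

    x : (n, d) array -- n spatial positions, d channels.
    w_q, w_k, w_v : (d, d) learned projection matrices (random here).
    Returns an (n, d) array where each position is a weighted mix of
    all positions, so long-range image structure can be captured.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / np.sqrt(k.shape[1])     # (n, n) similarity map
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ v

# Illustrative use: an 8x8 feature map with 16 channels, flattened to (64, 16).
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 16))
w_q, w_k, w_v = (rng.standard_normal((16, 16)) * 0.1 for _ in range(3))
y = self_attention(x, w_q, w_k, w_v)  # same shape as x: (64, 16)
```

In a BoTNet-style block this attention replaces a 3×3 convolution inside a residual bottleneck, which is what lets the reconstruction network model dependencies beyond a convolution's local receptive field.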