A physics-constrained autoencoder for full-waveform inversion using axial self-attention
Yunbo Niu, Yingming Qu, Zhenchun Li
Journal of Seismic Exploration, 2026, Vol. 35, Issue 1: 269-281.
Full-waveform inversion (FWI) is highly sensitive to the initial model and to low-frequency content, and it often suffers from cycle skipping and degraded resolution in complex media. We propose a physics-constrained autoencoder-based FWI with axial self-attention (AxPCAE-FWI). In a unified encoder-decoder architecture, a differentiable acoustic wave-equation solver is explicitly embedded, and the data-domain waveform misfit is used as the primary objective, so that training is consistently governed by wave physics and does not rely on paired seismic-velocity labels. The encoder extracts inversion-relevant, low-dimensional features, while the decoder reconstructs physically admissible velocity models. To capture long-range spatiotemporal dependencies in the time-offset plane, axial multi-head self-attention is introduced in the encoder, where global attention is computed separately along the time and receiver axes; two one-dimensional (1D) global attentions approximate a single two-dimensional (2D) global attention, reducing computational complexity relative to full 2D attention while preserving global context. This design improves the representation of complex wavefield phenomena, including multiples, converted waves, and far-offset reflections, thereby alleviating cycle skipping when low-frequency information is limited. Numerical experiments on the Marmousi2 and Society of Exploration Geophysicists salt-dome models demonstrate stable convergence, high structural similarity to the true models, and improved geological plausibility. Compared with conventional physics-informed adaptive extended FWI under the same iteration budget, AxPCAE-FWI yields clearer salt-body boundaries and better imaging of structurally complex regions, with improved robustness to noise.
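The axial factorization described above can be sketched in a few lines of PyTorch: attention is applied first along the time axis (folding receivers into the batch) and then along the receiver axis (folding time into the batch), so that two 1D global attentions over a (time, receiver, channel) gather stand in for one full 2D attention. This is a minimal illustrative module built on the standard `nn.MultiheadAttention`; the class and parameter names are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AxialSelfAttention(nn.Module):
    """Axial multi-head self-attention over a (batch, time, receiver, channel)
    gather: one 1D global attention along time, one along receivers.
    Cost is O(T*R*(T+R)) versus O((T*R)^2) for full 2D attention.
    Illustrative sketch only, not the paper's implementation."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.rcvr_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, r, c = x.shape
        # 1D attention along the time axis: fold receivers into the batch
        xt = x.permute(0, 2, 1, 3).reshape(b * r, t, c)
        at, _ = self.time_attn(xt, xt, xt)
        x = x + at.reshape(b, r, t, c).permute(0, 2, 1, 3)  # residual connection
        # 1D attention along the receiver axis: fold time into the batch
        xr = x.reshape(b * t, r, c)
        ar, _ = self.rcvr_attn(xr, xr, xr)
        return x + ar.reshape(b, t, r, c)
```

Because each token still attends globally along both axes (in two steps), long-range moveout across far offsets and late-time events such as multiples remain within the receptive field at a fraction of the 2D-attention cost.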
Keywords: Full-waveform inversion / Deep learning / Self-attention / Physics-constrained autoencoder / Autoencoder
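The physics-constrained objective in the abstract, where a differentiable wave-equation solver is embedded in the computation graph and the data-domain waveform misfit drives the update without seismic-velocity labels, can be sketched as follows. This is a toy 1D second-order finite-difference acoustic solver under assumed grid and time-step parameters; the paper uses a 2D solver, and its decoder produces the velocity model, whereas here the velocity is optimized directly for brevity. All function names and values are ours.

```python
import torch
import torch.nn.functional as F

def forward_model(v, src, dt=1e-3, dx=10.0):
    """Toy differentiable 1D acoustic solver (illustrative stand-in for the
    paper's 2D wave-equation solver). Returns a near-surface trace."""
    nz = v.shape[0]
    p_prev = torch.zeros(nz)
    p_curr = torch.zeros(nz)
    c2 = (v * dt / dx) ** 2  # squared Courant number; gradients flow through v
    rec = []
    for it in range(src.shape[0]):
        # second-order spatial Laplacian, zero-padded at the boundaries
        lap = F.pad(p_curr[2:] - 2 * p_curr[1:-1] + p_curr[:-2], (1, 1))
        inj = torch.zeros(nz)
        inj[2] = src[it]  # inject the source a few cells below the surface
        p_next = 2 * p_curr - p_prev + c2 * lap + inj
        rec.append(p_next[2])  # co-located toy receiver
        p_prev, p_curr = p_curr, p_next
    return torch.stack(rec)

# "Observed" data from a hypothetical two-layer true model (no grad needed)
v_true = torch.full((100,), 1500.0)
v_true[15:] = 2500.0
src = torch.zeros(300)
src[5] = 1.0
with torch.no_grad():
    d_obs = forward_model(v_true, src)

# Data-domain waveform misfit drives the velocity update: no velocity labels,
# only the embedded solver and the recorded data constrain the inversion.
v = torch.full((100,), 1500.0, requires_grad=True)
opt = torch.optim.Adam([v], lr=10.0)
for _ in range(3):
    opt.zero_grad()
    loss = torch.mean((forward_model(v, src) - d_obs) ** 2)
    loss.backward()  # autograd backpropagates through every time step
    opt.step()
```

The key design point the sketch illustrates is that the solver sits inside the autodiff graph, so the waveform-misfit gradient with respect to the model (or, in AxPCAE-FWI, the decoder weights) is obtained by backpropagation through the time-stepping loop rather than from labeled velocity targets.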