The simulation of seismic waves is a critical component of imaging subsurface structures with real data, and numerical dispersion remains a challenging problem. The finite-difference (FD) approach is popular for solving wave equations because it is simple to implement and, owing to its recursive formulation, requires comparatively little memory and computing time. Among FD schemes, staggered-grid finite-difference (SGFD) methods have gained popularity because of their improved accuracy and stability. In this study, we introduce an optimization approach based on a genetic algorithm (GA) to minimize numerical dispersion. The SGFD coefficients are optimized to reduce numerical errors and improve the accuracy of seismic wave simulations, considering both the spatial and temporal domains. Numerical simulations on homogeneous and heterogeneous velocity models demonstrate that the GA-optimized SGFD schemes achieve substantial reductions in dispersion compared with other methods, even with lower-order approximations. An important advantage of the proposed method is that it maintains high accuracy while using lower-order approximations, which significantly reduces computational cost. For example, optimizing the 12th-order FD coefficients took approximately 20 s on a standard computer with 64 GB of RAM. The findings demonstrate the efficiency of the proposed approach in improving the accuracy and stability of seismic wave simulations, providing a reliable solution for high-resolution seismic imaging.
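A minimal sketch of this kind of coefficient optimization is given below, assuming the objective is to flatten the spatial dispersion error of a staggered-grid first-derivative stencil over a band of normalized wavenumbers. The stencil length, wavenumber band, and GA settings are illustrative assumptions, and the paper's actual objective also accounts for the temporal domain; this is not the authors' implementation.

```python
# Sketch (assumptions noted above): GA-optimized staggered-grid FD coefficients
# that minimize the spatial dispersion error over a wavenumber band.
import numpy as np

rng = np.random.default_rng(0)

M = 6                                          # half-stencil length (12th-order-style stencil)
beta = np.linspace(1e-3, 0.7 * np.pi, 200)     # normalized wavenumbers k*h to control

def taylor_coeffs(M):
    """Conventional Taylor-series SGFD coefficients, used to seed the population."""
    A = np.array([[(2 * m - 1) ** (2 * j + 1) for m in range(1, M + 1)]
                  for j in range(M)], dtype=float)
    b = np.zeros(M)
    b[0] = 1.0
    return np.linalg.solve(A, b)

def dispersion_error(c):
    """Max deviation of the SGFD response 2*sum_m c_m*sin((m-1/2)*k*h) from the exact k*h."""
    m = np.arange(1, M + 1)
    resp = 2.0 * np.sin(np.outer(beta, m - 0.5)) @ c
    return np.max(np.abs(resp - beta))

# Simple GA: truncation selection, blend crossover, Gaussian mutation, elitism.
pop_size, n_gen, mut_scale = 60, 300, 1e-3
seed = taylor_coeffs(M)
pop = seed + mut_scale * rng.standard_normal((pop_size, M))

for _ in range(n_gen):
    fitness = np.array([dispersion_error(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[: pop_size // 2]]
    parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
    alpha = rng.random((pop_size, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    children += mut_scale * rng.standard_normal(children.shape)
    children[0] = elite[0]                      # carry the current best forward unchanged
    pop = children

fitness = np.array([dispersion_error(ind) for ind in pop])
best = pop[np.argmin(fitness)]
print("optimized coefficients:", best)
print("max dispersion error:", dispersion_error(best), "vs Taylor:", dispersion_error(seed))
```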
Seismic exploration faces significant challenges due to the physical parameters and geometric complexity of near-surface layers, making their modeling essential for accurately calculating static corrections. These corrections are crucial for preserving the image of geological structures represented by seismic reflectors. However, obtaining key physical parameters, such as the replacement velocity of the substratum and the velocities and thicknesses of near-surface layers, remains difficult. This study proposes a novel approach that addresses the problem differently: a calculation method that computes static corrections directly, relying solely on the structural analysis of seismic horizons in the near-trace section. Notably, this approach does not require prior knowledge of the weathered-zone model. Its application to both simulated and real reflection seismic data demonstrates its potential and effectiveness. The static corrections derived from this approach significantly improve seismic image quality and eliminate anomalous regional static corrections compared with calibrated refraction static corrections. Furthermore, the method does not require calibration with borehole data, which simplifies the process and represents a significant advantage over traditional methods. In summary, this approach provides an effective solution to the challenges of near-surface layer modeling, delivering substantial quantitative improvements (savings in time and effort, reduced error) and qualitative improvements by enhancing data quality, ensuring consistency with geological realities, and enabling more reliable geological interpretations.
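One way to read the general principle of deriving statics from horizon structure alone is sketched below: separate the picked reflector times on a near-trace section into a smooth structural trend plus short-wavelength deviations, and treat the deviations as per-trace static corrections. The synthetic horizon, trend model, and filter choices are purely illustrative assumptions and should not be taken as the authors' exact algorithm.

```python
# Sketch (assumption): statics as short-wavelength deviations of a picked horizon
# from a smooth structural trend on the near-trace section.
import numpy as np

rng = np.random.default_rng(3)
n_traces = 400
x = np.arange(n_traces)

# Synthetic picked horizon times (ms): smooth structure + blocky near-surface delays.
structure = 800 + 60 * np.sin(2 * np.pi * x / 400)
statics_true = np.repeat(rng.normal(0, 8, n_traces // 20), 20)
t_picked = structure + statics_true

# A low-order polynomial stands in for the geologically plausible structural trend;
# the per-trace residual is taken as the static correction.
trend = np.polynomial.Polynomial.fit(x, t_picked, deg=6)(x)
statics_est = t_picked - trend

print("rms error of estimated statics (ms):",
      np.sqrt(np.mean((statics_est - statics_true) ** 2)))
```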
Because natural seismic signals and artificial blasting signals exhibit highly similar time-frequency features, recognition accuracy is often insufficient. We therefore propose a self-organizing map (SOM) neural network classification model based on feature extraction with complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and multiscale distribution entropy (MDE), with the network improved by the Ant Lion Optimization (ALO) algorithm. The original seismic and blasting signals were decomposed at multiple scales using CEEMDAN, and the distribution entropy of each resulting intrinsic mode function was calculated to construct multidimensional feature inputs containing time-frequency complexity information. The ALO algorithm optimized the key parameters of the SOM neural network (the competing-layer dimensions and the number of training iterations), with the root mean squared error serving as the fitness function. The optimal solution obtained by ALO replaced the hyperparameter values of the original model, and multiple prediction rounds were performed on the seismic test set to address the unstable classification performance caused by random initialization in the traditional SOM network. The results show that the recognition performance of CEEMDAN-MDE combined with the ALO-SOM model was significantly better than that of machine learning models such as linear discriminant analysis (LDA), decision tree, support vector machine, probabilistic LDA, and AdaBoost. Its recognition accuracy, recall, and F1-score were 99.3373%, 99.1479%, and 99.4557%, respectively, suggesting that this method can serve as a reliable approach for accurately differentiating natural earthquakes from artificial blasting events, with important application value for seismic monitoring and the exclusion of blasting events.
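A minimal sketch of the distribution-entropy feature stage is shown below. It assumes the intrinsic mode functions have already been obtained from a CEEMDAN implementation (for example, the PyEMD package); band-limited surrogates stand in for them here so the snippet runs on its own, and the embedding dimension and bin count are illustrative choices.

```python
# Sketch (assumption): distribution entropy of each IMF as a complexity feature.
import numpy as np

def distribution_entropy(x, m=2, n_bins=64):
    """Normalized distribution entropy of a 1-D signal (embedding dim m, n_bins histogram)."""
    x = np.asarray(x, dtype=float)
    emb = np.lib.stride_tricks.sliding_window_view(x, m)        # embedded vectors, shape (N, m)
    N = emb.shape[0]
    # Chebyshev distances between all distinct pairs of embedded vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    d = d[np.triu_indices(N, k=1)]
    p, _ = np.histogram(d, bins=n_bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(n_bins)            # normalized to [0, 1]

# Stand-in "IMFs": in the paper these come from CEEMDAN of a seismic or blasting record.
rng = np.random.default_rng(1)
t = np.linspace(0, 2, 2000)
imfs = [np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
        for f in (40, 20, 10, 5)]

# One feature vector per record: the distribution entropy of each IMF.
features = np.array([distribution_entropy(imf) for imf in imfs])
print("MDE feature vector:", features)
```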
Effectively recovering signals buried in noise remains a challenging topic in seismic data denoising, and many conventional methods fail to accurately capture the characteristics of seismic signals. To address this issue, this study proposes an effective method that combines variational mode decomposition (VMD) with a denoising convolutional neural network (DnCNN). The method first applies VMD to decompose the original noisy signal into multiple intrinsic mode functions (IMFs) with band-pass characteristics, thereby decoupling the different frequency components and separating the noise. Selected IMFs are then combined into a multi-channel input and fed into the DnCNN for end-to-end modeling and denoising reconstruction. By decomposing the noisy signal into IMFs corresponding to specific frequency bands and learning them through the DnCNN, the network can better extract features within each band. Serving as a front-end filter, the VMD module enhances the network's ability to represent effective frequency components, suppresses high-frequency random noise, and improves the resolution of weak signals. Experimental results demonstrate that the proposed method effectively captures signal characteristics and recovers signals from both real and synthetic seismic data. In conclusion, the proposed VMD-DnCNN method provides a robust and efficient solution for seismic signal denoising.
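The sketch below illustrates the multi-channel idea with a small 1-D DnCNN-style residual network in PyTorch that takes K decomposed modes as input channels and predicts the noise on the reconstructed trace. The IMFs would come from a VMD implementation; the depth, channel width, and use of 1-D convolutions are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch (assumptions noted above): a 1-D DnCNN variant with VMD modes as input channels.
import torch
import torch.nn as nn

class DnCNN1D(nn.Module):
    def __init__(self, in_channels=4, depth=10, width=64):
        super().__init__()
        layers = [nn.Conv1d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv1d(width, width, 3, padding=1),
                       nn.BatchNorm1d(width),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv1d(width, 1, 3, padding=1)]   # estimated noise on the summed trace
        self.net = nn.Sequential(*layers)

    def forward(self, imfs):
        noisy = imfs.sum(dim=1, keepdim=True)   # reconstruct the noisy trace from its modes
        return noisy - self.net(imfs)           # residual learning: subtract estimated noise

# Toy forward pass: batch of 8 traces, K = 4 VMD modes, 1024 samples each.
model = DnCNN1D(in_channels=4)
imfs = torch.randn(8, 4, 1024)
denoised = model(imfs)
print(denoised.shape)   # torch.Size([8, 1, 1024])
```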
Microseismic event location plays a pivotal role in industrial applications, such as coal mining and hydraulic fracturing, by revealing subsurface fracture dynamics through the spatiotemporal analysis of seismic events. As a cornerstone of microseismic monitoring, accurate event localization enables critical insights into underground structural integrity. Traditional arrival-time-based methods employ optimization algorithms to minimize residuals between observed and theoretical arrival times. While this classical approach has proven effective, its accuracy is often compromised by two key limitations: suboptimal initial iteration values and inaccuracies in velocity parameter estimation. To address these challenges, we propose an innovative localization method integrating a grid-searching strategy with a Newton-Raphson-based optimizer. Our approach begins by generating initial iterative vectors—comprising event coordinates and velocity parameters—through a systematic grid-searching technique. Subsequently, the Newton-Raphson optimizer refines these estimates within a four-dimensional search space to achieve high-precision inversion results. The efficacy of the proposed method was rigorously evaluated using both synthetic and field datasets, with comparative analyses conducted against four established localization techniques. Experimental results demonstrate that our method significantly enhances localization accuracy and robustness, reliably inverting both event locations and velocity parameters. These findings provide a valuable technical reference for advancing microseismic monitoring systems, offering improved precision in industrial applications.
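A minimal sketch of the two-stage idea is given below: a coarse grid search supplies the initial (x, y, z, v) vector, which is then refined by a Gauss-Newton update on the arrival-time residuals (a Newton-Raphson-type step applied to the least-squares objective). The monitoring geometry, grid ranges, and closed-form handling of origin time are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumptions noted above): grid-search initialization + Newton-type refinement
# of microseismic event coordinates and velocity from first-arrival times.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monitoring geometry and "observed" first arrivals.
stations = rng.uniform([-500, -500, 0], [500, 500, 50], size=(12, 3))
true_src, true_v, true_t0 = np.array([120.0, -80.0, 300.0]), 3200.0, 0.05
t_obs = true_t0 + np.linalg.norm(stations - true_src, axis=1) / true_v
t_obs += 1e-3 * rng.standard_normal(t_obs.size)          # picking noise

def residuals(x, y, z, v):
    d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
    t0 = np.mean(t_obs - d / v)                           # origin time in closed form
    return t_obs - (t0 + d / v)

# 1) Coarse grid search over the 4-D (x, y, z, v) space for the initial vector.
grid = [(x, y, z, v)
        for x in np.linspace(-400, 400, 9)
        for y in np.linspace(-400, 400, 9)
        for z in np.linspace(100, 500, 5)
        for v in np.linspace(2500, 4000, 7)]
m = np.array(min(grid, key=lambda g: np.sum(residuals(*g) ** 2)))

# 2) Newton-type refinement: numerical Jacobian + Gauss-Newton updates.
def jacobian(m, eps=1e-3):
    J = np.empty((len(t_obs), 4))
    for k in range(4):
        dm = np.zeros(4); dm[k] = eps
        J[:, k] = (residuals(*(m + dm)) - residuals(*(m - dm))) / (2 * eps)
    return J

for _ in range(20):
    r, J = residuals(*m), jacobian(m)
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    m = m + delta

print("estimated (x, y, z, v):", m)
print("true      (x, y, z, v):", np.append(true_src, true_v))
```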