DuRIN: A Deep-unfolded Sparse Seismic Reflectivity Inversion Network
Abstract
We consider the reflection seismology problem of recovering the locations of interfaces and the amplitudes of reflection coefficients from seismic data, which are vital for estimating the subsurface structure. The reflectivity inversion problem is typically solved using greedy algorithms and iterative techniques. The sparse Bayesian learning framework and, more recently, deep learning techniques have shown the potential of data-driven approaches for solving the problem. In this paper, we propose a weighted minimax-concave penalty-regularized reflectivity inversion formulation and solve it through a model-based neural network. The network is referred to as the deep-unfolded reflectivity inversion network (DuRIN). We demonstrate the efficacy of the proposed approach over the benchmark techniques by testing on synthetic 1-D seismic traces and 2-D wedge models, and by validation on the simulated 2-D Marmousi2 model and real data from the Penobscot 3D survey off the coast of Nova Scotia, Canada.
Index Terms:
Geophysics, inverse problems, seismology, seismic reflectivity inversion, geophysical signal processing, deep learning, neural networks, algorithm unrolling, nonconvex optimization, sparse recovery.
I Introduction
Reflectivity inversion is an important deconvolution problem in reflection seismology, helpful in characterizing the layered subsurface structure. The subsurface is modeled as having sparse reflectivity localized at the interfaces between two layers, assuming largely horizontal and parallel layers, each with constant impedance [1, 2], or in other words, a piecewise-constant impedance structure. Figure 1 shows a three-layer model of the subsurface, with a wet sandstone layer between two shale layers [3].


The reflection coefficient $r_i$ (Fig. 1(c)) at the interface between two adjoining layers $i$ and $i+1$ (Fig. 1(a)) is related to the subsurface geology [1] through the relation:

$$r_i = \frac{\rho_{i+1} v_{i+1} - \rho_i v_i}{\rho_{i+1} v_{i+1} + \rho_i v_i}, \qquad (1)$$

where $\rho_i$ and $v_i$ are the density and P-wave velocity, respectively, of the $i^{\text{th}}$ layer. The product of density ($\rho$) and P-wave velocity ($v$) is known as the acoustic impedance (Fig. 1(b)).
Such a layered subsurface is often recovered through the widely used convolutional model (2), wherein the observed seismic data is modeled as the convolution between the source pulse $\boldsymbol{h}$ (Fig. 1(d)) and the Earth response or reflectivity $\boldsymbol{x}$ (Fig. 1(c)) [4, 5]. The observation at the surface is a noisy seismic trace $\boldsymbol{y}$ (Fig. 1(e)) given by:

$$\boldsymbol{y} = \boldsymbol{h} * \boldsymbol{x} + \boldsymbol{n}, \qquad (2)$$

where $\boldsymbol{n}$ is the measurement noise, $*$ denotes convolution, and $\boldsymbol{h}$ is assumed to be a Ricker wavelet. When the wavelet is known, the linear inverse problem (2) can be solved within the sparsity framework by modeling the reflectivity as sparse, since the significant interfaces are the ones of interest in the inversion. Further, the location of the interfaces (in other words, support recovery) is prioritized over amplitude recovery [5].
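To make the forward model concrete, the following minimal Python sketch synthesizes a noisy trace according to (2). The helper names (`ricker`, `synthetic_trace`), the dominant frequency, the sampling interval, and the SNR are illustrative choices, not values from the paper.

```python
import numpy as np

def ricker(f0, dt, length=0.128):
    """Zero-phase Ricker wavelet with dominant frequency f0 (Hz), sampled at dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(x, f0=30.0, dt=0.002, snr_db=20.0, rng=None):
    """Convolve a sparse reflectivity x with a Ricker wavelet and add noise, as in (2)."""
    rng = np.random.default_rng() if rng is None else rng
    h = ricker(f0, dt)
    y_clean = np.convolve(x, h, mode="same")        # y = h * x
    noise = rng.standard_normal(y_clean.size)
    # scale the noise to achieve the requested measurement SNR (in dB)
    noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return y_clean + noise                           # y = h * x + n
```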
I-A Prior Art
We broadly classify the prior work related to reflectivity inversion into two categories, namely, optimization-based techniques and data-driven techniques.
I-A1 Optimization Techniques
The reflectivity $\boldsymbol{x}$ in (2) can be recovered by minimizing the classical least-squares (LS) objective function:

$$\min_{\boldsymbol{x}} \; \frac{1}{2}\left\| \boldsymbol{y} - \mathbf{H}\boldsymbol{x} \right\|_2^2, \qquad (3)$$

where $\mathbf{H}$ is the Toeplitz matrix corresponding to convolution with the wavelet $\boldsymbol{h}$. However, the finite data length, the measurement noise, and the loss of low- and high-frequency information due to convolution of the reflectivity with a bandlimited wavelet [1, 6, 7] lead to non-unique solutions of the above problem [8]. This ill-posed inverse problem is tackled by employing a sparsity prior on the solution [9]. The $\ell_1$-norm regularization is widely preferred by virtue of its convexity [1, 4, 10, 11]. The $\ell_1$-regularized reflectivity inversion problem is posed as follows:

$$\min_{\boldsymbol{x}} \; \frac{1}{2}\left\| \boldsymbol{y} - \mathbf{H}\boldsymbol{x} \right\|_2^2 + \lambda \left\| \boldsymbol{x} \right\|_1, \qquad (4)$$

where $\lambda > 0$ is the regularization parameter.
Zhang and Castagna [12] adopted basis-pursuit inversion (BPI) to solve the problem in (4), based on the basis-pursuit de-noising algorithm proposed by [13]. Algorithms such as the Iterative Shrinkage-Thresholding Algorithm (ISTA) [14] and its faster variant, FISTA [15], have been proposed to solve the -norm regularized deconvolution problem. Relevant to the problem under consideration, [16] adopted FISTA for reflectivity inversion. Further studies by [17] and [18] adopted FISTA along with debiasing steps of LS inversion and “adding back the residual”, respectively, to improve amplitude recovery after support estimation using FISTA.
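For illustration, a minimal ISTA sketch for the $\ell_1$-regularized problem (4) is given below; FISTA adds a momentum step on top of the same update. The function name, iteration count, and stopping rule are placeholders, not the implementations used in the cited works.

```python
import numpy as np

def ista_l1(y, H, lam, n_iter=200):
    """ISTA for min_x 0.5*||y - Hx||^2 + lam*||x||_1, i.e., problem (4)."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient (spectral norm squared)
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient descent step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return x
```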
The sparsity of seismic data is difficult to estimate in practice, and the adequacy of the $\ell_1$ norm for the reflectivity inversion problem has not been established [8, 19]. Also, $\ell_1$-norm regularization results in a biased estimate of $\boldsymbol{x}$ [20, 21, 22]. In their application of FISTA to the seismic reflectivity inversion problem, [17] and [18] observed an attenuation of reflection coefficient magnitudes. They adopted post-processing debiasing steps [23] to tackle the bias introduced by the $\ell_1$-norm regularization. Nonconvex regularizers such as the minimax-concave penalty (MCP) [24] have been shown to overcome these shortcomings of $\ell_1$ regularization in inverse problems. The advantages of adopting nonconvex regularizers over $\ell_1$ have been demonstrated in sparse recovery problems [22, 24, 25]. Particularly pertinent to this discussion is the data-driven $\ell_p$-norm regularization for seismic reflectivity inversion proposed by [19], wherein the optimal value of $p$ was chosen based on the input data.
I-A2 Data-driven Methods
Recent works have employed data-driven approaches for solving inverse problems in seismology and geophysics [26, 8, 3, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]. Further, learning-based approaches have also been explored for seismic reflectivity inversion (or reflectivity model building) [3, 28, 34, 43]. Bergen et al. [27] underscored the importance of leveraging the data and model-driven aspects of inverse problems in solid Earth geoscience while incorporating machine learning (ML) into the loop. In a comprehensive review of deep learning approaches for inverse problems in seismology and geophysics, Adler et al. [28] assessed learning-based approaches that enhanced the performance of full waveform inversion [29, 30, 31, 32] and end-to-end seismic inversion [33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. They also reviewed recent solutions based on physics-guided architectures [43, 44] and deep generative modeling [45, 46, 47, 48] that are still in a nascent stage as it pertains to seismic inversion.
The application of sparse Bayesian learning (SBL) to reflectivity inversion has also been explored [26, 8], where the sparse reflection coefficients are recovered by maximizing the marginal likelihood. This is done through a sequential algorithm-based approach (SBL-SA) to update the sparsity-controlling hyperparameters [26], or through the expectation-maximization algorithm (SBL-EM) [8, 49]. The application of neural networks to inversion problems in geophysics [28], particularly reflectivity inversion, is a recent development [3, 34]. Kim and Nakata [34] used an elementary feedforward neural network to recover the sparse reflectivity. They obtained superior support recovery but inferior amplitude recovery using the neural network compared with a least-squares approach. Further, Russell [3] discussed the suitability of machine learning approaches such as feedforward neural networks [34]. He observed that data-driven techniques outperform conventional deconvolution approaches when knowledge about the underlying geology is limited, while also providing a computational advantage. However, in applications pertaining to geoscience, and specifically geophysics, model interpretability is critical, as one aims to gain physical insights into the system under consideration [27]. Insights into an inverse problem, in addition to savings in computational time, can be gained through deep neural network architectures that are informed by the sparse linear inverse problem itself [27].
To that end, Gregor and LeCun [50] proposed a new class of data-driven frameworks based on unfolding iterative algorithms into neural network architectures [51]. They proposed learned ISTA (LISTA), based on unrolling the update steps of ISTA [14] into the layers of a feedforward neural network. Subsequent studies have demonstrated the efficacy of this class of model-based deep learning architectures in compressed sensing and sparse-recovery applications [52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]. Monga et al. [51] provided a review of algorithm unrolling in the context of imaging, vision and recognition, speech processing, and other signal and image processing problems. Deep unfolding combines the advantages of both data-driven and iterative techniques.
I-B Motivation and Contribution
The limitations of $\ell_1$ regularization [20, 21, 22], in addition to the challenge of estimating the sparsity of seismic reflection data [8, 19], can be overcome through nonconvex regularization [19]. However, the corresponding nonconvex cost suffers from local minima, and optimization becomes quite challenging. The hyperparameters of most iterative techniques are set heuristically, which is likely to yield suboptimal solutions. Machine learning approaches outperform conventional techniques in reflectivity inversion [3], especially in support recovery [34], which is given higher priority [5]. Deep-unrolled architectures [51], which belong to the class of model-based neural networks, are not entirely dissociated from the underlying physics of the problem, unlike elementary data-driven approaches such as feedforward neural networks, for which such dissociation is a potential risk [3, 34, 27]. These factors motivate us to employ a data-driven nonconvex regularization strategy and to solve the reflectivity inversion problem through deep-unrolled architectures.
The contributions of this paper are summarized as follows.
1. We construct an optimization cost for sparse seismic reflectivity inversion based on the weighted counterpart of the minimax-concave penalty (MCP) [24], where each component of the sparse reflectivity is associated with a different weight.
2. The resulting optimization algorithm, the iterative firm-thresholding algorithm (IFTA) [58], is unfolded into the deep-unfolded reflectivity inversion network (DuRIN). To the best of our knowledge, model-based architectures have not been explored for solving the seismic reflectivity inversion problem. In fact, deep unrolling has not been employed for solving seismic inverse problems [28].
3. The efficacy of the proposed formulation with respect to the state of the art is demonstrated over synthetic 1-D and 2-D data and on simulated as well as real datasets.
II Weighted-MCP-Regularized Reflectivity Inversion
In this study, we use the weighted counterpart of the nonconvex minimax-concave penalty (MCP) [24], defined as:

$$g_{\mathrm{WMC}}(\boldsymbol{x}; \boldsymbol{\lambda}, \boldsymbol{\gamma}) = \sum_{i=1}^{n} g_{\mathrm{MC}}(x_i; \lambda_i, \gamma_i), \qquad (5)$$

where $\boldsymbol{x} \in \mathbb{R}^n$, $\boldsymbol{\lambda} \in \mathbb{R}^n_{> 0}$, $\boldsymbol{\gamma} \in \mathbb{R}^n_{> 1}$, and

$$g_{\mathrm{MC}}(x_i; \lambda_i, \gamma_i) = \begin{cases} \lambda_i |x_i| - \dfrac{x_i^2}{2\gamma_i}, & |x_i| \le \gamma_i \lambda_i, \\[4pt] \dfrac{\gamma_i \lambda_i^2}{2}, & |x_i| > \gamma_i \lambda_i. \end{cases} \qquad (6)$$

The trainable parameters of $g_{\mathrm{WMC}}$, namely $\boldsymbol{\lambda}$ and $\boldsymbol{\gamma}$, are learned in a data-driven setting. Here, $\mathbb{R}^n_{> a}$ denotes the set of vectors in $\mathbb{R}^n$ with entries greater than $a$.
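A small Python sketch evaluating the weighted MCP of (5)-(6), assuming the component-wise parametrization given above; the function name is hypothetical.

```python
import numpy as np

def weighted_mcp(x, lam, gam):
    """Weighted minimax-concave penalty (5)-(6), evaluated component-wise and summed.
    lam_i > 0 and gam_i > 1 are the per-component parameters (learned in DuRIN)."""
    x, lam, gam = np.abs(np.asarray(x)), np.asarray(lam), np.asarray(gam)
    inner = lam * x - x ** 2 / (2.0 * gam)      # branch for |x_i| <= gam_i * lam_i
    outer = gam * lam ** 2 / 2.0                # constant branch for |x_i| > gam_i * lam_i
    return float(np.sum(np.where(x <= gam * lam, inner, outer)))
```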




II-A Problem Formulation
Formulating the optimization problem using the weighted MCP regularizer defined above leads to the objective function:

$$\min_{\boldsymbol{x}} \; F(\boldsymbol{x}) = \underbrace{\frac{1}{2}\left\| \boldsymbol{y} - \mathbf{H}\boldsymbol{x} \right\|_2^2}_{f(\boldsymbol{x})} + \underbrace{g_{\mathrm{WMC}}(\boldsymbol{x}; \boldsymbol{\lambda}, \boldsymbol{\gamma})}_{g(\boldsymbol{x})}. \qquad (7)$$

The objective function $F$, the data-fidelity term $f$, and the regularizer $g$ satisfy the following properties, which are essential for minimizing $F$:

P1. $f$ is proper, closed, and $L$-smooth, i.e., $\|\nabla f(\boldsymbol{x}_1) - \nabla f(\boldsymbol{x}_2)\|_2 \le L \|\boldsymbol{x}_1 - \boldsymbol{x}_2\|_2$ for all $\boldsymbol{x}_1, \boldsymbol{x}_2$.

P2. $g$ is lower semi-continuous.

P3. $F$ is bounded from below, i.e., $\inf_{\boldsymbol{x}} F(\boldsymbol{x}) > -\infty$.
Next, we solve the problem stated in (7) through the majorization-minimization (MM) approach [66]; the resulting algorithm is the iterative firm-thresholding algorithm (IFTA) [58]. Unfolding the iterations of IFTA results in a learnable network called the deep-unfolded reflectivity inversion network (DuRIN), which has an architecture similar to FirmNet [58].
II-B IFTA: Iterative firm-thresholding algorithm
Due to P1, there exists $L > 0$ such that $f$ is upper-bounded locally by a quadratic expansion about $\boldsymbol{x}^{(k)}$ as:

$$f(\boldsymbol{x}) \le Q\big(\boldsymbol{x}, \boldsymbol{x}^{(k)}\big) = f\big(\boldsymbol{x}^{(k)}\big) + \nabla f\big(\boldsymbol{x}^{(k)}\big)^{\top}\big(\boldsymbol{x} - \boldsymbol{x}^{(k)}\big) + \frac{L}{2}\left\| \boldsymbol{x} - \boldsymbol{x}^{(k)} \right\|_2^2, \qquad (8)$$

where $\nabla f\big(\boldsymbol{x}^{(k)}\big) = \mathbf{H}^{\top}\big(\mathbf{H}\boldsymbol{x}^{(k)} - \boldsymbol{y}\big)$. The majorizer of the objective function $F$ at $\boldsymbol{x}^{(k)}$ is given by:

$$G\big(\boldsymbol{x}, \boldsymbol{x}^{(k)}\big) = Q\big(\boldsymbol{x}, \boldsymbol{x}^{(k)}\big) + g_{\mathrm{WMC}}(\boldsymbol{x}; \boldsymbol{\lambda}, \boldsymbol{\gamma}), \qquad (9)$$

such that $G\big(\boldsymbol{x}, \boldsymbol{x}^{(k)}\big) \ge F(\boldsymbol{x})$, with equality at $\boldsymbol{x} = \boldsymbol{x}^{(k)}$. The update equation for minimizing $G$ to obtain $\boldsymbol{x}^{(k+1)}$ is given by

$$\boldsymbol{x}^{(k+1)} = \arg\min_{\boldsymbol{x}} \; G\big(\boldsymbol{x}, \boldsymbol{x}^{(k)}\big). \qquad (10)$$

The proximal operator of the weighted MCP is the firm-thresholding operator [67], given component-wise by

$$\mathcal{F}_{\lambda_i, \gamma_i}(t) = \begin{cases} 0, & |t| \le \lambda_i, \\ \operatorname{sgn}(t)\, \dfrac{\gamma_i (|t| - \lambda_i)}{\gamma_i - 1}, & \lambda_i < |t| \le \gamma_i \lambda_i, \\ t, & |t| > \gamma_i \lambda_i. \end{cases} \qquad (11)$$

The update step at the $k^{\text{th}}$ iteration is given by

$$\boldsymbol{x}^{(k+1)} = \mathcal{F}_{\boldsymbol{\lambda}, \boldsymbol{\gamma}}\!\left( \boldsymbol{x}^{(k)} + \frac{1}{L}\, \tilde{\boldsymbol{h}} * \big( \boldsymbol{y} - \boldsymbol{h} * \boldsymbol{x}^{(k)} \big) \right), \qquad (12)$$

where $\tilde{\boldsymbol{h}}$ is the flipped version of $\boldsymbol{h}$, and the thresholds absorb the step-size scaling by $1/L$. Algorithm 1 lists the steps involved in solving the optimization problem (7). The update in (12) involves convolutions followed by a nonlinear activation (the firm threshold in this case), and can therefore be represented as a layer in a neural network.
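The following sketch implements the firm-thresholding operator (11) and the IFTA update (12) in Python, assuming the convolutional forward model of (2); the Lipschitz-constant bound, iteration count, and function names are illustrative choices.

```python
import numpy as np

def firm_threshold(t, lam, gam):
    """Firm-thresholding operator (11); lam and gam broadcast component-wise, gam > 1."""
    mag = np.abs(t)
    mid = np.sign(t) * gam * (mag - lam) / (gam - 1.0)
    return np.where(mag <= lam, 0.0, np.where(mag <= gam * lam, mid, t))

def ifta(y, h, lam, gam, n_iter=100):
    """IFTA iterations (12): gradient step on the data fit, followed by firm thresholding."""
    h_flip = h[::-1]                               # time-reversed (flipped) wavelet
    L = np.sum(np.abs(h)) ** 2                     # loose but valid upper bound on the Lipschitz constant
    x = np.zeros_like(y)
    for _ in range(n_iter):
        residual = y - np.convolve(x, h, mode="same")
        x = firm_threshold(x + np.convolve(residual, h_flip, mode="same") / L, lam, gam)
    return x
```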
II-C DuRIN: Deep-unfolded Reflectivity Inversion Network
As mentioned in the previous section, the update in IFTA (12) can be interpreted as a layer in a neural network; hence, we unroll the iterations of the algorithm into the layers of a neural network to solve the problem given in (7). The proposed deep-unrolled architecture for sparse seismic reflectivity inversion is called the Deep-unfolded Reflectivity Inversion Network (DuRIN), which resembles FirmNet [58]. The input-output relation for each layer in DuRIN is given by

$$\boldsymbol{x}^{(k+1)} = \mathcal{F}_{\boldsymbol{\lambda}^{(k)}, \boldsymbol{\gamma}^{(k)}}\!\left( \mathbf{W}^{(k)} \boldsymbol{y} + \mathbf{S}^{(k)} \boldsymbol{x}^{(k)} \right), \qquad (13)$$

where $\mathbf{W}^{(k)}$ and $\mathbf{S}^{(k)}$ [50] are initialized as Toeplitz matrices, and are dense and unstructured in the learning stage. The parameters that need to be learned are the matrices $\mathbf{W}^{(k)}$ and $\mathbf{S}^{(k)}$ and the thresholds $\big(\boldsymbol{\lambda}^{(k)}, \boldsymbol{\gamma}^{(k)}\big)$, subject to $\boldsymbol{\lambda}^{(k)} \succ \mathbf{0}$ and $\boldsymbol{\gamma}^{(k)} \succ \mathbf{1}$, where $\mathbf{0}$ is the null vector and $\mathbf{1}$ is the vector of all ones. For training, we employ the smooth $\ell_1$ loss as the training cost, computed between the true reflectivity $\boldsymbol{x}$ and the prediction $\hat{\boldsymbol{x}}$. In our experimental setup, the smoothing parameter of this loss is fixed, and it is not a trainable parameter.
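A minimal PyTorch sketch of the unfolded network (13) is shown below. The class and parameter names, the LISTA-style initialization of the learned matrices from the Toeplitz operator $\mathbf{H}$, and the use of separate weights per layer are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

def firm(t, lam, gam):
    """Firm-thresholding activation (11), applied component-wise."""
    mag = t.abs()
    mid = torch.sign(t) * gam * (mag - lam) / (gam - 1.0)
    return torch.where(mag <= lam, torch.zeros_like(t),
                       torch.where(mag <= gam * lam, mid, t))

class DuRIN(nn.Module):
    """Unfolded IFTA layers with the structure x_{k+1} = firm(W y + S x_k), as in (13)."""
    def __init__(self, H, n_layers=15):
        super().__init__()
        m = H.shape[1]
        L = torch.linalg.norm(H, 2) ** 2                      # spectral norm squared
        self.W = nn.ParameterList([nn.Parameter(H.t() / L) for _ in range(n_layers)])
        self.S = nn.ParameterList([nn.Parameter(torch.eye(m) - H.t() @ H / L)
                                   for _ in range(n_layers)])
        self.lam = nn.Parameter(1e-2 * torch.ones(n_layers, m))   # thresholds, kept > 0
        self.gam = nn.Parameter(2.0 * torch.ones(n_layers, m))    # concavity, kept > 1

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.S[0].shape[0], device=y.device)
        for k, (W, S) in enumerate(zip(self.W, self.S)):
            lam = self.lam[k].clamp(min=1e-6)          # enforce lambda > 0
            gam = self.gam[k].clamp(min=1.0 + 1e-3)    # enforce gamma > 1
            x = firm(y @ W.t() + x @ S.t(), lam, gam)
        return x
```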
To improve the amplitude recovery of DuRIN during inference, we re-estimate the amplitudes of the estimated sparse vector $\hat{\boldsymbol{x}}$ over the support given by DuRIN as $\hat{\boldsymbol{x}}_{\mathcal{S}} = \mathbf{H}_{\mathcal{S}}^{\dagger} \boldsymbol{y}$, where $\mathcal{S}$ is the support, i.e., the non-zero locations, of $\hat{\boldsymbol{x}}$, and $\mathbf{H}_{\mathcal{S}}^{\dagger}$ is the pseudo-inverse of the Toeplitz matrix of the kernel restricted to the columns indexed by $\mathcal{S}$. The steps of DuRIN for $K$ layers are listed in Algorithm 2. Figure 4 illustrates a 3-layer architecture for DuRIN.
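The support-restricted least-squares re-estimation described above can be sketched as follows; the threshold used to extract the support from the network output is a hypothetical choice.

```python
import numpy as np

def debias_amplitudes(x_hat, y, H, threshold=1e-3):
    """Re-estimate amplitudes on the support of the network output x_hat:
    least-squares fit of y using only the columns of H indexed by that support."""
    support = np.flatnonzero(np.abs(x_hat) > threshold)
    x_refined = np.zeros_like(x_hat)
    if support.size:
        x_refined[support] = np.linalg.pinv(H[:, support]) @ y
    return x_refined
```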


We consider two variants of DuRIN, depending on the value of $\boldsymbol{\gamma}$. The generalized variant of DuRIN based on IFTA [58], where the $\gamma_i$ differ across components and layers, is referred to as DuRIN-1 and corresponds to FirmNet [58]. As $\gamma \to \infty$, the firm-thresholding function approaches the soft-thresholding function [22]. We call this variant DuRIN-2, which resembles LISTA [50].
III Experimental Results
We present experimental results for the proposed networks, namely, DuRIN-1 and DuRIN-2, on synthetic 1-D traces and 2-D wedge models [68], the simulated 2-D Marmousi2 model [69], and real 3-D field data from the Penobscot 3D survey off the coast of Nova Scotia, Canada [70]. We evaluate the performance of DuRIN-1 and DuRIN-2 in comparison with the benchmark techniques, namely, basis-pursuit inversion (BPI) [13, 12], the fast iterative shrinkage-thresholding algorithm (FISTA) [15, 16], and expectation-maximization-based sparse Bayesian learning (SBL-EM) [49, 8]. To quantify the performance, we employ the objective metrics listed in the following section.
III-A Objective Metrics
We evaluate the results using statistical parameters that measure the amplitude and support recovery between the ground-truth sparse vector $\boldsymbol{x}$ and the predicted sparse vector $\hat{\boldsymbol{x}}$.

1. Correlation Coefficient (CC) [71], computed between $\boldsymbol{x}$ and $\hat{\boldsymbol{x}}$.

2. Relative Reconstruction Error (RRE) and Signal-to-Reconstruction Error Ratio (SRER), defined as follows:

$$\mathrm{RRE} = \frac{\left\| \hat{\boldsymbol{x}} - \boldsymbol{x} \right\|_2^2}{\left\| \boldsymbol{x} \right\|_2^2}, \qquad \mathrm{SRER} = 10 \log_{10} \frac{\left\| \boldsymbol{x} \right\|_2^2}{\left\| \hat{\boldsymbol{x}} - \boldsymbol{x} \right\|_2^2}.$$

3. Probability of Error in Support (PES):

$$\mathrm{PES} = \frac{\max\big(|\mathcal{S}|, |\hat{\mathcal{S}}|\big) - \big|\mathcal{S} \cap \hat{\mathcal{S}}\big|}{\max\big(|\mathcal{S}|, |\hat{\mathcal{S}}|\big)},$$

where $|\cdot|$ denotes the cardinality of the argument, and $\mathcal{S}$ and $\hat{\mathcal{S}}$ denote the supports of $\boldsymbol{x}$ and $\hat{\boldsymbol{x}}$, respectively.
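A sketch of how these metrics can be computed, consistent with the definitions above; the tolerance used to extract the supports is an illustrative choice.

```python
import numpy as np

def metrics(x_true, x_pred, tol=1e-8):
    """CC, RRE, SRER, and PES between the ground truth x_true and the prediction x_pred."""
    cc = np.corrcoef(x_true, x_pred)[0, 1]
    err = np.sum((x_true - x_pred) ** 2)
    rre = err / np.sum(x_true ** 2)
    srer = 10.0 * np.log10(np.sum(x_true ** 2) / err)
    s_true = set(np.flatnonzero(np.abs(x_true) > tol))
    s_pred = set(np.flatnonzero(np.abs(x_pred) > tol))
    big = max(len(s_true), len(s_pred))
    pes = (big - len(s_true & s_pred)) / big if big else 0.0
    return cc, rre, srer, pes
```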
III-B Training Phase
The generation or acquisition of training data that appropriately represents observed seismic data is crucial for applying a data-driven approach to the reflectivity inversion problem [34]. We generate synthetic training data as seismic traces obtained by the convolution between sparse 1-D reflectivity profiles and a Ricker wavelet. The reflectivity profiles are shorter than the traces and are zero-padded before convolution so that their length equals that of the seismic traces [3]. The amplitudes of the reflection coefficients are drawn from a fixed range, and the sparsity factor, i.e., the ratio of the number of non-zero elements to the total number of elements in a trace, is fixed. The locations of the reflection coefficients are picked uniformly at random, with no constraint on the minimum spacing between two reflectors/spikes; in other words, two reflectors may be separated by as little as one sampling interval. The reflectivity values, chosen randomly from the defined amplitude range, are assigned to the selected locations. The sampling interval, the increment in the amplitudes of the reflection coefficients, and the wavelet frequency vary based on the dataset.
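A sketch of the training-pair generation described above, reusing the `synthetic_trace` helper sketched in Section I; the trace length, sparsity factor, and amplitude range are placeholders, since the exact values depend on the dataset.

```python
import numpy as np

def random_reflectivity(n_samples=300, sparsity=0.05, amp=(-1.0, 1.0), rng=None):
    """Sparse reflectivity with uniformly random spike locations and amplitudes."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n_samples)
    k = max(1, int(sparsity * n_samples))
    idx = rng.choice(n_samples, size=k, replace=False)   # no minimum spacing enforced
    x[idx] = rng.uniform(amp[0], amp[1], size=k)
    return x

def make_training_pair(trace_len=512, f0=30.0, dt=0.002, rng=None):
    """Zero-pad the reflectivity to the trace length and convolve with a Ricker wavelet."""
    x = random_reflectivity(rng=rng)
    x_padded = np.pad(x, (0, trace_len - x.size))
    y = synthetic_trace(x_padded, f0=f0, dt=dt, rng=rng)  # forward model sketched in Section I
    return y, x_padded
```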
The initial hyperparameters and the optimum number of layers in the network also vary depending on the parameters of the dataset, such as the sampling interval and the wavelet frequency. We initially consider six layers for both DuRIN-1 and DuRIN-2 for training and keep appending five layers at a time to arrive at the optimum number of layers. For the training phase, we use the ADAM optimizer [72] with a fixed batch size and learning rate, and corrupt the input traces with measurement noise at a fixed signal-to-noise ratio (SNR) to ensure the robustness of the proposed models against noise in the testing data [34]. The proposed networks, DuRIN-1 and DuRIN-2, are trained on 1-D data, and operate on 1-D, 2-D, and 3-D data trace-by-trace. In the following sections, we provide experimental results on synthetic, simulated, and real data.
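A minimal training loop consistent with the setup described above (ADAM and the smooth-$\ell_1$ loss between the predicted and true reflectivity); the epoch count and learning rate are placeholders.

```python
import torch
import torch.nn as nn

def train_durin(model, loader, n_epochs=50, lr=1e-3):
    """Train an unfolded network on (noisy trace, ground-truth reflectivity) pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.SmoothL1Loss()                  # smooth-L1 training cost
    for _ in range(n_epochs):
        for y, x_true in loader:                   # mini-batches of traces and reflectivities
            optimizer.zero_grad()
            loss = criterion(model(y), x_true)
            loss.backward()
            optimizer.step()
    return model
```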
III-C Testing Phase: Synthetic Data
III-C1 Synthetic 1-D Traces
We validate the performance of DuRIN-1 and DuRIN-2 on several realizations of synthetic 1-D traces, generated with a Ricker wavelet of fixed dominant frequency, sampling interval, and amplitude increment. Table I shows the comparison of objective metrics for the proposed networks and the benchmark techniques. DuRIN-1 and DuRIN-2 recover the amplitudes with higher accuracy than the compared methods, as quantified by the objective metrics CC, RRE, and SRER, with considerably superior support recovery indicated by the PES. Figure 5 shows the corresponding illustration for a sample 1-D seismic trace from the test realizations. The figure illustrates two main advantages of the proposed networks: both DuRIN-1 and DuRIN-2 resolve closely spaced reflectors (Figure 5 (e) and (f)) and do not introduce spurious supports, unlike the benchmark techniques.
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |


Table II gives a comparison of the computational testing time over increasing numbers (R) of test realizations of synthetic 1-D traces. DuRIN-1 and DuRIN-2 require lower computational time than the benchmark techniques, over two orders of magnitude lower than FISTA, the next best technique after the proposed networks. Lower computation times are significant for reflection seismic processing, where the size of the datasets analyzed is large. We note that the lower computational testing times for DuRIN-1 and DuRIN-2 come at the expense of longer training times, where the models are trained on a large number of synthetic seismic traces.
R = number of realizations.
Method | Testing time () | |||
---|---|---|---|---|
R | R | R | R | |
BPI | ||||
SBL-EM | ||||
FISTA | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |
Figure 6 illustrates the computational advantage of the proposed DuRIN-1 and DuRIN-2 over the benchmark techniques in terms of layers/iterations. Even with few layers, the DuRIN-1 and DuRIN-2 models outperform the benchmark techniques in terms of the four objective metrics considered in this study, and their performance is further enhanced by adding layers, up to a certain depth. We observe that the performance starts deteriorating beyond that depth (Figure 6). We hypothesize that this deterioration could be attributed to the increase in the number of network parameters while keeping the size of the training dataset fixed, leading to overfitting. Future work could explore whether increasing the number of training examples allows more layers to be appended to enhance the networks' performance further rather than diminish it.


III-C2 Synthetic 2-D Wedge Models
We use synthetic wedge models to test the resolving capability of the proposed networks on thin beds [68]. Wedge models consist of two interfaces, one horizontal and one inclined, with the wedge progressively thinning to zero separation. Depending on whether the polarities of the reflection coefficients of the two interfaces are the same or opposite, we have even or odd wedge models, respectively. Further, based on the polarity being negative or positive, we obtain four possible combinations of polarities of the reflection coefficients of the wedge models (NP, PN, NN, PP). In our experimental setup, we consider wedge models in which the separation between the two interfaces reduces in fixed increments from trace to trace, with the amplitudes of the reflection coefficients held constant.
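For illustration, an odd (NP) wedge-model reflectivity section could be generated as in the following sketch; all indices, trace counts, and amplitudes are hypothetical placeholders, not the values used in our experimental setup.

```python
import numpy as np

def wedge_reflectivity(n_traces=61, n_samples=300, top=100, max_sep=60,
                       amp_top=-0.2, amp_bottom=0.2):
    """Odd (NP) wedge model: a horizontal top interface at a fixed sample index and an
    inclined bottom interface whose separation grows linearly across traces."""
    model = np.zeros((n_traces, n_samples))
    for j in range(n_traces):
        sep = round(j * max_sep / (n_traces - 1))    # thickness changes trace by trace
        model[j, top] = amp_top
        model[j, top + sep] += amp_bottom            # spikes interfere when sep == 0
    return model
```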
Table III and Figure 7 show the performance of the proposed networks in comparison with the benchmark techniques for an odd wedge model (NP), with negative reflection coefficients on the upper horizontal interface (N) and positive on the lower inclined interface (P). DuRIN-1 and DuRIN-2 outperform the other methods in terms of both amplitude and support recovery metrics (Table III). As highlighted in Figure 7, BPI, FISTA, and SBL-EM fail to resolve the reflectors once the wedge thickness falls below the tuning thickness corresponding to the wavelet frequency in our experimental setup [73]. This low resolving capability also contributes to the poorer (higher) PES scores for these techniques (Table III). On the other hand, the proposed DuRIN-1 and DuRIN-2 maintain the lateral continuity of both interfaces even below the tuning thickness.
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |


Table IV and Figure 8 show the results for an odd wedge model (PN) with positive polarity of reflection coefficients on the upper, horizontal interface (P) and negative polarity on the lower, inclined interface (N). Table IV shows the proposed DuRIN-1 and DuRIN-2 outperforming the benchmark techniques, namely, BPI, FISTA, and SBL-EM, in terms of the amplitude and support recovery metrics considered in this study. Similar to the NP wedge model, and as highlighted in Figure 8, the benchmark techniques do not accurately resolve the reflectors below the tuning thickness, whereas DuRIN-1 and DuRIN-2 preserve the lateral continuity of both interfaces of the wedge model below the tuning thickness.
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |


For the even wedge models, i.e., NN (Table V and Figure 9) and PP (Table VI and Figure 10), all techniques exhibit comparable performance in terms of the amplitude metrics CC and RRE, and resolve reflectors below the tuning thickness without any divergence from the ground-truth reflector locations. The proposed networks, namely, DuRIN-1 and DuRIN-2, outperform the benchmark techniques in terms of the SRER and PES scores.
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |


Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |


III-D Testing Phase: Marmousi2 Model
The Marmousi2 model [69] is an expansion, in both width and depth, of the original Marmousi model [74]. The model provides a simulated 2-D collection of seismic traces and is widely used in reflection seismic processing for the calibration of techniques in structurally complex settings. The model is sampled uniformly in time, with traces at a regular spatial interval. We obtained the ground-truth reflectivity profiles from the P-wave velocity and density models using (1), and convolved them with a Ricker wavelet to generate the input data, adding measurement noise to test the robustness of the proposed networks under noisy conditions.
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |
Method | CC | RRE | SRER | PES |
---|---|---|---|---|
BPI | ||||
FISTA | ||||
SBL-EM | ||||
DuRIN-1 | | | |
DuRIN-2 | | | |
Here, we present results from a region of the Marmousi2 model that contains a gas-charged sand channel [69]. Table VII shows a comparison of the performance of the proposed DuRIN-1 and DuRIN-2 with that of the benchmark techniques for the portion containing the gas-charged sand channel. The Marmousi2 model [69] has a large number of low-amplitude reflections. Additionally, in Figure 5, we observe that BPI and SBL-EM introduce spurious reflections. We attribute the low PES scores observed for BPI and SBL-EM in Table VII to such spurious supports coinciding with the low-amplitude reflections in the Marmousi2 model. We re-evaluate the objective metrics after muting reflections with amplitudes below a small fraction of the absolute maximum amplitude; the results are reported in Table VIII and Figure 11. The re-evaluation did not affect the CC, RRE, and SRER scores adversely, but, as the PES is then computed only over significant reflections, it is higher (worse) for BPI and SBL-EM, and lower (better) for DuRIN-1 and DuRIN-2. The objective metrics reported in Table VIII show superior amplitude and support recovery accuracy of the proposed networks over the benchmark methods. Figure 11 shows that DuRIN-1 and DuRIN-2, along with SBL-EM, preserve the lateral continuity of interfaces, highlighted in the insets at the edge of the sand channel. The insets also show that BPI and FISTA introduce spurious interfaces due to the interference of multiple events. DuRIN-1 and DuRIN-2 show limited recovery of the low-amplitude reflection coefficients right below the gas-charged sand channel (Figure 11 (f) and (g)), which could be investigated in future work.


III-E Testing Phase: Real Data
To validate the proposed networks on real data, we use a 3-D seismic volume from the Penobscot 3D survey off the coast of Nova Scotia, Canada [70]. We present results from a smaller region of the 3-D volume, spanning a subset of inlines and crosslines that includes the two wells of this survey [75]. Along the time axis, the region spans a fixed interval at a uniform sampling rate, and a Ricker wavelet fits the data well [75].
Figure 12 shows the observed seismic data and recovered reflectivity profiles for one Xline of the survey, with the seismic and reflectivity profiles computed from the sonic logs of one of the wells overlaid in black [75]. The inverted reflectivity profiles for BPI and FISTA are smooth and lack detail. On the other hand, the predicted reflectivity profiles generated using SBL-EM, DuRIN-1, and DuRIN-2 provide more detail for characterizing the subsurface. Additionally, DuRIN-1 and DuRIN-2 resolve closely spaced interfaces better and preserve lateral continuity, as is evident from the interfaces in Figure 12 (e) and (f). These results demonstrate the capability of the proposed networks, namely, DuRIN-1 and DuRIN-2, on real data from the field. We note that the models are trained on synthesized seismic traces before being tested on the real data.


IV Conclusions
We developed a nonconvex optimization cost for reflectivity inversion based on a weighted counterpart of the minimax concave penalty. We also developed the deep-unfolded reflectivity inversion network (DuRIN) for solving the problem under consideration. Experimental validation on synthetic, simulated, and real data demonstrated the superior resolving capability of DuRIN, especially in support recovery (represented by PES), which is crucial for characterizing the subsurface. DuRIN also preserves lateral continuity of interfaces despite being a single-trace approach. However, the trade-off between the number of layers of the network and training examples, and the suboptimal recovery of low-amplitude reflection coefficients require further attention. One could also consider data-driven prior learning [27] based on a multi-penalty formulation [64] for solving the reflectivity inversion problem.
Acknowledgment
This work is supported by the Ministry of Earth Sciences, Government of India; Centre of Excellence in Advanced Mechanics of Materials, Indian Institute of Science (IISc), Bangalore; and Science and Engineering Research Board (SERB), India. The authors would like to thank Jishnu Sadasivan for fruitful technical discussions.
References
- [1] D. W. Oldenburg, T. Scheuer, and S. Levy, “Recovery of the acoustic impedance from reflection seismograms,” Geophysics, vol. 48, no. 10, pp. 1318–1337, 1983.
- [2] Ö. Yilmaz, Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data. Society of Exploration Geophysicists, 2001.
- [3] B. Russell, “Machine learning and geophysical inversion — A numerical study,” The Leading Edge, vol. 38, no. 7, pp. 512–519, 2019.
- [4] H. L. Taylor, S. C. Banks, and J. F. McCoy, “Deconvolution with the $\ell_1$ norm,” Geophysics, vol. 44, no. 1, pp. 39–52, 1979.
- [5] P. M. Shearer, Introduction to Seismology. Cambridge Univ. Press, 2009.
- [6] A. J. Berkhout, “Least-squares inverse filtering and wavelet deconvolution,” Geophysics, vol. 42, no. 7, pp. 1369–1383, 1977.
- [7] H. W. J. Debeye and P. Van Riel, “$\ell_p$-norm deconvolution,” Geophysical Prospecting, vol. 38, no. 4, pp. 381–403, 1990.
- [8] C. Yuan and M. Su, “Seismic spectral sparse reflectivity inversion based on SBL-EM: experimental analysis and application,” J. Geophysics and Engineering, vol. 16, no. 6, pp. 1124–1138, 2019.
- [9] A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics, 2005.
- [10] L. Liu and W. Lu, “A fast linear estimator and its application on predictive deconvolution,” IEEE Geoscience and Remote Sensing Lett., vol. 12, no. 5, pp. 1056–1060, 2014.
- [11] G. Zhang and J. Gao, “Inversion-driven attenuation compensation using synchrosqueezing transform,” IEEE Geoscience and Remote Sensing Lett., vol. 15, no. 1, pp. 132–136, 2017.
- [12] R. Zhang and J. Castagna, “Seismic sparse-layer reflectivity inversion using basis pursuit decomposition,” Geophysics, vol. 76, no. 6, pp. R147–R158, 2011.
- [13] S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
- [14] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Communications on Pure and Applied Mathematics, vol. 57, no. 11, pp. 1413–1457, 2004.
- [15] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009.
- [16] D. O. Pérez, D. R. Velis, and M. D. Sacchi, “Inversion of prestack seismic data using FISTA,” Mecánica Computacional, vol. 31, no. 20, pp. 3255–3263, 2012.
- [17] D. O. Pérez, D. R. Velis, and M. D. Sacchi, “High-resolution prestack seismic inversion using a hybrid FISTA least-squares strategy,” Geophysics, vol. 78, no. 5, pp. R185–R195, 2013.
- [18] C. Li, X. Liu, K. Yu, X. Wang, and F. Zhang, “Debiasing of seismic reflectivity inversion using basis pursuit de-noising algorithm,” J. Applied Geophysics, vol. 177, p. 104028, 2020.
- [19] F. Li, R. Xie, W.-Z. Song, and H. Chen, “Optimal seismic reflectivity inversion: Data-driven $\ell_p$-loss-$\ell_q$-regularization sparse regression,” IEEE Geoscience and Remote Sensing Lett., vol. 16, no. 5, pp. 806–810, 2019.
- [20] E. J. Candès, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted $\ell_1$ minimization,” J. Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
- [21] T. Zhang, “Analysis of multi-stage convex relaxation for sparse regularization,” J. Machine Learning Research, vol. 11, no. 3, 2010.
- [22] I. Selesnick, “Sparse regularization via convex analysis,” IEEE Trans. Signal Process., vol. 65, no. 17, pp. 4481–4494, 2017.
- [23] S. J. Wright, R. D. Nowak, and M. A. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2479–2493, 2009.
- [24] C.-H. Zhang, “Nearly unbiased variable selection under minimax concave penalty,” The Annals of Statistics, vol. 38, no. 2, pp. 894–942, 2010.
- [25] J. Woodworth and R. Chartrand, “Compressed sensing recovery via nonconvex shrinkage penalties,” Inverse Problems, vol. 32, no. 7, p. 075004, 2016.
- [26] S. Yuan and S. Wang, “Spectral sparse Bayesian learning reflectivity inversion,” Geophysical Prospecting, vol. 61, no. 4, pp. 735–746, 2013.
- [27] K. J. Bergen, P. A. Johnson, M. V. de Hoop, and G. C. Beroza, “Machine learning for data-driven discovery in solid earth geoscience,” Science, vol. 363, no. 6433, 2019.
- [28] A. Adler, M. Araya-Polo, and T. Poggio, “Deep learning for seismic inverse problems: Toward the acceleration of geophysical analysis workflows,” IEEE Signal Process. Mag., vol. 38, no. 2, pp. 89–119, 2021.
- [29] W. Lewis and D. Vigh, “Deep learning prior models from seismic images for full-waveform inversion,” Proc. SEG Technical Program Expanded Abstracts, pp. 1512–1517, 2017.
- [30] A. Richardson, “Seismic full-waveform inversion using deep learning tools and techniques,” 2018, arXiv:1801.07232 [Online].
- [31] O. Ovcharenko, V. Kazei, M. Kalita, D. Peter, and T. Alkhalifah, “Deep learning for low-frequency extrapolation from multioffset seismic data,” Geophysics, vol. 84, no. 6, pp. R989–R1001, 2019.
- [32] H. Sun and L. Demanet, “Extrapolated full-waveform inversion with deep learning,” Geophysics, vol. 85, no. 3, pp. R275–R288, 2020.
- [33] M. Araya-Polo, J. Jennings, A. Adler, and T. Dahlke, “Deep-learning tomography,” The Leading Edge, vol. 37, no. 1, pp. 58–66, 2018.
- [34] Y. Kim and N. Nakata, “Geophysical inversion versus machine learning in inverse problems,” The Leading Edge, vol. 37, no. 12, pp. 894–901, 2018.
- [35] W. Wang, F. Yang, and J. Ma, “Velocity model building with a modified fully convolutional network,” Proc. SEG Technical Program Expanded Abstracts, pp. 2086–2090, 2018.
- [36] Y. Wu, Y. Lin, and Z. Zhou, “Inversionnet: Accurate and efficient seismic waveform inversion with convolutional neural networks,” Proc. SEG Technical Program Expanded Abstracts 2018, pp. 2096–2100, 2018.
- [37] F. Yang and J. Ma, “Deep-learning inversion: A next-generation seismic velocity model building method,” Geophysics, vol. 84, no. 4, pp. R583–R599, 2019.
- [38] A. Adler, M. Araya-Polo, and T. Poggio, “Deep recurrent architectures for seismic tomography,” Proc. 81st EAGE Conference and Exhibition, vol. 2019, no. 1, pp. 1–5, 2019.
- [39] V. Das, A. Pollack, U. Wollner, and T. Mukerji, “Convolutional neural network for seismic impedance inversion,” Geophysics, vol. 84, no. 6, pp. R869–R880, 2019.
- [40] M. J. Park and M. D. Sacchi, “Automatic velocity analysis using convolutional neural network and transfer learning,” Geophysics, vol. 85, no. 1, pp. V33–V43, 2020.
- [41] B. Wu, D. Meng, L. Wang, N. Liu, and Y. Wang, “Seismic impedance inversion using fully convolutional residual network and transfer learning,” IEEE Geoscience and Remote Sensing Lett., vol. 17, no. 12, pp. 2140–2144, 2020.
- [42] W. Wang and J. Ma, “Velocity model building in a crosswell acquisition geometry with image-trained artificial neural networks,” Geophysics, vol. 85, no. 2, pp. U31–U46, 2020.
- [43] R. Biswas, M. K. Sen, V. Das, and T. Mukerji, “Prestack and poststack inversion using a physics-guided convolutional neural network,” Interpretation, vol. 7, no. 3, pp. SE161–SE174, 2019.
- [44] M. Alfarraj and G. AlRegib, “Semi-supervised learning for acoustic impedance inversion,” Proc. SEG Technical Program Expanded Abstracts, pp. 2298–2302, 2019.
- [45] M. Araya-Polo, S. Farris, and M. Florez, “Deep learning-driven velocity model building workflow,” The Leading Edge, vol. 38, no. 11, pp. 872a1–872a9, 2019.
- [46] Y. Wang, Q. Ge, W. Lu, and X. Yan, “Seismic impedance inversion based on cycle-consistent generative adversarial network,” Proc. SEG Technical Program Expanded Abstracts, pp. 2498–2502, 2019.
- [47] L. Mosser, O. Dubrule, and M. J. Blunt, “Stochastic seismic waveform inversion using generative adversarial networks as a geological prior,” Mathematical Geosciences, vol. 52, no. 1, pp. 53–79, 2020.
- [48] Z. Zhang and Y. Lin, “Data-driven seismic waveform inversion: A study on the robustness and generalization,” IEEE Trans. on Geoscience and Remote sensing, vol. 58, no. 10, pp. 6900–6913, 2020.
- [49] D. P. Wipf and B. D. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. Signal Process., vol. 52, no. 8, pp. 2153–2164, 2004.
- [50] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” Proc. 27th Intl. Conf. on Machine Learning, pp. 399–406, 2010.
- [51] V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” IEEE Signal Process. Mag., vol. 38, no. 2, pp. 18–44, 2021.
- [52] D. Mahapatra, S. Mukherjee, and C. S. Seelamantula, “Deep sparse coding using optimized linear expansion of thresholds,” 2017, arXiv:1705.07290 [Online].
- [53] S. Mukherjee, D. Mahapatra, and C. S. Seelamantula, “DNNs for sparse coding and dictionary learning,” Proc. NIPS Bayesian Deep Learning Workshop, 2017.
- [54] J. Zhang and B. Ghanem, “ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing,” Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 1828–1837, 2018.
- [55] M. Borgerding, P. Schniter, and S. Rangan, “AMP-inspired deep networks for sparse linear inverse problems,” IEEE Trans. Signal Process., vol. 65, no. 16, pp. 4293–4308, 2017.
- [56] H. Sreter and R. Giryes, “Learned convolutional sparse coding,” Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Process., pp. 2191–2195, 2018.
- [57] R. Liu, S. Cheng, L. Ma, X. Fan, and Z. Luo, “Deep proximal unrolling: Algorithmic framework, convergence analysis and applications,” IEEE Trans. Image Process., vol. 28, no. 10, pp. 5013–5026, 2019.
- [58] P. K. Pokala, A. G. Mahurkar, and C. S. Seelamantula, “FirmNet: A sparsity amplified deep network for solving linear inverse problems,” Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Process., pp. 2982–2986, 2019.
- [59] Y. Li, M. Tofighi, J. Geng, V. Monga, and Y. C. Eldar, “Efficient and interpretable deep blind image deblurring via algorithm unrolling,” IEEE Trans. on Computational Imaging, vol. 6, pp. 666–681, 2020.
- [60] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, “Model-based deep learning,” 2020, arXiv:2012.08405 [Online].
- [61] P. K. Pokala, P. K. Uttam, and C. S. Seelamantula, “ConFirmNet: Convolutional FirmNet and application to image denoising and inpainting,” Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Process., pp. 8663–8667, 2020.
- [62] L. Wang, C. Sun, M. Zhang, Y. Fu, and H. Huang, “DNU: deep non-local unrolling for computational spectral imaging,” Proc. IEEE /CVF Conf. Comp. Vis. and Patt. Recog., pp. 1658–1668, 2020.
- [63] B. Tolooshams, S. Dey, and D. Ba, “Deep residual autoencoders for expectation maximization-inspired dictionary learning,” IEEE Trans. on Neural Networks and Learning Systems, 2020.
- [64] D. Jawali, P. K. Pokala, and C. S. Seelamantula, “CORNET: Composite-regularized neural network for convolutional sparse coding,” Proc. IEEE Intl. Conf. on Image Processing, pp. 818–822, 2020.
- [65] B. Tolooshams, S. Mulleti, D. Ba, and Y. C. Eldar, “Unfolding neural networks for compressive multichannel blind deconvolution,” 2021, arXiv:2010.11391 [Online].
- [66] M. A. T. Figueiredo, J. M. Bioucas-Dias, and R. D. Nowak, “Majorization–minimization algorithms for wavelet-based image restoration,” IEEE Trans. Image Process., vol. 16, no. 12, pp. 2980–2991, 2007.
- [67] S. Voronin and H. J. Woerdeman, “A new iterative firm-thresholding algorithm for inverse problems with sparsity constraints,” Applied and Computational Harmonic Analysis, vol. 35, no. 1, pp. 151–164, 2013.
- [68] W. Hamlyn, “Thin beds, tuning, and AVO,” The Leading Edge, vol. 33, no. 12, pp. 1394–1396, 2014.
- [69] G. S. Martin, R. Wiley, and K. J. Marfurt, “Marmousi2: An elastic upgrade for Marmousi,” The Leading Edge, vol. 25, no. 2, pp. 156–166, 2006.
- [70] O. S. R., dGB Earth Sciences, “Penobscot 3D - Survey,” 2017, data retrieved from https://terranubis.com/datainfo/Penobscot.
- [71] D. Freedman, R. Pisani, and R. Purves, Statistics (international student edition). WW Norton & Company, New York, 2007.
- [72] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” 2014, arXiv:1412.6980 [Online].
- [73] H. Chung and D. C. Lawton, “Frequency characteristics of seismic reflections from thin beds,” Canadian J. Exploration Geophysics, vol. 31, no. 1, pp. 32–37, 1995.
- [74] R. Versteeg, “The Marmousi experience: velocity model determination on a synthetic complex data set,” The Leading Edge, vol. 13, no. 9, pp. 927–936, 1994.
- [75] E. Bianco, “Geophysical tutorial: Well-tie calculus,” The Leading Edge, vol. 33, no. 6, pp. 674–677, 2014.