Inversion of Time-Lapse Surface Gravity Data For D
1 TEEP Research and Technology USA, Houston, TX 77002, USA
2 TotalEnergies, France
1 Introduction
Greenhouse Gas (GHG) reduction commitments aim to address the increase of
atmospheric CO2 concentration through measures implemented on a large scale.
They include technologies such as fossil fuel consumption reduction (improvement
of energy efficiency, adoption of new energy sources), expansion of absorption
sources through afforestation/reforestation, and carbon dioxide capture and storage (CCS).
One of the CCS technologies currently being deployed worldwide is CO2 ge-
ological storage. The technology involves separating and capturing CO2 emitted
from large-scale fixed sources, such as cement, petrochemical or coal-fired power
plants, and storing it in saline formations located deep underground (hereinafter
referred to as "aquifers"). It holds promise as the most practical, immediate-effect,
and near-term technology because a range of technologies accumulated in
fields such as oil-well drilling, underground storage of natural gas, and Enhanced
Oil Recovery (EOR) are readily adaptable.
Ongoing monitoring of these geological storage sites is mandated by regula-
tory authorities and will need to demonstrate volume storage integrity over long
2 A. Celaya et al.
borehole gravity data. However, borehole data is highly localized and does not
give a global picture of the storage site.
2 Data Preparation
Located 60 km offshore the west coast of Norway, the Johansen formation is
a candidate CO2 storage site whose theoretical capacity exceeds 1 Gt of CO2. It
consists of a roughly 100 m thick aquifer that extends up to 100 km in the
north-south direction and is 60 km wide. Its depth ranges from 2200 to 3100 m
below sea level, offering ideal pressure and temperature conditions for injecting
CO2 in a supercritical state.
As part of the MatMoRA project, a corresponding reservoir model is made
available online by SINTEF [15]. Starting from this initial setup, we use geo-
statistics to generate 500 new geological realizations that all differ in terms of
porosity and permeability. In this work, we relied on a Fast Gaussian procedure
(as detailed in [28]), whose main parameters are given in Table 1.
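To illustrate this kind of geostatistical sampling, the sketch below generates correlated porosity and log-permeability fields by smoothing white noise, a simple stand-in for the Fast Gaussian procedure of [28]. The grid shape, the smoothing length in grid cells, and the porosity and log-permeability means are illustrative assumptions; the standard deviations, bounds, and 0.30 cross-correlation follow Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correlated_field(shape, corr_len, rng):
    """Smooth white noise to obtain a spatially correlated Gaussian field,
    then re-standardize it to zero mean and unit variance."""
    f = gaussian_filter(rng.standard_normal(shape), sigma=corr_len)
    return (f - f.mean()) / f.std()

def generate_realization(shape=(64, 64, 16), corr_len=5.0, rho=0.30, seed=0):
    rng = np.random.default_rng(seed)
    z_phi = correlated_field(shape, corr_len, rng)
    z_ind = correlated_field(shape, corr_len, rng)
    # Mix an independent field into the porosity field so that the two
    # latent fields have correlation coefficient rho (Table 1: 0.30).
    z_k = rho * z_phi + np.sqrt(1.0 - rho**2) * z_ind
    # Porosity: Std 0.03, clipped to [0.10, 0.40]; the mean of 0.25 is an
    # assumption (the porosity mean is not stated in the text).
    porosity = np.clip(0.25 + 0.03 * z_phi, 0.10, 0.40)
    # Log-permeability: Std 2.00, clipped to [-5.0, 10.0]; the mean of 2.5
    # is likewise an assumption.
    log_perm = np.clip(2.5 + 2.0 * z_k, -5.0, 10.0)
    return porosity, log_perm

# Draw one illustrative realization.
phi, log_k = generate_realization()
```

Each call with a different seed yields a new realization that honors the same statistics, which is how a set of 500 realizations can be built.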
For each realization, we carry out a fluid flow simulation to model the evolution
of the CO2 saturation plume within the reservoir. The scenario consists of a
100-year period of CO2 injection at a constant rate of 14,400 m3/day, followed by
a 400-year period of migration. We perform these simulations using the MRST
toolbox [25], which implements a Vertical Equilibrium model.
Parameter                                        Value
Porosity              Std                        0.03
                      Lower Bound                0.10
                      Upper Bound                0.40
Permeability (log)    Std                        2.00
                      Lower Bound                -5.0
                      Upper Bound                10.0
Correlation Length    Mean                       26.0
                      Std                        2.00
Correlation Porosity-Permeability                0.30
Table 1: Parameters for geostatistics simulation of the Johansen formation.
Assuming that the only changes in density occur within the reservoir (with
CO2 replacing brine), one can then write the time-lapse bulk density evolution
as:

\[ \Delta\rho = \phi\, \Delta S_{\mathrm{CO_2}} \left( \rho_{\mathrm{CO_2}} - \rho_{\mathrm{brine}} \right) \tag{1} \]

where φ is the porosity, ∆S_CO2 is the change in CO2 saturation, and ρ_CO2 and ρ_brine
stand for the CO2 and brine densities, respectively (see [17]).
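Equation (1) is straightforward to evaluate cell by cell; the sketch below does so with NumPy, where the default CO2 and brine densities are illustrative placeholder values rather than the ones used in the paper.

```python
import numpy as np

def time_lapse_density_change(porosity, delta_s_co2,
                              rho_co2=700.0, rho_brine=1030.0):
    """Equation (1): bulk density change when CO2 displaces brine in the
    pore space. The default densities (kg/m^3) are illustrative assumptions."""
    return porosity * delta_s_co2 * (rho_co2 - rho_brine)

# Example: 20% porosity and a 50% saturation change give a negative density
# change, since supercritical CO2 is lighter than brine.
drho = time_lapse_density_change(0.2, 0.5)
```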
We select a single time step from each simulation to create our final dataset.
For the first 100 simulations, we randomly select a time step from the first
100 years of injection. We randomly select time steps for the remaining 400
simulations across all 500 years. Figure 1 shows the distribution of the time
steps in our final dataset.
Fig. 1: Distribution of time steps for our final training set of 500 CO2 plume
models. For the first 100 models, we randomly select time steps from the first
100 years of injection. For the remaining models, we randomly select time steps
from all 500 years of simulation.
To reduce the computational cost (i.e., memory and time) and to accommo-
date a larger batch size during training for our DL-based methods, we resample
each 3D plume volume from a size of 440×530×145 to a size of 128×128×128.
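This kind of resampling can be sketched with SciPy's linear interpolation; the smaller shapes in the example are just for illustration.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_volume(vol, target=(128, 128, 128), order=1):
    """Resample a 3D volume to the target grid size; order=1 gives linear
    interpolation, used both to shrink the training volumes and to bring
    predictions back to the original grid."""
    factors = [t / s for t, s in zip(target, vol.shape)]
    return zoom(vol, factors, order=order)

# Shrink an illustrative volume, then expand it back to its original grid.
small = resample_volume(np.ones((44, 53, 29)), target=(16, 16, 16))
restored = resample_volume(small, target=(44, 53, 29))
```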
Time-Lapse Gravity Inversion via Deep Learning 5
3 Methods
In this section, we first introduce traditional gravity modeling since it is used to
generate the surface gravity maps that correspond to the CO2 plumes described
in Section 2. We then describe conventional L2 inversion in Section 3.2. Section
3.3 outlines our proposed DL-based approach for surface gravity inversion.
Fig. 2: Computed 2D change in surface gravity map (left) for a given 3D differ-
ence in density map (right).
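For context, the forward model that produces such maps can be approximated by summing point-mass contributions from every cell of the density-change volume; this sketch is a simplified stand-in for the modeling actually used (prism-based formulas are more accurate near the sources).

```python
import numpy as np

G_NEWTON = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_change_map(delta_rho, cell_volume, cell_centers, stations):
    """Vertical gravity change (in microGal) at each surface station from a
    density-change model, treating each cell as a point mass. Depth is the
    positive-z direction, so a mass deficit below a station gives gz < 0."""
    gz = np.empty(len(stations))
    for i, s in enumerate(stations):
        r = cell_centers - s                 # station-to-cell vectors
        dist = np.linalg.norm(r, axis=1)
        gz[i] = G_NEWTON * np.sum(delta_rho * cell_volume * r[:, 2] / dist**3)
    return gz * 1e8                          # m/s^2 to microGal (1 uGal = 1e-8 m/s^2)

# Example: one cell with a density deficit 2500 m directly below one station.
gz = gravity_change_map(np.array([-33.0]), 100.0**3,
                        np.array([[0.0, 0.0, 2500.0]]),
                        np.array([[0.0, 0.0, 0.0]]))
```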
by subtracting its mean and dividing by its standard deviation (i.e., z-score
normalization).
Due to the ill-posed nature of gravity inversion (i.e., several anomalies can
produce the same gravity response), it is not sufficient to solve (3) in an un-
constrained setting. Figure 3 illustrates the results of unconstrained L2 inver-
sion with a null model as an initial guess. While the predicted density anomaly
produces a gravity response that fits the original data, it does not match the
true density anomaly. To overcome this challenge, we constrain (3) so that only
cells within the known reservoir mask are updated during minimization. We use
GEOSX for all L2 inversions [36].
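Although we rely on GEOSX, the constrained minimization itself can be sketched in a few lines: restrict the forward operator's columns to the reservoir cells and solve the resulting least-squares problem. The dense matrix A below is a hypothetical stand-in for the true gravity kernel.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def constrained_l2_inversion(A, d, mask, damp=0.0):
    """Solve min_m ||A m - d||_2^2 with updates allowed only inside the
    reservoir mask; cells outside the mask stay at the null model (zero)."""
    m_in = lsqr(A[:, mask], d, damp=damp)[0]   # unknowns restricted to mask
    m = np.zeros(A.shape[1])
    m[mask] = m_in
    return m

# Toy example: only the third cell is inside the reservoir mask.
A = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]])
m = constrained_l2_inversion(A, np.array([6.0, 0.0]),
                             np.array([False, False, True]))
```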
Fig. 3: (Top) True CO2 plume. (Bottom) Predicted CO2 plume from uncon-
strained L2 inversion with a null model as an initial guess.
Loss Function
Segmentation Loss. For the segmentation branch of our architecture, we use the
Generalized Dice Loss (GDL) function proposed by Sudre et al. [38]. This loss
function improves upon the original Dice loss proposed by Milletari et al. [30]
by adding a weighting term for each segmentation class. Adding weight terms
produced significantly better Dice scores for highly imbalanced segmentation
problems. The GDL loss is given by the following:
\[ \mathcal{L}_{\mathrm{gdl}} = 1 - 2\,\frac{\sum_{k=1}^{C} w_k \sum_{i=1}^{N} T_i^k P_i^k}{\sum_{k=1}^{C} w_k \sum_{i=1}^{N} \left[ (T_i^k)^2 + (P_i^k)^2 \right]}, \]
where C denotes the number of segmentation classes, N denotes the total
number of pixels (or voxels in the 3D case), P_i^k is the i-th voxel in the predicted
segmentation mask for class k, and T_i^k is the i-th voxel in the ground truth mask
for class k. The term w_k is the weighting term for the k-th class and is given by

\[ w_k = C \left( \frac{1/N_k}{\sum_{j=1}^{C} 1/N_j} \right), \]

where N_k is the total number of pixels belonging to class k over the entire
dataset. Note that the weights w_k are pre-computed and remain constant
throughout training.
In our case, the number of segmentation classes equals two: background and
foreground. The computed class weights are approximately 0.003 and 1.997 for
the background and foreground classes, respectively.
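A NumPy sketch of the weight computation and of the GDL itself, written for one-hot targets and predictions of shape (C, N):

```python
import numpy as np

def gdl_class_weights(class_voxel_counts):
    """w_k = C * (1/N_k) / sum_j(1/N_j): inverse-frequency weights that
    sum to the number of classes C."""
    inv = 1.0 / np.asarray(class_voxel_counts, dtype=float)
    return len(inv) * inv / inv.sum()

def generalized_dice_loss(T, P, w):
    """Generalized Dice Loss for targets T and predictions P of shape (C, N)."""
    num = np.sum(w * np.sum(T * P, axis=1))
    den = np.sum(w * np.sum(T**2 + P**2, axis=1))
    return 1.0 - 2.0 * num / den

# A rare foreground class receives a much larger weight than the background;
# these voxel counts are illustrative, not the paper's.
w = gdl_class_weights([999_000, 1_000])

# A perfect prediction drives the loss to zero.
perfect = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
loss_perfect = generalized_dice_loss(perfect, perfect, gdl_class_weights([2, 1]))
```

With the illustrative counts above, the weights come out near 0.002 and 1.998, mirroring the imbalance reported in the text.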
Input conversion:
  Convert the 2D input to a 3D cube with Conv2D(128, (1×1))
Regression branch:
  Add first and last resize layers
  Repeat twice: Conv3D(16, (3×3×3)), Batch Normalization, ReLU
  Ends with: Conv3D(1, (1×1×1))
Segmentation branch:
  Copy output of Decoder(16)
  Repeat twice: Conv3D(16, (3×3×3)), Batch Normalization, ReLU
  Ends with: Conv3D(1, (1×1×1)), Sigmoid
Table 2: Implementation details for each stage in our proposed U-Net architecture.
We first resize the 2D input and convert it to a 3D volume via pointwise
convolution. This volume is the input to a standard 3D U-Net described in the
center column. The details of autoencoder regularization are shown in the rightmost
column.
Regression Loss. For the regression branch of our architecture, we use the mean
squared error loss function, which is given by
\[ \mathcal{L}_{\mathrm{reg}} = \frac{1}{N}\,\lVert \rho_T - \rho_P \rVert_2^2, \]
where ρT and ρP are the ground truth and predicted density maps, respectively.
Autoencoder Loss. For the autoencoder branch of our architecture, we, again,
use the mean squared error loss function. This loss is given by
\[ \mathcal{L}_{\mathrm{ae}} = \frac{1}{N}\,\lVert g_T - g_P \rVert_2^2, \]
where gT and gP are the ground truth and reconstructed surface gravity maps,
respectively.
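Both branch losses are plain mean squared errors, and the three losses are combined into a single training objective; the equal weighting in the sketch below is an assumption, as the paper's exact weighting is not restated here.

```python
import numpy as np

def mse_loss(truth, pred):
    """(1/N)||truth - pred||_2^2, used for both the regression (density)
    and autoencoder (gravity reconstruction) branches."""
    diff = np.ravel(truth) - np.ravel(pred)
    return float(diff @ diff) / diff.size

def composite_loss(l_gdl, l_reg, l_ae, w_gdl=1.0, w_reg=1.0, w_ae=1.0):
    """Weighted sum of the three branch losses; the unit weights are
    placeholders, not the paper's values."""
    return w_gdl * l_gdl + w_reg * l_reg + w_ae * l_ae
```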
Training and Testing Protocols We initialize the first layer in our modified
U-Net architecture with 16 feature maps and use the Adam optimizer [22]. The
initial learning rate is set at 0.001 with a cosine decay schedule with restarts [29].
During training, we use a batch size of eight. Random flips and additive Gaussian
noise are applied to each batch as data augmentation. To evaluate the validity
of our predicted inversions, we utilize the following metrics:
– Mean Squared Error (kg/m3 ) - The mean squared error in kg/m3 between
the predicted and true density anomalies.
– Mean Squared Error (µGal) - The mean squared error in µGal between the
input gravity map and the gravity response from the predicted anomaly.
– R-Squared - The R-squared coefficient between the predicted and true den-
sity anomalies.
– Dice Similarity Coefficient - A measure of overlap between the non-zero
masks of the predicted and true density anomalies. The Dice similarity co-
efficient (Dice) ranges from zero to one, where one represents a perfect pre-
diction [14].
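The last two metrics can be sketched directly from their definitions:

```python
import numpy as np

def r_squared(truth, pred):
    """Coefficient of determination between true and predicted anomalies."""
    ss_res = np.sum((truth - pred) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def dice_coefficient(truth, pred):
    """Overlap between the non-zero masks of the true and predicted
    anomalies; a value of one is a perfect prediction."""
    t, p = truth != 0, pred != 0
    return 2.0 * np.sum(t & p) / (np.sum(t) + np.sum(p))
```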
Unless otherwise specified, all experiments utilize the same 90/10 train-test
split with 5% of the training set used as a validation set. This split results
in a training set with 450 gravity/plume pairs and a test set with 50 pairs.
Our models are implemented in Python using TensorFlow (v2.8.0) and trained
on four NVIDIA A100 GPUs [8]. We leave all other hyperparameters at their
default values.
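The cosine decay schedule with restarts [29] can be written down explicitly; the restart period and growth factor below are illustrative, as the paper only specifies the initial rate of 0.001.

```python
import math

def sgdr_learning_rate(step, initial_lr=1e-3, first_period=1000,
                       t_mult=2.0, min_lr=0.0):
    """SGDR: cosine decay from initial_lr to min_lr within each period;
    the rate resets at every restart and the period grows by t_mult."""
    period = first_period
    while step >= period:           # locate the current restart cycle
        step -= period
        period = int(period * t_mult)
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / period))
    return min_lr + (initial_lr - min_lr) * cosine
```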
At test time, our DL-based inversion produces predictions that are of size
128×128×128. To produce our final prediction, we resample this output to the
original grid resolution of 440×530×145 via linear interpolation.
4 Results
4.1 L2 Inversion
Using the methods described in Section 3.2, we invert the 50 surface gravity
maps in the test set described in Section 3.3. Table 3 reports the accuracy of these
inversions using the metrics described in Section 3.3. Figure 5 illustrates two
predicted CO2 plumes using L2 inversion.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.67 ± 0.39     0.72     0.35        0.93
MSE (µGal)      0.00 ± 0.00     0.00     0.00        0.00
R-Squared       -0.25 ± 0.73    -0.01    -0.44       0.20
Dice            0.44 ± 0.06     0.45     0.39        0.48
Table 3: Accuracy of L2 inversion of surface gravity data.
Fig. 5: (Left) True CO2 plume. (Right) Predicted CO2 plume using constrained
L2 inversion.
Fig. 6: (Top left) Learning curves for segmentation loss. (Top right) Learning
curves for regression loss. (Bottom left) Learning curves for autoencoder loss.
(Bottom right) Learning curves for composite loss. All curves are plotted on
a logarithmic scale. In each case, we see convergence in both the training and
validation curves, which indicates that our DL approach successfully learned a
mapping from surface gravity to 3D subsurface CO2 plumes.
To assess the accuracy of our trained DL model, we, again, invert the 50
surface gravity maps in our fixed test set and compute the metrics described
in Section 3.3 for the predicted CO2 plumes. These metrics are shown in Table
4. In Table 4 we see that our DL approach outperforms L2 inversion in terms
of plume misfit (i.e., MSE (kg/m3 ) and R-Squared) and plume geometry (i.e.,
Dice), but does not outperform L2 inversion in terms of data misfit (i.e., MSE
(µGal)). This latter underperformance can be explained by the fact that our DL
model does not attempt to fit the gravity response of the predicted plume to the
original data. Instead, our proposed DL approach directly fits the predicted CO2
plume to the true CO2 plume. Figure 7 illustrates two predicted CO2 plumes
using DL-based inversion.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.26 ± 0.15     0.27     0.14        0.36
MSE (µGal)      0.47 ± 0.89     0.13     0.03        0.50
R-Squared       0.68 ± 0.17     0.73     0.62        0.80
Dice            0.78 ± 0.03     0.79     0.76        0.80
Table 4: Accuracy of deep learning-based inversion of surface gravity data. Here
we see that our proposed DL method outperforms L2 inversion in terms of CO2
plume error but does not match the gravity response of the predicted plume
with the observed data.
Fig. 7: (Left) True CO2 plume. (Right) Predicted CO2 plume using deep learning.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.27 ± 0.16     0.26     0.14        0.35
MSE (µGal)      0.43 ± 1.12     0.07     0.01        0.24
R-Squared       0.69 ± 0.22     0.74     0.66        0.80
Dice            0.75 ± 0.05     0.76     0.73        0.78
Table 5: Accuracy of deep learning-based inversion of surface gravity data using
five-fold cross-validation.
a large number of grid cells within the reservoir mask with small non-zero values
to fit the gravity response with the original data. This process produces non-
zero masks much larger than the true non-zero mask for the CO2 plume, which
degrades the Dice metric. As a postprocessing step, we can apply a threshold
to the combined L2-DL prediction to improve the Dice metric. We found via a
grid search that the optimal threshold is -7 kg/m3 . This threshold improves the
mean Dice coefficient from 0.41 to 0.62.
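One way to realize this cutoff, under the reading that cells whose predicted density decrease is weaker than 7 kg/m3 are zeroed out (the exact convention is our assumption):

```python
import numpy as np

def threshold_plume(delta_rho, threshold=-7.0):
    """Keep only cells with a density change at or below the threshold
    (CO2 replacing brine makes the change negative); weaker cells are
    reset to zero before the non-zero Dice mask is formed."""
    return np.where(delta_rho <= threshold, delta_rho, 0.0)

# Cells with changes weaker than the -7 kg/m3 threshold are removed.
out = threshold_plume(np.array([-10.0, -3.0, 0.0]))
```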
Figure 8 illustrates two predicted CO2 plumes using our combined L2-DL
inversion. Visually, there is little difference between the smaller plume shown at
the top of the figure and the original DL prediction, but our combined L2-DL
method appears to improve the prediction performance for larger CO2 plumes.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.13 ± 0.09     0.11     0.07        0.17
MSE (µGal)      0.00 ± 0.00     0.00     0.00        0.00
R-Squared       0.89 ± 0.06     0.90     0.86        0.94
Dice            0.41 ± 0.09     0.43     0.38        0.47
Table 6: Accuracy of combined L2-Deep Learning approach for gravity inversion.
Our combined approach produces predictions whose gravity response matches
the original observation and further reduces the model misfit.
Fig. 8: (Left) True CO2 plume. (Right) Predicted CO2 plume using deep learning
prediction as an initial guess for L2 inversion.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.26 ± 0.15     0.26     0.15        0.33
MSE (µGal)      0.64 ± 1.19     0.24     0.05        0.77
R-Squared       0.79 ± 0.07     0.81     0.78        0.83
Dice            0.77 ± 0.06     0.78     0.76        0.80
Table 7: Accuracy of hybrid training approach for gravity inversion. We generally
see similar results to our pure DL-driven approach to inversion. The only metric
that benefits is the R-Squared coefficient.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.25 ± 0.15     0.25     0.14        0.32
MSE (µGal)      0.43 ± 0.85     0.13     0.02        0.45
R-Squared       0.80 ± 0.08     0.82     0.79        0.84
Dice            0.73 ± 0.07     0.75     0.72        0.76
Table 8: Accuracy of hybrid training approach for gravity inversion using transfer
learning with converged pure DL-driven model. We see that our data misfit
slightly improves (i.e., MSE (µGal)) compared to training our physics workflow
from scratch and our pure DL-based approach.
our time-dependent network using the same protocols described in Section 3.3.
Table 10 shows the results of training a time-dependent model. Here we see that
our time-dependent DL approach can successfully recover our test cases with
high accuracy in terms of model and data misfit.
Metric          Mean ± Std      Median   25th Pct.   75th Pct.
MSE (kg/m3)     0.10 ± 0.04     0.12     0.06        0.13
MSE (µGal)      0.04 ± 0.03     0.03     0.01        0.05
R-Squared       0.92 ± 0.01     0.93     0.92        0.93
Dice            0.94 ± 0.01     0.95     0.94        0.95
Table 10: Results of DL-based inversion with time-dependence. Here we see that
our time-dependent DL approach can successfully recover our test cases with
high accuracy in terms of model and data misfit.
method, we change our target from density to saturation. We train our DL-based
inversion using 2D surface gravity maps (500m sensor resolution) as input and
3D subsurface saturation as our target. The results of this inversion are shown
in Table 12, where we see that our DL-based approach yields similar results
to inversion with changes in density as the target. This similar performance
shows that DL-based inversion can map changes in surface gravity to multiple
subsurface physical property models.
Metric          Mean ± Std              Median    25th Pct.   75th Pct.
MSE (su)        6.3e-05 ± 3.3e-05       7.4e-05   3.3e-05     8.5e-05
R-Squared       0.70 ± 0.16             0.76      0.68        0.80
Dice            0.59 ± 0.05             0.59      0.56        0.62
Table 12: Accuracy of DL-based inversion of surface gravity data with 3D saturation
models as the target. We see that our DL-based approach yields similar
results to inversion with changes in density as the target. This similar performance
shows that DL-based inversion can map changes in surface gravity to
multiple subsurface physical property models.
5 Discussion
The results introduced above indicate that our DL-based method provides an
effective approach for the inversion of surface gravity data. In particular, the
DL-driven approach successfully detects and recovers a variety of realistic-looking
plumes that vary significantly in shape and exhibit non-trivial density distributions.
Our DL-driven inversion demonstrates improvements over the widely used
conventional L2 inversion in the following aspects: our custom U-Net successfully
inverts surface gravity data in a completely unconstrained environment.
networks (GANs) may also help extend our DL-based inversion’s generalization
power. Yen et al. use GANs to enrich their training data in [40] and improve the
performance of their segmentation model for seismic salt segmentation. Transfer
learning presents an additional mitigation strategy for the lack of diversity in a
training dataset. Using a model trained on the Johansen formation, we can use
the weights of this model as an initial guess to train a model on a much smaller
training set from another source. El Zini et al. and Yen et al. use transfer learn-
ing in their work to improve the performance of their DL models and improve
their generalization power [16], [40]. Transfer learning achieves faster conver-
gence compared to random initialization of networks [18]. This speed-up may
also mitigate the computational bottlenecks associated with DL-based inversion
by dramatically decreasing the time required to train a model.
A recent study by Colombo et al. suggests a practical hybrid DL-aided geo-
physical inversion approach that iteratively minimizes model and data misfit [11].
This hybrid approach benefits from data-driven DL-inversion and physics-based
inversion (i.e., least squares optimization). Physics-informed DL inversion is
emerging as a viable approach for practical geophysical inversion [39]. How-
ever, the similarity between our physics-driven workflow and pure DL-driven
results indicates that surface gravity inversion may not benefit from a hybrid
approach. One possible explanation for this similarity is that gravity inversion
is an ill-posed problem. Good density model fit does not necessarily correspond
to good data fit and vice versa. During training, our workflow can converge in
model misfit but does not converge in our forward model loss. The computa-
tional cost of appending our forward gravity model to our DL model during
training increases the time required for convergence. Without transfer learning,
our physics-driven workflow takes roughly four days to converge. With transfer
learning, that time decreases to eight hours. In either case, we see a considerable
slowdown compared to our pure DL-based approach, which takes about three
hours to converge. Improving our proposed physics-driven workflow in terms of
accuracy and computational cost is a topic of ongoing and future work.
Our time-dependent results indicate that DL-based inversion can successfully
incorporate time-series gravity data for monitoring sequestered CO2 . However,
our results are only tested on a single simulation of the Johansen formation.
Further testing of our time-dependent method on multiple simulations of the
same formation is required before we can make rigorous conclusions about the
accuracy of our proposed method. From a computational point of view, adding
our LSTM network to the training pipeline does not significantly affect the time
or memory required for training our network.
As mentioned earlier, the benefits of DL-based inversion are limited from a
computational perspective. DL-based inversion requires a large amount of train-
ing data (simulated, recorded, or both) and computing infrastructure that can
support training DL models. In the absence of both, conventional inversion
methods like constrained L2 inversion make more sense for practitioners. However,
if users have the required data and computing infrastructure and are willing to
pay the upfront costs of data generation and DL training, then the
benefits of DL-based inversion are clear. Our results indicate that our pure DL-
driven and combined L2-DL approaches produce more accurate density model
reconstructions from surface gravity data. While the results from our physics-
driven workflow and time-dependent training are promising, further work is re-
quired to improve the computational cost (i.e., physics-driven workflow) or test
the method’s robustness (i.e., time-dependent training).
6 Conclusions
References
1. Machine Learning Method to Determine Salt Structures from Gravity Data. SPE
Annual Technical Conference and Exhibition (October 2020). https://doi.org/10.
2118/201424-MS
2. Adler, A., Araya-Polo, M., Poggio, T.: Deep learning for seismic inverse problems:
Toward the acceleration of geophysical analysis workflows. IEEE Signal Processing
Magazine 38(2), 89–119 (2021). https://doi.org/10.1109/MSP.2020.3037429
3. Alyousuf, T., Li, Y., Krahenbuhl, R.: Machine learning inversion of time-lapse
three-axis borehole gravity data for CO2 monitoring, pp. 3099–3103 (2022).
https://doi.org/10.1190/image2022-3745388.1, https://library.seg.org/doi/abs/10.
1190/image2022-3745388.1
4. Araya-Polo, M., Jennings, J., Adler, A., Dahlke, T.: Deep-learning tomography.
The Leading Edge 37(1), 58–66 (2018). https://doi.org/10.1190/tle37010058.1,
https://doi.org/10.1190/tle37010058.1
5. Bakas, S., et al.: Identifying the best machine learning algorithms for brain tumor
segmentation, progression assessment, and overall survival prediction in the brats
challenge. arXiv preprint arXiv:1811.02629 (2018)
6. Boulanger, O., Chouteau, M.: Constraints in 3d gravity inversion. Geo-
physical Prospecting 49(2), 265 – 280 (2001). https://doi.org/10.1046/j.1365-
2478.2001.00254.x
7. Chen, J., Schiek-Stewart, C., Lu, L., Witte, S., Eres Guardia, K., Menapace, F.,
Devarakota, P., Sidahmed, M.: Machine learning method to determine salt struc-
tures from gravity data. In: SPE Annual Technical Conference and Exhibition.
OnePetro (2020)
8. Chollet, F., et al.: Keras (2015), https://www.keras.io/, software available from
keras.org
9. Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3d u-net:
learning dense volumetric segmentation from sparse annotation. In: International
conference on medical image computing and computer-assisted intervention. pp.
424–432. Springer (2016)
10. Colombo, D., Li, W., Sandoval-Curiel, E., McNeice, G.W.: Deep-learning elec-
tromagnetic monitoring coupled to fluid flow simulators. GEOPHYSICS 85(4),
WA1–WA12 (2020). https://doi.org/10.1190/geo2019-0428.1, https://doi.org/10.
1190/geo2019-0428.1
11. Colombo, D., Turkoglu, E., Li, W., Sandoval-Curiel, E., Rovetta, D.: Physics-
driven deep-learning inversion with application to transient electromagnet-
ics. GEOPHYSICS 86(3), E209–E224 (2021). https://doi.org/10.1190/geo2020-
0760.1, https://doi.org/10.1190/geo2020-0760.1
12. De Groot-Hedlin, C., Constable, S.: Occam’s inversion to generate smooth, two-
dimensional models from magnetotelluric data. Geophysics 55, 1613–1624 (12
1990). https://doi.org/10.1190/1.1442813
13. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet:
A large-scale hierarchical image database. In: 2009 IEEE Conference
on Computer Vision and Pattern Recognition. pp. 248–255 (2009).
https://doi.org/10.1109/CVPR.2009.5206848
14. Dice, L.R.: Measures of the amount of ecologic association between species. Ecol-
ogy 26(3), 297–302 (1945). https://doi.org/10.2307/1932409, https://esajournals.
onlinelibrary.wiley.com/doi/abs/10.2307/1932409
15. Eigestad, G., Dahle, H., B., H., Riis, F., Johansen, W., Øian, E.: Geological
modeling and simulation of CO2 injection in the Johansen formation. https:
//www.sintef.no/projectweb/matmora/downloads/johansen/
16. El Zini, J., Rizk, Y., Awad, M.: A deep transfer learning framework
for seismic data analysis: A case study on bright spot detection. IEEE
Transactions on Geoscience and Remote Sensing 58(5), 3202–3212 (2020).
https://doi.org/10.1109/TGRS.2019.2950888
17. Gasperikova, E., Appriou, D., Bonneville, A., Feng, Z., Huang, L., Gao, K.,
Yang, X., Daley, T.: Sensitivity of geophysical techniques for monitoring sec-
ondary co2 storage plumes. International Journal of Greenhouse Gas Control
114, 103585 (2022). https://doi.org/https://doi.org/10.1016/j.ijggc.2022.103585,
https://www.sciencedirect.com/science/article/pii/S175058362200007X
18. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. Adaptive computation
and machine learning, MIT Press (2016), https://books.google.co.in/books?id=
Np9SDQAAQBAJ
19. Jin, K.H., McCann, M.T., Froustey, E., Unser, M.: Deep convolutional neural net-
work for inverse problems in imaging. IEEE Transactions on Image Processing
26(9), 4509 – 4522 (2017). https://doi.org/10.1109/TIP.2017.2713099, cited by:
1004
20. Karras, T., Laine, S., Aila, T.: A style-based generator architecture for generative
adversarial networks. In: Proceedings of the IEEE/CVF conference on computer
vision and pattern recognition. pp. 4401–4410 (2019)
21. Kim, Y., Nakata, N.: Geophysical inversion versus machine learn-
ing in inverse problems. The Leading Edge 37(12), 894–901 (2018).
https://doi.org/10.1190/tle37120894.1, https://doi.org/10.1190/tle37120894.1
22. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980 (2014)
23. Li, S., Liu, B., Ren, Y., Chen, Y., Yang, S., Wang, Y., Jiang, P.: Deep-learning
inversion of seismic data. IEEE Transactions on Geoscience and Remote Sensing
58(3), 2135–2149 (2020). https://doi.org/10.1109/TGRS.2019.2953473
24. Li, Y., Oldenburg, D.W.: 3-d inversion of gravity data. GEOPHYSICS 63(1), 109–
119 (1998). https://doi.org/10.1190/1.1444302, https://doi.org/10.1190/1.1444302
25. Lie, K.A.: An Introduction to Reservoir Simulation Using MATLAB/GNU Octave:
User Guide for the MATLAB Reservoir Simulation Toolbox (MRST). Cambridge
University Press (2019). https://doi.org/10.1017/9781108591416
26. Liu, B., Guo, Q., Li, S., Liu, B., Ren, Y., Pang, Y., Guo, X., Liu,
L., Jiang, P.: Deep learning inversion of electrical resistivity data. IEEE
Transactions on Geoscience and Remote Sensing 58(8), 5715–5728 (2020).
https://doi.org/10.1109/TGRS.2020.2969040
27. Liu, B., Guo, Q., Li, S., Liu, B., Ren, Y., Pang, Y., Guo, X., Liu,
L., Jiang, P.: Deep learning inversion of electrical resistivity data. IEEE
Transactions on Geoscience and Remote Sensing 58(8), 5715–5728 (2020).
https://doi.org/10.1109/TGRS.2020.2969040
28. Lorentzen, R.J., Luo, X., Bhakta, T., Valestrand, R.: History Matching the
Full Norne Field Model Using Seismic and Production Data. SPE Journal
24(04), 1452–1467 (01 2019). https://doi.org/10.2118/194205-PA, https://doi.org/
10.2118/194205-PA
29. Loshchilov, I., Hutter, F.: SGDR: Stochastic gradient descent with warm
restarts. In: International Conference on Learning Representations (2017), https:
//openreview.net/forum?id=Skq89Scxx
30. Milletari, F., Navab, N., Ahmadi, S.A.: V-net: Fully convolutional neu-
ral networks for volumetric medical image segmentation. In: 2016 Fourth
International Conference on 3D Vision (3DV). pp. 565–571 (2016).
https://doi.org/10.1109/3DV.2016.79
31. Myronenko, A.: 3d mri brain tumor segmentation using autoencoder regularization.
In: BrainLes@MICCAI (2018)
32. Nabighian, M., Grauch, V., Hansen, R., LaFehr, T., Li, Y., Peirce, J., Phillips, J.,
Ruder, M.: 75th anniversary. The historical development of the magnetic method
in exploration: Geophysics 70(6) (2005)
33. Oh, S., Noh, K., Seol, S.J., Byun, J.: Cooperative deep learning inversion of
controlled-source electromagnetic data for salt delineation. Geophysics 85(4),
E121–E137 (2020)
34. Rezaie, M., Moradzadeh, A., Kalateh, A.N.: Fast 3d inversion of
gravity data using solution space priorconditioned lanczos bidi-
agonalization. Journal of Applied Geophysics 136, 42–50 (2017).
https://doi.org/https://doi.org/10.1016/j.jappgeo.2016.10.019, https://www.
sciencedirect.com/science/article/pii/S0926985116304360
35. Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical
image segmentation. CoRR abs/1505.04597 (2015), http://arxiv.org/abs/
1505.04597
36. Settgast, R.R., White, J.A., Corbett, B.C., Vargas, A., Sherman, C., Fu, P., An-
navarapu, C.: Geosx simulation framework, version 00 (3 2018), https://www.osti.
gov//servlets/purl/1422506
37. Shahriari, M., Pardo, D., Picón, A., Galdran, A., Del Ser, J., Torres-Verdín, C.:
A deep learning approach to the inversion of borehole resistivity measurements.
Computational Geosciences 24(3), 971–994 (2020)
38. Sudre, C.H., et al.: Generalised dice overlap as a deep learning loss function for
highly unbalanced segmentations. In: Deep Learning in Medical Image Analysis
and Multimodal Learning for Clinical Decision Support. pp. 240–248. Springer
International Publishing (2017)
39. Sun, J., Innanen, K.A., Huang, C.: Physics-guided deep learning for seismic
inversion with hybrid training and uncertainty analysis. GEOPHYSICS 86(3),
R303–R317 (2021). https://doi.org/10.1190/geo2020-0312.1, https://doi.org/10.
1190/geo2020-0312.1
40. Sun, Y., Araya-Polo, M., Williamson, P.: Data characterization and trans-
fer learning for DL-driven velocity model building, pp. 1475–1479 (2021).
https://doi.org/10.1190/segam2021-3594467.1, https://library.seg.org/doi/abs/
10.1190/segam2021-3594467.1
41. Sun, Y., Denel, B., Daril, N., Evano, L., Williamson, P., Araya-Polo, M.: Deep
learning joint inversion of seismic and electromagnetic data for salt reconstruc-
tion, pp. 550–554 (2020). https://doi.org/10.1190/segam2020-3426925.1, https:
//library.seg.org/doi/abs/10.1190/segam2020-3426925.1
42. Um, E.S., Alumbaugh, D., Commer, M., Feng, S., Gasperikova, E., Li, Y., Lin,
Y., Samarasinghe, S.: Deep-learning multiphysics network for imaging co2 sat-
uration and estimating uncertainty in geological carbon storage. Geophysical
Prospecting n/a(n/a). https://doi.org/https://doi.org/10.1111/1365-2478.13257,
https://onlinelibrary.wiley.com/doi/abs/10.1111/1365-2478.13257
43. Yang, F., Ma, J.: Deep-learning inversion: A next-generation seismic ve-
locity model building method. GEOPHYSICS 84(4), R583–R599 (2019).
https://doi.org/10.1190/geo2018-0249.1, https://doi.org/10.1190/geo2018-0249.1
44. Yang, Q., Hu, X., Liu, S., Jie, Q., Wang, H., Chen, Q.: 3-d gravity inversion based
on deep convolution neural networks. IEEE geoscience and remote sensing letters
19, 1–5 (2022)
45. Yang, X., Chen, X., Smith, M.M.: Deep learning inversion of gravity data for
detection of co2 plumes in overlying aquifers. Journal of Applied Geophysics 196
(11 2021). https://doi.org/10.1016/j.jappgeo.2021.104507
46. Yu-Feng, W., Yu-Jie, Z., Li-Hua, F., Hong-Wei, L.: Three-dimensional gravity in-
version based on 3d u-net++. Applied Geophysics 18(4), 451–460 (2021)