
Journal of the Japanese Association for Petroleum Technology
Vol. 87, No. 1 (Jan., 2022), pp. 52-68

Original Article

A data-driven approach for optimizing horizontal well placement in thin oil rim reservoirs using deep learning

Utomo Pratama Iskandar*,**,†, Kazuki Abe* and Masanori Kurihara*

(Received August 28, 2021; accepted January 4, 2022)

Abstract: It is challenging to develop thin oil rim reservoirs economically using conventional wells. Horizontal wells are now widely used to overcome the shortcomings of vertical wells, and the deciding factor in ensuring successful horizontal well application is optimum well placement. However, the conventional optimization approach is time- and resource-intensive. A data-driven approach was proposed to optimize heel and toe locations by deploying a deep learning model. A synthetic database comprising nine fundamental parameters that influence recovery mechanisms in thin oil rim reservoirs was generated to train the model. The accuracy and computation time of the deep learning model trained on this synthetic database were compared with those of a novel optimization method that combines a genetic algorithm and particle swarm optimization (hybrid GA-PSO). The deep learning model predicted optimum well placement (heel and toe points) with an accuracy comparable to the hybrid GA-PSO algorithm. Furthermore, the prediction obtained by the deep learning model takes significantly less computation time than the hybrid GA-PSO algorithm. The developed optimization method offers a rapid and reliable initial guess of well placement for detailed optimization by simulation. The developed model is universally applicable to various thin oil rim characteristics, especially where data are too scarce to build a reliable reservoir model.

Keywords: thin oil rim, data-driven, deep learning, horizontal well, optimization, well placement, hybrid GA-PSO

1. INTRODUCTION

The typical thin oil rim reservoir is underlain by an aquifer and overlain by a gas cap, with an oil column thickness ranging from less than 30 up to 90 feet, and it may contain a large volume of oil (Masoudi, Karkooti, and Othman, 2013). In Indonesia, thin oil rim reservoirs are located in the Poleng field and Ujung Pangkah field in the East Java basin and in three fields in the Mahakam delta, offshore East Kalimantan. These fields have implemented horizontal wells except the Ujung Pangkah field, with 12 wells and 50 wells in the Poleng field and the Mahakam delta fields, respectively (Cahyono and Felder, 2010; Forrest, Sukmana, Suhana, and Asjhari, 2005; Vo, Waryan, Dharmawan, Susilo, and Wicaksana, 2000). Such reservoirs are common in Indonesia and remain less developed due to technical and business challenges.

With the recent advances in drilling technology and the increased drilling cost per well, horizontal wells are now widely used to overcome the shortcomings of vertical wells. They can produce oil at higher capacity for the same production rate and operate at lower drawdown pressure (Onwukwe, Obah, and Chukwu, 2012). Besides, the pressure distribution along a horizontal well tends to be uniform. As a result, horizontal wells significantly reduce the coning phenomenon seen in vertical wells, preserving reservoir energy.

The deciding factors in ensuring successful horizontal well application include well placement, trajectory, pattern, spacing, length, production rate, and well-completion methods (Aladeitan, Arinkoola, Udebhulu, and Ogbe, 2019; Chan, Masoudi, Karkooti, Shaedin, and Othman, 2014; Cho, 2001; Jaoua and Rafiee, 2019). Among these factors, well placement is the most critical aspect of production strategy (Nakajima and Schiozer, 2003). This is because well placement controls the movement of the oil-water and gas-oil contacts, and the contacts are influenced by the pressure drawdown profile along the horizontal wellbore. Therefore, an optimal well placement that maximizes the drainage area and avoids drastic migration of oil is the key to successful thin oil rim development.

* Department of Resources and Environmental Engineering, School of Creative Science and Engineering, Faculty of Science and Engineering, Waseda University, Japan.
** LEMIGAS Research and Development Center for Oil and Gas Technology, Ministry of Energy and Mineral Resources, Republic of Indonesia.
† Corresponding author: E-Mail: utomo@ruri.waseda.jp

Copyright © 2022, JAPT



Optimization of horizontal well placement is an arduous task due to the broad solution space, as distinct from that of a vertical well. The process is time-consuming and computationally demanding because the number of decision variables for a horizontal well is significantly larger than for a vertical well, involving the locations of the heel and toe in (i, j, k) coordinates, and its performance is also attributed to reservoir-related parameters such as fluid and rock properties (Nakajima and Schiozer, 2003; Zandvliet, Handels, van Essen, Brouwer, and Jansen, 2008). Moreover, the objective function is highly nonlinear and not smooth because it involves integers as decision variables (Nasrabadi, Morales, and Zhu, 2012). Current advances in optimization techniques are primarily focused on exploring ways of minimizing the number of objective function evaluations during the optimization process (Badru and Kabir, 2003). Though some optimization methods such as particle swarm optimization (PSO) require fewer simulation runs than an exhaustive search, such techniques are still time- and resource-intensive.

The development of artificial intelligence (AI) has progressed rapidly in recent years. Particularly deep learning, a subset of machine learning, has been applied to solve various oil and gas industry problems (Ampomah et al., 2016; Islam et al., 2020; Jang, Oh, Kim, Park, and Kang, 2018). The foundation of deep learning models is neural networks. There are distinct types of neural networks for specific applications, for instance, neural networks for regression and convolutional neural networks for computer vision. To develop a neural network model, a dataset, a structured collection of data comprising a number of labeled observations, should be provided. Besides the amount of data, the network's accuracy is mainly determined by the dataset used for training, particularly the representation of the data. A network cannot extrapolate accurately beyond the range of its training data; thus, the dataset must cover the range of inputs for which the network will be used. This data-driven approach contrasts with teaching a computer a massive list of rules to solve the problem. Instead, the neural network maps the underlying function to create a relationship between the input and output. Physics and geology are represented by data or field measurements. The model is expected to evaluate the observations and modify itself when it makes errors. Therefore, the model benefits as the number of data and field measurements increases over time. As a result, it becomes able to solve more complex problems.

This paper presents a novel approach for quick and robust decision-making in optimizing horizontal well placement using a data-driven approach that employs a deep learning model. For this purpose, a synthetic database and a deep learning model were constructed. The database was used to train and validate the deep learning model, represented by a sequential type of neural network. The developed deep learning model was deployed to predict the optimum heel and toe horizontal well locations in x-z (or (i, k)) coordinates. The prediction results generated by the model are encouraging and in good agreement with the true solutions. In addition, a conventional optimization approach using a hybrid GA-PSO was presented, and its solution and efficiency were compared with the data-driven approach. The developed model offers a universal optimization capability for various thin oil rim characteristics, especially where data are scarce. Furthermore, the proposed approach can reduce the considerable computational time for every thin oil rim identified and is applicable wherever oil rim development is desirable.

2. METHOD

2.1 Reservoir Model

A cross-sectional synthetic reservoir model was built using the Computer Modelling Group Ltd. (CMG) IMEX black oil simulator. The model was perpendicular to the oil rim, with the oil column thickness varying according to the parameter combinations, and was assumed to be homogeneous and anisotropic. To minimize the simulation runtime and allow a more profound understanding of the model, it was assumed that: (i) flow across the outer boundary never occurs; (ii) capillary pressure is ignored; and (iii) frictional pressure losses in the horizontal wellbore are insignificant.

Table 1 Reservoir properties of thin oil rim

Reservoir Properties | Value | Unit
Temperature | 180 | °F
Oil density | 53.0107 | lb/ft³
Gas density/gravity | 0.8 |
Water phase density | 64.0 | lb/ft³
Viscosity pressure dependence | 4.00E-05 | cP/psi
Formation vol factor | 1.02998 |
Compressibility | 3.23E-06 | 1/psi
Reference Pressure | 14.696 | psi
Viscosity | 0.369 | cP
Initial Reservoir Pressure | 3500 | psi

Table 2 Operational constraints

Parameters | Value | Unit
Min BHP | 1,500 | psi
Max WC | 0.95 |
Max GOR | 10,000 | scf/bbl
Qo | 500 | bbl/day
using a data-driven approach by employing a deep learning




Fig. 1 Initial well placement

The default grid block size is 5 ft × 2000 ft × 5 ft, and the gas oil contact (free oil level) is located at a depth of 8000 ft. The reservoir properties were designed to be typical of thin oil rim reservoirs, as shown in Table 1 (Olamigoke and Peacock, 2009).

The field development strategy used is a sequential development where oil is produced first, followed by gas production (Masoudi et al., 2013). This operation prioritizes oil withdrawal from the oil column with no deliberate production from the gas cap. The simulation run time is three years with the production constraints shown in Table 2.

Initial well placements were assigned to the center of the oil column (Fig. 1); thus, the offset from both the water oil contact (WOC) and the gas oil contact (GOC) is maximized. The horizontal well was drilled up to the reservoir length to maximize the drainage area and enhance the volumetric sweep of the oil rim.

A survey showed that this configuration, a general strategy of placing the well in the center of the oil column and extending the well section up to the reservoir length, is commonly used in the industry (Cho, 2001). These well configurations may result in sub-optimal solutions. However, a well-defined initial population of decision variables for the optimization (e.g., (i, j, k) coordinates) is essential and may improve the optimization performance, since a poor initial population definition leads to more iterations. Accordingly, the initial well placement was optimized when building the synthetic database.

2.2 Synthetic Database

Representative and sufficient data are required to teach the deep learning model to capture the relationship between the various reservoir properties and the horizontal well placement. For this reason, a synthetic database was constructed encompassing a wide array of typical oil rim characteristics. Besides training the model, it was also used to validate the model.

Multiple reservoir models for the synthetic database were generated by a program developed in Python to ensure the consistency of the parameter combinations; a sketch of this sweep is given below. The program modified the IMEX input dataset according to the pre-determined parameters. These parameters (Table 3) were selected for their influence on the recovery process, with typical values for oil rim reservoirs (Aladeitan et al., 2019; John et al., 2019; Nicot and Duncan, 2012; Olamigoke and Peacock, 2009; Vo et al., 2000; Wagenhofer and Hatzignatiou, 1996). In general, they represent the effects of reservoir structure (e.g., reservoir dip), fluid flow dynamics (e.g., oil viscosity), and energy drive (e.g., the volume ratio of water and oil (Vw/Vo)).

The diverse gas cap and aquifer sizes found in oil rims are represented by the ratio of gas volume to oil volume (Vg/Vo) and the ratio of water volume to oil volume (Vw/Vo). Thus, a volume modifier multiplier method using VOLMOD and the assignment of inactive grid cells using NULL GRID were applied to create various combinations of Vg/Vo and Vw/Vo.

In total, 5832 reservoir models were generated, each with unique properties according to Table 3. Subsequently, they were optimized with CMG CMOST using a random brute force method to determine the optimum horizontal well placements. The decision variables for the optimization include the starting point (heel point) and the ending point (toe point) of the well, both in (i, k) coordinates.
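The generator program itself is not listed in the paper; the sketch below shows how such a sweep could be scripted, using the levels of Table 3 and a hypothetical write_imex_deck() helper standing in for the authors' deck-editing routine.

```python
# Minimal sketch of the Table 3 parameter sweep behind the synthetic
# database: six three-level and three two-level parameters.
import itertools

levels = {
    "h_ft":    [50, 100, 200],      # reservoir thickness
    "hoil_ft": [25, 50, 75],        # oil column thickness
    "dip_deg": [5, 15, 30],         # reservoir dip
    "kv_kh":   [0.01, 0.1, 0.5],    # vertical/horizontal permeability ratio
    "mu_g_cp": [0.01, 0.025, 0.05], # gas viscosity
    "mu_o_cp": [0.5, 2, 4],         # oil viscosity
    "vg_vo":   [3, 5],              # gas cap size (two levels)
    "vw_vo":   [3, 5],              # aquifer size (two levels)
    "gamma_o": [0.7, 0.85],         # oil specific gravity (two levels)
}

combos = list(itertools.product(*levels.values()))
assert len(combos) == 3**6 * 2**3 == 5832  # matches the database size

for idx, values in enumerate(combos):
    params = dict(zip(levels, values))
    # write_imex_deck(f"model_{idx:04d}.dat", params)  # hypothetical deck editor
```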


Table 3 Parameters for creating multiple reservoir models

Parameter | Symbol | Low | Medium | High
Reservoir thickness | h (ft) | 50 | 100 | 200
Oil column thickness | hoil (ft) | 25 | 50 | 75
Reservoir dip | dip (°) | 5 | 15 | 30
Ratio of vertical to horizontal permeability | kv/kh | 0.01 | 0.1 | 0.5
Gas viscosity | μg (cp) | 0.01 | 0.025 | 0.05
Oil viscosity | μo (cp) | 0.5 | 2 | 4
Ratio of gas volume to oil volume | Vg/Vo | 3 | - | 5
Ratio of water volume to oil volume | Vw/Vo | 3 | - | 5
Oil specific gravity | γo | 0.7 | - | 0.85

Table 4 Cost components for calculating NPV

Cost Components | Value
Drilling cost | 5.0 × 10⁶ ($)
Oil Price | 50 ($/STB)
Water Treatment Cost | 5 ($/STB)
Gas Processing Cost | 15 ($/STB)
Discount Factor | 10%

Since the toe point depends on the heel point, a special function was created in CMOST so that the well trajectory is smooth and does not exceed the optimization domain.

The objective function for this optimization problem is defined as the net present value (NPV). The cash flow for the NPV mainly consists of the construction cost, operating cost, and oil price (Table 4). The NPV is computed by subtracting the discounted drilling costs and the water and gas production costs from the discounted total oil revenues.
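The paper does not write the cash flow out explicitly; one way to state the objective consistent with Table 4 is given below, where the per-period bookkeeping and the discounting convention are assumptions rather than the authors' exact formulation:

$$\mathrm{NPV} = \sum_{t=1}^{T} \frac{p_o\,Q_{o,t} - c_w\,Q_{w,t} - c_g\,Q_{g,t}}{(1+d)^{t}} - C_{\mathrm{drill}}$$

where $Q_{o,t}$, $Q_{w,t}$, and $Q_{g,t}$ are the oil, water, and gas volumes produced in period $t$, $p_o$ is the oil price, $c_w$ and $c_g$ are the water treatment and gas processing costs, $d$ is the discount factor, and $C_{\mathrm{drill}}$ is the drilling cost.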
Finally, an exhaustive search method was performed to optimize the well placements of the 5832 realizations. The goal is to find the true global optimum of the heel and toe points.

2.3 Deep Learning Model

A proper problem definition is essential to identify the type of deep learning model to use, because there is a vast selection of artificial neural network (ANN) types. The problem in this paper is identified as a regression type for predicting numerical values. The most suitable type of deep learning model for this problem is the multilayer perceptron (MLP) with multiple inputs and multiple outputs. The advantages of MLPs include support for multivariate inputs and outputs, robustness to noise, and the ability to learn linear and nonlinear relationships (Gardner and Dorling, 1998). The Keras library with TensorFlow as the backend was used to build the deep learning model due to its easy implementation, fast prototyping, and Scikit-learn integration. This allows more time to be spent tuning the structure and topology of the neural network.

The total dataset consists of 5832 reservoir models with a vector of nine features and four targets (Table 5). The features and targets comprise integer and continuous variables, where the only integer-type variables are the heel and toe points in (i, k) coordinates. Since the targets represent well locations in various reservoir structures, they had to be normalized with respect to the oil column, top depth, and dip.

Table 5 Features and targets of deep learning

No. | Feature | Target
1 | Reservoir thickness h (ft) | Heel-i
2 | Oil column thickness hoil (ft) | Heel-k
3 | Reservoir dip dip (°) | Toe-i
4 | Ratio of vertical to horizontal permeability kv/kh | Toe-k
5 | Gas viscosity μg (cp) |
6 | Oil viscosity μo (cp) |
7 | Ratio of gas volume to oil volume Vg/Vo |
8 | Ratio of water volume to oil volume Vw/Vo |
9 | Oil specific gravity γo |

2.3.1 Model Development Process

Several steps were undertaken to construct a skillful deep learning model, as illustrated in the flowchart (Fig. 2).




The main objectives are to increase accuracy, learn faster, and make better predictions. First, a baseline model was created to obtain a decent model as fast as possible for baseline predictions. This default starting point (Table 6) may not produce the best possible model, as its structure and topology were configured based on rules of thumb in machine learning practice. The number of hidden layers and neurons is relatively small, the activation function was set to the default (linear), and the hyperparameters were not optimized.

The next step involves improving the topology of the baseline model. The biggest leverage comes from examining deeper and/or wider topologies, achieved by gradually increasing the number of hidden layers and the number of neurons. Table 7 shows the various configurations of hidden layers and neurons used to improve the topology of the baseline model. Appropriate activation functions were also selected corresponding to the problem type. The activation function plays a vital role in governing the threshold at which a neuron is activated and the strength of its output signal (Leshno, Lin, Pinkus, and Schocken, 1993). Therefore, the higher the value of the activation function for a neuron, the more impact the neuron has on the network.

The result of topology tuning will most likely be a larger and more complex structure. In case there is overfitting, dropout is implemented. It is a technique for reducing overfitting by randomly selecting neurons to be ignored during training with a given probability. The selected neurons are temporarily removed so that they do not contribute to the feed-forward calculation. As a result, the network becomes less sensitive to the specific weights of individual neurons, and it forces the network to spread out its learning. At this step, the error and its standard deviation are expected to reduce significantly. In addition, since training is time-consuming, the model architecture and weights were saved at each improvement for faster loading.

Afterward, the model was tuned for finer configurations that include the optimizer, weight initialization method, learning rate, and numbers of epochs and batches. The aim is to boost the performance of the topology model and further reduce training time. These hyperparameters are parameters that are not learned and are fixed values inside the model equations. A grid search was used to evaluate different combinations from a list of configurations (Table 8); in total, there are 243 combinations of hyperparameters to search.

At the end, the deep learning model was finalized by loading the best topology and configurations. Ultimately, the model was deployed to predict the output of the new dataset.

Fig. 2 Flowchart of deep learning model development
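As a concrete illustration of these stages, the sketch below builds the baseline network of Table 6 and the tuned topology that emerges later (three hidden layers of 32 ReLU units with 10% input dropout; see Table 10). The layer ordering and compile arguments beyond those reported settings are assumptions based on standard Keras practice, not taken from the paper.

```python
# Sketch of the baseline and tuned MLPs; settings follow Tables 6 and 10.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam

N_FEATURES, N_TARGETS = 9, 4  # Table 5: nine features, four targets

def baseline_model():
    # One 16-neuron linear hidden layer, Adam with learning rate 0.1
    model = Sequential([
        Dense(16, activation="linear", kernel_initializer="uniform",
              input_shape=(N_FEATURES,)),
        Dense(N_TARGETS),  # Heel-i, Heel-k, Toe-i, Toe-k
    ])
    model.compile(optimizer=Adam(learning_rate=0.1), loss="mse")
    return model

def topology_model():
    # 10% dropout between the input and the first hidden layer,
    # then three hidden layers of 32 ReLU neurons
    model = Sequential([Dropout(0.1, input_shape=(N_FEATURES,))])
    for _ in range(3):
        model.add(Dense(32, activation="relu", kernel_initializer="uniform"))
    model.add(Dense(N_TARGETS))
    model.compile(optimizer=Adam(learning_rate=0.001), loss="mse")
    return model
```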
2.3.2 Model Construction Workflow

Each model was constructed following the steps in the flowchart (Fig. 3). First, the model was defined by adding sequential layers. Then, standardization feature scaling (Eq. 1) was applied to the nine oil rim parameters, since the features of this problem vary and their magnitudes differ.

Table 6 Baseline architecture

Parameters | Value
No. of Hidden Layers | 1
No. of Neurons | 16
No. of Epochs | 50
Batch Size | 128
Activation Function | Linear
Optimizer | Adam
Initializer | Uniform
Learning Rate | 0.1

Table 7 Parameters for topology tuning

No. of Hidden Layers | No. of Neurons
2 | 16
3 | 32
4 | 64
5 | 86
6 | 128


Table 8 Hyperparameters for grid search

No. of Epochs | Batch Size | Optimizers | Initializers | Learning Rates
50 | 128 | Adam | uniform | 0.1
100 | 256 | Adagrad | he_uniform | 0.01
150 | 512 | RMSprop | glorot_uniform | 0.001
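Structurally, the search is a plain loop over the Table 8 grid (3⁵ = 243 combinations). In the sketch below, build_model() is a hypothetical variant of the earlier topology builder that accepts the optimizer, initializer, and learning rate, and the train/test arrays are those produced by the split described in this subsection.

```python
# Sketch of an exhaustive grid search over the Table 8 hyperparameters.
import itertools

grid = {
    "epochs":     [50, 100, 150],
    "batch_size": [128, 256, 512],
    "optimizer":  ["adam", "adagrad", "rmsprop"],
    "init":       ["uniform", "he_uniform", "glorot_uniform"],
    "lr":         [0.1, 0.01, 0.001],
}

best_mse, best_cfg = float("inf"), None
for combo in itertools.product(*grid.values()):
    cfg = dict(zip(grid, combo))
    model = build_model(cfg["optimizer"], cfg["init"], cfg["lr"])  # hypothetical
    model.fit(X_train, y_train, epochs=cfg["epochs"],
              batch_size=cfg["batch_size"], verbose=0)
    mse = model.evaluate(X_test, y_test, verbose=0)
    if mse < best_mse:
        best_mse, best_cfg = mse, cfg
```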

Feature scaling (a data pre-processing step) helps the ANN learning process converge faster. Besides that, feature scaling helps reduce noise and enables smoother relationships in mapping the input to the output of the network:

$$z = \frac{x - \mu}{\sigma} \qquad (1)$$

where $\mu$ is the mean and $\sigma$ is the standard deviation from the mean.

After that, the dataset was split and shuffled into training and test sets with an 80% and 20% division, respectively. There is no specific rule for this division because it depends on the dataset size and the algorithm used, but an 80-to-20 proportion is one common practice in machine learning (Nguyen et al., 2021).

Compiling the model applies stochastic gradient descent to the whole ANN. This process involves specifying a gradient descent algorithm and a loss function. In this case, Adam was selected as the gradient descent algorithm (Charniak, 2019). The loss function defines the error between prediction and target. The choice of loss function depends on the activation function used at the output layer and the output type. For this type of problem, the mean squared error (MSE) (Eq. 2) was chosen:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{Y}_i - Y_i\right)^2 \qquad (2)$$

where $n$ is the number of samples, $Y_i$ is the target value, and $\hat{Y}_i$ is the predicted value.

After the model was compiled, it was ready for training by calling the fit function on the model. The fit function trains the model using the provided dataset, and in the process the weights are adjusted to achieve better accuracy. First, the network weights and biases were initialized using a uniform initializer, followed by a backpropagation process where the error propagates backward from the output layer to the input layer. The weights are updated based on this propagated error. The objective is to find the best weights for the network that minimize the loss function, thereby improving accuracy. However, the cost of performing this on a deeper neural network becomes increasingly significant. The training continues for a predetermined number of epochs, where one epoch is one iteration over the entire training set.
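A compact rendering of this workflow with scikit-learn and Keras might look as follows; the arrays X (5832 × 9 features) and y (5832 × 4 normalized targets), the topology_model() builder, and the random seed are assumptions carried over from the earlier sketches.

```python
# Sketch of the 2.3.2 workflow: Eq. 1 standardization, an 80/20 shuffled
# split, then training with Adam and the MSE loss of Eq. 2.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)

scaler = StandardScaler().fit(X_train)   # z = (x - mu) / sigma, Eq. (1)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

model = topology_model()                 # compiled with Adam and MSE loss
history = model.fit(X_train, y_train, epochs=150, batch_size=128,
                    validation_data=(X_test, y_test), verbose=0)
```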
Before deploying the deep learning model for making predictions, its generalization performance needs to be evaluated. The indicators are that the model is consistent and capable of making accurate predictions across a broad variety of input data, including unseen data. For this purpose, K-fold cross-validation was used. Cross-validation is the gold standard for evaluating a machine learning model (Ghosh, Stephenson, Nguyen, Deshpande, and Broderick, 2020). It provides a robust estimate of model performance on an unseen dataset and gives a less biased estimate of the performance. This method utilizes the whole training set to evaluate the model. The performance is evaluated over the folds by averaging the errors and computing the standard deviation of the error. For this case, 5-fold cross-validation was used.

The final step is to use the model to generate predictions on a new dataset. Predictions were made by providing the input to the network and performing a feed-forward calculation to generate an output.

Fig. 3 Flowchart of individual model development
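A minimal version of this 5-fold evaluation, again assuming the arrays and model builder from the previous sketches, is shown below.

```python
# Sketch of 5-fold cross-validation: train a fresh model per fold and
# report the mean and standard deviation of the held-out MSE.
import numpy as np
from sklearn.model_selection import KFold

fold_mse = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X_train):
    m = topology_model()  # fresh weights each fold
    m.fit(X_train[train_idx], y_train[train_idx],
          epochs=150, batch_size=128, verbose=0)
    fold_mse.append(m.evaluate(X_train[val_idx], y_train[val_idx], verbose=0))

print(f"MSE: {np.mean(fold_mse):.2e} +/- {np.std(fold_mse):.2e}")
```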




2.4 Hybrid GA-PSO Algorithm

For the purpose of comparing the performance of the deep learning model, a hybrid GA-PSO algorithm was coded in Python, embedded in the CMG simulator, and applied to optimize horizontal well locations. The main feature of this algorithm is that it combines the diversity of GA with the faster convergence of PSO by incorporating the standard velocity and position update rules of PSO into the selection, crossover, and mutation of GA (Jeong, Hasegawa, Shimoyama, and Obayashi, 2009; Settles and Soule, 2005). Furthermore, for the same reservoir model and optimization problem, the performance of the standalone algorithms, GA and PSO, was also compared with that of the hybrid GA-PSO.

The flowchart (Fig. 4) summarizes the hybrid GA-PSO. The algorithm begins by randomly generating initial individuals to solve an n-dimensional optimization problem. At the evaluation step, each individual's fitness is evaluated by the CMG simulator to compute its objective function. Next, individuals are ranked according to fitness, and the best solution vectors are introduced into the GA via recombination operators to generate new individuals. The crossover operator is used to update the best individuals with a crossover probability by combining two parents through the following equation (Eq. 3) (Radcliffe, 1991):

$$I_i^{new} = \beta \cdot I_i^{M} + (1 - \beta) \cdot I_i^{F} \qquad (3)$$

where $I_i^{new}$ is the i-th variable in a new individual, and $I_i^{M}$ and $I_i^{F}$ are the property values of the same vector for the parents. $\beta$ is a combining coefficient between zero and one that can be kept constant or randomly determined for each attribute throughout the crossover process.

The mutation operator is another significant GA operator, which, via the following equation (Eq. 4), changes a particular element of a solution vector or individual to a new state based on the mutation rate (Haupt and Haupt, 2004):

$$I_i^{new} = I_i^{old} + \sigma \times N(0, 1) \qquad (4)$$

where $I_i^{old}$ and $I_i^{new}$ are the property values before and after the mutation, respectively, $N$ is a randomly assigned number ranging from zero to one, and $\sigma$ is the property variance.

In the PSO step, the new individuals produced by the GA are utilized as the initial particles. Then, the particle velocities and positions are modified, and the objective functions of the newly generated solutions are evaluated and sorted. Once the maximum number of iterations is completed, the algorithm terminates.

Fig. 4 Flowchart of hybrid GA-PSO
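The published implementation calls the CMG simulator at every evaluation; the sketch below reproduces only the structure of the loop in Fig. 4 under the Table 9 settings, with a generic evaluate() standing in for the simulator's NPV computation and the decision variables assumed to be normalized to [0, 1].

```python
# Structural sketch of hybrid GA-PSO: GA crossover (Eq. 3) and mutation
# (Eq. 4) breed new individuals, then PSO velocity/position updates
# refine them (rates and coefficients from Table 9).
import numpy as np

def hybrid_ga_pso(evaluate, dim, n=40, iters=25,
                  cx_rate=0.8, mut_rate=0.2, w=0.7298, c=1.4962):
    pos = np.random.rand(n, dim)
    vel = np.zeros((n, dim))
    pbest, pbest_fit = pos.copy(), np.full(n, -np.inf)
    for _ in range(iters):
        fit = np.array([evaluate(p) for p in pos])      # NPV per individual
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        order = np.argsort(-fit)                        # rank, best first
        pos, vel = pos[order], vel[order]
        pbest, pbest_fit = pbest[order], pbest_fit[order]
        # GA step: replace the worst half with offspring of the best half
        for i in range(n // 2, n):
            m = pos[np.random.randint(n // 2)]
            f = pos[np.random.randint(n // 2)]
            if np.random.rand() < cx_rate:              # crossover, Eq. (3)
                beta = np.random.rand(dim)
                pos[i] = beta * m + (1.0 - beta) * f
            if np.random.rand() < mut_rate:             # mutation, Eq. (4)
                pos[i] += pos.std(axis=0) * np.random.rand(dim)
        # PSO step: GA offspring seed the swarm, then velocities/positions update
        gbest = pbest[np.argmax(pbest_fit)]
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        vel = w * vel + c * r1 * (pbest - pos) + c * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
    return pbest[np.argmax(pbest_fit)]
```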


Table 9 Parameters for hybrid GA-PSO and the standalone algorithms

GA parameter | Value
Mutation rate | 0.2
Crossover rate | 0.8
Selection fraction | 1
Population size | 40

PSO parameter | Value
Inertia weight | 0.7298
Acceleration constants | 1.4962
Swarm size | 40

The parameters used for the hybrid GA-PSO are summarized in Table 9.

3. RESULTS AND DISCUSSIONS

3.1 Synthetic Database

In total, 5832 reservoir models were generated from the parameter combinations. The combinations of reservoir thickness (h), oil column height (ho), and dip generated 27 structurally different types of reservoirs (M_01 to M_27). Fig. 5 shows the reservoir size distribution; models with the same thickness but low dip contain more grid blocks. The original oil in place (OOIP) ranges from 0.085 to 6.76 MMSTB, while the original gas in place (OGIP) ranges from 587.39 to 72,998.1 MMSCF and the original water in place (OWIP) ranges from 0.56 to 71.31 MMSTB.

Four categories of reservoir were generated based on the sizes of the gas cap and aquifer: (1) oil rim reservoirs with a small gas cap and small aquifer, (2) oil rim reservoirs with a small gas cap and large aquifer, (3) oil rim reservoirs with a large gas cap and large aquifer, and (4) oil rim reservoirs with a large gas cap and small aquifer (Oluwasanmi, Pastor, Charles, Christopher, and Seyi, 2021). These variations are valuable for demonstrating the magnitude of the coning phenomenon.

3.2 Deep Learning Models

One of the main objectives of this paper is to predict the horizontal well placement from reservoir parameters. The developed deep learning model should have acceptable accuracy and low variance in error. Therefore, determining the optimal structure and configurations compatible with this problem is the most critical step in designing the neural network.

Several strategies were implemented via stagewise model development to increase the model accuracy, such as increasing the number of hidden layers and neurons, using different weight initializations, and tuning the hyperparameters. The following sections present the performance of each model developed, as described in the methodology.

3.2.1 Baseline Model

The baseline model was defined as a sequence of layers comprising one hidden layer.

Fig. 5 Reservoir model size distribution of synthetic database




Fig. 6 shows the performance of the baseline model, with a test error (MSE) of 6.134 × 10⁻³. This structure is prone to underfitting since the network is relatively shallow and the number of neurons is too small (Narayan and Tagliarini, 2005). It is not sufficiently complex to capture the relationship between input and target; thus, the bias is relatively high. Nevertheless, there is still much room for improvement in topology and hyperparameters.

Fig. 6 Baseline model performance

3.2.2 Topology Model

In general, as the number of hidden layers and neurons increases, the neural network gains power and flexibility, leading to increased accuracy (Hornik, Stinchcombe, and White, 1989). However, this is not always the case, since different problems require different configurations; experimenting with different topologies is therefore required.

Fig. 7(a) shows the results for different topology depths. A deeper topology allows the network to extract and recombine higher-order features embedded in the dataset. However, a larger network structure can fail to generalize, as suggested by the increasing MSE. In this research, three hidden layers, which resulted in the smallest MSE, were selected for further topology improvement.

A wider network represents an increased capacity of the model. As shown in Fig. 7(b), the optimum number of neurons for this problem is 32, beyond which the MSE starts to rise due to the conformability of the model structure with the data shape. This can occur as the model becomes more complex than it needs to be (Hawkins, 2004); consequently, it would degrade the generalization ability of the approximation. In certain cases, a wider network will outperform a deeper network.

Although there is a slight increase in MSE with more neurons, several strategies were employed to further reduce the error, such as selecting the Rectified Linear Unit (ReLU) as the activation function. ReLU has recently become prevalent in machine learning applications because it provides better results, simplicity, and fast computation (Glorot, Bordes, and Bengio, 2011). In addition, since the current structure is highly prone to overfitting, dropout was implemented. In this research, the dropout value between the input and the first hidden layer was set to 10%, meaning that one in nine inputs is randomly excluded from each update cycle.

3.2.3 Hyperparameter Model

The grid search results (Fig. 8) highlighted that the following combination of hyperparameters produced the smallest error, with an MSE of 1.86 × 10⁻³: number of epochs, 150; batch size, 128; optimizer, Adam; weight initializer, uniform; learning rate, 0.001.

A larger number of epochs (150) usually results in a more accurate model; however, the trade-off between training time and model accuracy should be considered. The batch size determines the number of observations used to update the weights. The smaller size (128) can lead to more stable training and does not cause chaotic changes to the network, although the learning might be slightly slower. The choice of learning rate also affects the learning condition, since it controls the amount by which the weights are updated.

Fig. 7 (a) MSE of various no. of hidden layers; (b) MSE of various no. of neurons


If the learning rate is too large, the weight optimization will not converge, since it never descends the gradient; instead, it bounces back and forth across the valley of the loss function (Roy, 1994). However, if it is too low, learning is slow. In this research, the combination of hyperparameters with a learning rate of 0.001 resulted in the lowest MSE.

Choosing computationally efficient training methods is essential since training is time-consuming. In this problem, Adam is the best choice. This optimization algorithm combines adaptive moment estimation and RMSprop, which utilize an adaptive learning rate to help point toward the minimum (Kingma and Ba, 2014). Adam is also a popular optimizer nowadays because it is easy to use and efficient. Initializing proper weights before training can accelerate the training process toward convergence. Weights are usually initialized with small random values, and their function is similar to that of the coefficients in a regression equation. In this research, random numbers were applied as the initial weights.

Fig. 8 Grid search results

Fig. 9 Cross-validation of hyperparameter model




3.2.4 Model Evaluation

The hyperparameter model was evaluated by performing 5-fold cross-validation to ensure that the model performs consistently on unseen datasets and to reduce the amount of variance in the prediction. This variance originates from the random weight initialization and the different divisions of data into training and test sets (Bouthillier et al., 2021). As a result, repeating the evaluation will produce slightly different predictions. The results of these evaluations are shown in Fig. 9. The average error was 1.77 × 10⁻³, and the standard deviation of the error was 1.2 × 10⁻⁴. This small standard deviation indicates that the model is little affected by variability when making predictions.

All the model errors are compared in Fig. 10. There is a considerable improvement from the baseline model to the topology model; finally, tuning the hyperparameters gave a slight further boost in accuracy. The final topology and configuration of the neural network are summarized in Table 10. This result provides compelling evidence of the importance of performing empirical experiments when developing a deep learning model.

Fig. 10 Model comparison

3.2.5 Prediction

The final model was deployed in forecast mode to predict the optimum horizontal well locations. Several test sets were prepared (Table 11) as the inputs for the predictions.

The predictions from the five test sets for the normalized heel and toe locations are presented in Fig. 11. The predicted heel and toe values in (i, k) coordinates are in good agreement with the true values, with squared correlation coefficients (R²) of 0.998 and 0.959, respectively. These findings confirm the consistency obtained from cross-validation and demonstrate that the developed deep learning model performs well on unseen datasets.

Table 10 Final model architecture

Topology and Configuration | Value | Remarks
No. of Features | 9 |
No. of Targets | 4 | Heel-i, Heel-k, Toe-i, and Toe-k
No. of Hidden Layers | 3 |
No. of Neurons | 32 |
Activation Function | ReLU |
No. of Epochs | 150 |
Batch Size | 128 |
Learning Rate | 0.001 |
Optimizer | Adam |
Initializer | Uniform |

Table 11 Test datasets

Parameters | Test-1 | Test-2 | Test-3 | Test-4 | Test-5
h (ft) | 50 | 50 | 100 | 200 | 200
hoil (ft) | 25 | 50 | 50 | 25 | 75
dip (°) | 5 | 30 | 30 | 15 | 15
kv/kh | 0.1 | 0.01 | 0.1 | 0.5 | 0.01
μg (cp) | 0.01 | 0.025 | 0.025 | 0.05 | 0.01
μo (cp) | 4 | 0.5 | 2 | 2 | 4
Vg/Vo | 5 | 5 | 3 | 5 | 5
Vw/Vo | 5 | 3 | 3 | 5 | 3
γo | 0.7 | 0.85 | 0.7 | 0.7 | 0.85
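In deployment, a prediction is a single forward pass. The sketch below queries the trained model with the Test-1 column of Table 11, reusing the model and fitted scaler assumed in the earlier sketches; the outputs are still normalized well locations.

```python
# Sketch of a prediction for Test-1 of Table 11 (h, hoil, dip, kv/kh,
# mu_g, mu_o, Vg/Vo, Vw/Vo, gamma_o).
import numpy as np

test_1 = np.array([[50, 25, 5, 0.1, 0.01, 4, 5, 5, 0.7]])
heel_i, heel_k, toe_i, toe_k = model.predict(scaler.transform(test_1))[0]
# Denormalizing back to grid (i, k) coordinates uses the reservoir's
# oil column, top depth, and dip, as described in Section 2.3.
```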


The denormalized results are visualized on the cross-sectional reservoir models in Fig. 12. The grey lines denote the well trajectories generated by the deep learning model, while the purple lines are the actual well trajectories. As can be observed, in most cases the predicted horizontal well locations have trajectories similar to the actual ones.

The model was then deployed to perform predictions on new datasets (Table 12) that exclude the training and test datasets. This dataset was generated by performing an exhaustive search to obtain the true global optimum.

Fig. 11 Predictions from the model vs. true values on test dataset

Fig. 12 Visualization of horizontal well location predictions vs. true locations on test dataset

Table 12 New datasets for making predictions

Parameters | Prediction-1 (P1) | Prediction-2 (P2) | Prediction-3 (P3)
h (ft) | 160 | 70 | 120
hoil (ft) | 40 | 60 | 30
dip (°) | 25 | 10 | 20
kv/kh | 0.3 | 0.06 | 0.15
μg (cp) | 0.02 | 0.04 | 0.03
μo (cp) | 3.5 | 3 | 1
Vg/Vo | 3.5 | 4.5 | 4
Vw/Vo | 4 | 3.5 | 4.5
γo | 0.73 | 0.82 | 0.78




The features were designed within the range of the synthetic database (Table 3). The aim is to observe the model's capability to make predictions for interpolated features.

Fig. 13 shows the predictions for the new datasets. Overall, they have high accuracy, with R² of 0.994 for the heel values and 0.913 for the toe values. However, prediction set 3 (P3) has slightly lower accuracy. This inaccuracy may arise from insufficient training data for predicting the features of P3, which differ most, in absolute terms, from the training dataset.

The denormalized predictions for the new datasets are shown in Fig. 14. In general, the predicted well trajectories are similar to the true well trajectories. Although the P3 prediction deviated in the heel and toe locations, its trajectory is comparable with the true trajectory.

3.3 Comparing Hybrid GA-PSO vs. GA vs. PSO

The performances of the three algorithms in finding the optimum well placements were compared. Due to the stochastic nature of these algorithms, which are always initialized with random populations, each algorithm was run three times, with each run consisting of 1000 simulation runs, and the result was obtained by averaging these runs. It can be observed (Fig. 15) that the hybrid GA-PSO outperforms GA and PSO in terms of both convergence rate and the NPV obtained. The optimal well placements for the heel point are (33,9), (33,10), (33,12), and (33,12) for GA, PSO, hybrid GA-PSO, and the exhaustive search (random brute force, which provides exact solutions), respectively.

Fig. 13 Predictions from the model vs. true values on new dataset

Fig. 14 Visualization of horizontal well location predictions vs. true locations on new datasets


Several works have also reported that hybrid GA-PSO outperforms either GA or PSO in terms of convergence rate and accuracy, especially for optimizing large-domain objective functions, and that it is reliable in many fields of science (Gandelli, Grimaccia, Mussetta, Pirinoli, and Zich, 2007; Wu, Long, and Liu, 2015).

The GA converged at roughly 300 simulation runs, while the PSO converged at approximately 500, with minor improvement up to 1000 simulation runs. The NPV from the hybrid GA-PSO, on the other hand, increased gradually and began to level off at around 800 simulation runs. The final optimal NPVs obtained from these algorithms are $1.69 × 10⁴, $1.20 × 10⁴, and $1.02 × 10⁴ for hybrid GA-PSO, PSO, and GA, respectively.

The solutions obtained by standalone GA and PSO are sensitive to the parameters used (Andre, Siarry, and Dognon, 2001; Bai, 2010); therefore, they are susceptible to premature convergence. These parameters can be tuned to the specific problem to avoid premature convergence, at the cost of the many simulation runs required to obtain the global optimum. However, tuning the parameters is a daunting task, and integrating GA and PSO alleviates these difficulties. This research revealed that hybrid GA-PSO can provide faster and more accurate optimization of horizontal well locations in thin oil rims than GA or PSO alone.

Fig. 15 Average NPV for the GA, PSO, and hybrid GA-PSO runs

3.4 Comparing Deep Learning vs. Hybrid GA-PSO

The deep learning model and hybrid GA-PSO results were compared in terms of accuracy, expressed as root mean squared error (RMSE), and computation time, as summarized in Table 13. The true global optimum for each method was obtained from an exhaustive search (random brute force method). Remarkably, the predictions obtained from deep learning have a small error (an RMSE of 0.71) and a rapid computation time (three seconds). This error magnitude implies that the prediction is off by about one grid block, where one grid block is five feet. By contrast, the solution found by the hybrid GA-PSO precisely matches the true global optimum, but with a longer computation time. The convergence rate of the hybrid GA-PSO could be improved by tuning the optimizer parameters or by setting termination criteria that stop the search when there is no further improvement in the objective function. These results provide strong evidence that the developed deep learning model, when supplied with a sufficient and representative dataset, is robust and reliable for quick optimization, and its output can serve as an initial guess for the horizontal well location in optimization through detailed reservoir simulation.

Table 13 Comparing deep learning vs. hybrid GA-PSO (horizontal well location in (i, k) coordinates)

Method | True Global Optimum (Heel; Toe) | Prediction/Result (Heel; Toe) | RMSE | CPU Time for Prediction
Deep Learning | (53,22); (80,10) | (52,23); (80,10) | 0.71 | 3 secs
Hybrid GA-PSO | (33,19); (39,12) | (33,19); (39,12) | 0 | 55 mins




4. Conclusions and Future Work

A synthetic database comprising nine fundamental parameters that influence recovery mechanisms in thin oil rim reservoirs was generated. In total, 5832 reservoir models, together with their optimal horizontal well locations, were used for training a deep learning model.

A stepwise strategy proved effective in developing the deep learning model. The topology and hyperparameter tuning showed a significant improvement in reducing the model error. The 5-fold cross-validation reveals that the developed model is robust and has generalization capability, as indicated by the small mean error and standard deviation of the error. The model deployment on unseen datasets demonstrated that the model is satisfactorily accurate in predicting heel and toe locations.

Compared with the conventional optimization approach using hybrid GA-PSO, the accuracy of the deep learning model is comparable with that of hybrid GA-PSO, which can find the true optimal solution. However, the computation time required by the deep learning model is significantly less than that required by hybrid GA-PSO. The proposed approach could be applied to optimize horizontal well locations for every thin oil rim identified without performing a conventional optimization process, providing an initial guess for the horizontal well location in the subsequent detailed reservoir simulation.

Future work will incorporate actual data to train the model and improve its generalization capability. Different well trajectories should be considered when generating the database. It is also advisable to include capillary pressure and compositional phase behavior in the reservoir model to simulate more realistic contact movement.

5. Acknowledgment

The authors would like to convey their sincere thanks to Ms. Thong-On Oranich, a former master's student of Waseda University, for her contribution to the hybrid GA-PSO.

Conversion factors

lbm × 4.5359237* E-01 = kg
ft × 3.048* E-01 = m
psi × 6.894757 E+03 = Pa
ft³ × 2.831685 E-02 = m³
cP × 1.0* E-03 = Pa·s
bbl × 1.589874 E-01 = m³
(°F - 32)/1.8 = °C
* Exact figure

REFERENCES

Aladeitan, Y. M., Arinkoola, A. O., Udebhulu, O. D., & Ogbe, D. O., 2019: Surrogate modelling approach: A solution to oil rim production optimization. Cogent Engineering, 6(1), 1631009.
Ampomah, W., Balch, R., Cathar, M., Will, R., Lee, S., & Dai, Z., 2016: Performance of CO2-EOR and storage processes under uncertainty. SPE Europec featured at 78th EAGE Conference and Exhibition, Vienna, Austria, May, SPE-180084-MS, doi: 10.2118/180084-MS.
Andre, J., Siarry, P., & Dognon, T., 2001: An improvement of the standard genetic algorithm fighting premature convergence in continuous optimization. Advances in Engineering Software, 32(1), 49-60.
Badru, O., & Kabir, C., 2003: Well placement optimization in field development. SPE Annual Technical Conference and Exhibition, Denver, Colorado, October, SPE-84191-MS, doi: 10.2118/84191-MS.
Bai, Q., 2010: Analysis of particle swarm optimization algorithm. Computer and Information Science, 3(1), 180.
Bouthillier, X., Delaunay, P., Bronzi, M., Trofimov, A., Nichyporuk, B., Szeto, J., . . . Voleti, V., 2021: Accounting for variance in machine learning benchmarks. Proceedings of the 4th MLSys Conference, San Jose, CA, USA.
Cahyono, A. A., & Felder, A., 2010: Well placement optimization for a thin oil rim development in the Ujung Pangkah field, East Java, Indonesia. Proceedings of the Indonesian Petroleum Association (IPA), Jakarta, Indonesia, May, IPA10-E-079.
Chan, K. S., Masoudi, R., Karkooti, H., Shaedin, R., & Othman, M., 2014: Smart horizontal well drilling and completion for effective development of thin oil-rim reservoirs in Malaysia. International Petroleum Technology Conference, Kuala Lumpur, Malaysia, December, IPTC-17753-MS, doi: 10.2523/IPTC-17753-MS.
Charniak, E., 2019: Introduction to Deep Learning: MIT Press.
Cho, H., 2001: Integrated optimization on long horizontal well length. SPE Hydrocarbon Economics and Evaluation Symposium, Dallas, Texas, April, SPE-68599-MS, doi: 10.2118/68599-MS.
Forrest, J. K., Sukmana, A. Y., Suhana, W., & Asjhari, I., 2005: Reservoir simulation challenges for modeling an oil rim with large gas cap in the Poleng field, Kujung-I oil reservoir, East Java basin, West Madura block, Indonesia. SPE Asia Pacific Oil and Gas Conference and Exhibition, Jakarta, Indonesia, April, SPE-93137-MS, doi: 10.2118/93137-MS.
Gandelli, A., Grimaccia, F., Mussetta, M., Pirinoli, P., & Zich, R. E., 2007: Development and validation of different hybridization strategies between GA and PSO. 2007 IEEE Congress on Evolutionary Computation, 2782-2787, doi: 10.1109/CEC.2007.4424823.
Gardner, M. W., & Dorling, S., 1998: Artificial neural networks (the multilayer perceptron): a review of applications in the atmospheric sciences. Atmospheric Environment, 32(14-15), 2627-2636.
Ghosh, S., Stephenson, W. T., Nguyen, T. D., Deshpande, S. K., & Broderick, T., 2020: Approximate cross-validation for structured models. arXiv preprint arXiv:2006.12669.
Glorot, X., Bordes, A., & Bengio, Y., 2011: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:315-323.
Haupt, R. L., & Haupt, S. E., 2004: Practical Genetic Algorithms: John Wiley & Sons.
Hawkins, D. M., 2004: The problem of overfitting. Journal of Chemical Information and Computer Sciences, 44(1), 1-12.
Hornik, K., Stinchcombe, M., & White, H., 1989: Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359-366.
Islam, J., Vasant, P. M., Negash, B. M., Laruccia, M. B., Myint, M., & Watada, J., 2020: A holistic review on artificial intelligence techniques for well placement optimization problem. Advances in Engineering Software, 141, 102767.

Jang, I., Oh, S., Kim, Y., Park, C., & Kang, H., 2018: Well-placement optimisation using sequential artificial neural networks. Energy Exploration & Exploitation, 36(3), 433-449.
Jaoua, M., & Rafiee, M., 2019: Optimization of oil production in an oil rim reservoir using numerical simulation with focus on IOR/EOR application. SPE Reservoir Characterisation and Simulation Conference and Exhibition, Abu Dhabi, UAE, September, SPE-196709-MS, doi: 10.2118/196709-MS.
Jeong, S., Hasegawa, S., Shimoyama, K., & Obayashi, S., 2009: Development and investigation of efficient GA/PSO-hybrid algorithm applicable to real-world design optimization. IEEE Computational Intelligence Magazine, 4(3), 36-44.
John, I. J., Matemilola, S., & Lawal, K., 2019: Simple guidelines for screening development options for oil-rim reservoirs. SPE Nigeria Annual International Conference and Exhibition, Lagos, Nigeria, August, SPE-198718-MS, doi: 10.2118/198718-MS.
Kingma, D. P., & Ba, J., 2014: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Leshno, M., Lin, V. Y., Pinkus, A., & Schocken, S., 1993: Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6), 861-867.
Masoudi, R., Karkooti, H., & Othman, M. B., 2013: How to get the most out of your oil rim reservoirs? International Petroleum Technology Conference, Beijing, China, March, IPTC-16740-MS, doi: 10.2523/IPTC-16740-MS.
Nakajima, L., & Schiozer, D., 2003: Horizontal well placement optimization using quality map definition. Canadian International Petroleum Conference, Calgary, Alberta, June, PETSOC-2003-053, doi: 10.2118/2003-053.
Narayan, S., & Tagliarini, G., 2005: An analysis of underfitting in MLP networks. 2005 IEEE International Joint Conference on Neural Networks, 984-988 vol. 2, doi: 10.1109/IJCNN.2005.155598.
Nasrabadi, H., Morales, A., & Zhu, D., 2012: Well placement optimization: A survey with special focus on application for gas/gas-condensate reservoirs. Journal of Natural Gas Science and Engineering, 5, 6-16.
Nguyen, Q. H., Ly, H.-B., Ho, L. S., Al-Ansari, N., Le, H. V., Tran, V. Q., & Pham, B. T., 2021: Influence of data splitting on performance of machine learning models in prediction of shear strength of soil. Mathematical Problems in Engineering, vol. 2021, Article ID 4832864, doi: 10.1155/2021/4832864.
Nicot, J. P., & Duncan, I. J., 2012: Common attributes of hydraulically fractured oil and gas production and CO2 geological sequestration. Greenhouse Gases: Science and Technology, 2(5), 352-368.
Olamigoke, O., & Peacock, A., 2009: First-pass screening of reservoirs with large gas caps for oil rim development. Nigeria Annual International Conference and Exhibition, Abuja, Nigeria, August, SPE-128603-MS, doi: 10.2118/128603-MS.
Oluwasanmi, O., Pastor, A.-N., Charles, O., Christopher, N., & Seyi, O., 2021: Optimizing productivity in oil rims: simulation studies on water and gas injection patterns. Arabian Journal of Geosciences, 14(7), 1-20.
Onwukwe, S. I., Obah, B., & Chukwu, G., 2012: A model approach of controlling coning in oil rim reservoirs. Nigeria Annual International Conference and Exhibition, Lagos, Nigeria, August, SPE-163039-MS, doi: 10.2118/163039-MS.
Radcliffe, N. J., 1991: Forma analysis and random respectful recombination. ICGA, vol. 91, 222-229.
Roy, S., 1994: Factors influencing the choice of a learning rate for a backpropagation neural network. Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), 503-507 vol. 1, doi: 10.1109/ICNN.1994.374214.
Settles, M., & Soule, T., 2005: Breeding swarms: a GA/PSO hybrid. Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington DC, USA, June, 161-168, doi: 10.1145/1068009.1068035.
Vo, D., Waryan, S., Dharmawan, A., Susilo, R., & Wicaksana, R., 2000: Lookback on performance of 50 horizontal wells targeting thin oil columns, Mahakam Delta, East Kalimantan. SPE Asia Pacific Oil and Gas Conference and Exhibition, Brisbane, Australia, October, SPE-64385-MS, doi: 10.2118/64385-MS.
Wagenhofer, T., & Hatzignatiou, D., 1996: Optimization of horizontal well placement. SPE Western Regional Meeting, Anchorage, Alaska, May, SPE-35714-MS, doi: 10.2118/35714-MS.
Wu, J., Long, J., & Liu, M., 2015: Evolving RBF neural networks for rainfall prediction using hybrid particle swarm optimization and genetic algorithm. Neurocomputing, 148, 136-142.
Zandvliet, M., Handels, M., van Essen, G., Brouwer, R., & Jansen, J.-D., 2008: Adjoint-based well-placement optimization under production constraints. SPE Journal, 13(04), 392-399.

Japanese Abstract

A data-driven approach for optimizing horizontal well placement in thin oil rim reservoirs using deep learning

Utomo Pratama Iskandar, Kazuki Abe and Masanori Kurihara




It is difficult to develop a thin oil rim economically using conventional (vertical) wells. Horizontal wells are therefore widely used to overcome the shortcomings of vertical wells, and the key to their successful application is optimal well placement. Conventional optimization methods, however, require a great deal of computation time and manpower. This study proposes a data-driven approach that optimizes the heel and toe locations of a horizontal well by means of a deep learning model. First, to train the learning model, a synthetic database was generated by varying nine fundamental parameters that influence the recovery mechanisms of thin oil rims. In addition, a numerical optimization program was built using a hybrid method combining a genetic algorithm (GA) and particle swarm optimization (PSO), one of the latest optimization techniques, and its results were compared with the optimal solutions obtained by the deep learning model in terms of accuracy and computation time. The heel and toe locations predicted as optimal by deep learning based on the synthetic dataset were sufficiently accurate and equivalent to the optimal solutions found by the hybrid GA-PSO method, while the computation time required was extremely short. The model developed in this study can provide universally applicable optimal solutions for various thin oil rim characteristics, even where data are scarce, and can greatly reduce the time required to optimize horizontal well placement in every type of reservoir where oil rim development is planned.


