
SPE-188231-MS

Modeling Early Time Rate Decline in Unconventional Reservoirs Using Machine Learning Techniques

Aditya Vyas and Akhil Datta-Gupta, Texas A&M University; Srikanta Mishra, Battelle

Copyright 2017, Society of Petroleum Engineers

This paper was prepared for presentation at the Abu Dhabi International Petroleum Exhibition & Conference held in Abu Dhabi, UAE, 13-16 November 2017.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents
of the paper have not been reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect
any position of the Society of Petroleum Engineers, its officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written
consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not more than 300 words; illustrations may
not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
Decline curves are fast methods for predicting production behavior in oil and gas wells. Some of the notable decline curve methods are Arps’, the Stretched Exponential Decline Model (SEDM), Duong’s model and the Weibull decline curve. Available production history data can be used to fit any of these equations, and future production decline can then be extrapolated. However, when only limited production data are available during the early period of a well’s history, these equations may be fitted with inaccurate parameters, leading to erroneous predictions. Also, the traditional decline curve analysis approach does not account for the complexities related to reservoir description and well completions.
This study utilizes publicly available databases for the Eagle Ford formation to develop a novel predictive modeling methodology that links decline curve model parameters to well completion related variables and allows the rapid generation of synthetic decline curves at potential new well locations. Modern machine learning algorithms such as Random Forests (RF), Support Vector Machines (SVM) and Multivariate Adaptive Regression Splines (MARS) can then be used to model the well decline behavior. Cross-validation techniques such as k-fold cross-validation can be used to quantify the predictive accuracy of these models when applied to new wells.
First, production data are fitted to the decline curve models to estimate the corresponding model parameters. Next, machine learning models are built for these parameters as functions of the initial flow rate, various well completion parameters (i.e., number of hydraulic fracture stages, completed lengths, proppant and fracturing fluid amounts) and well location/depth parameters (i.e., well latitudes, longitudes, total vertical depth of the heel and the difference between the total vertical depths of the heel and toe of horizontal wells). These models are used to rapidly predict the decline curves for new or existing wells without the need for costly reservoir simulations. We find that the rate decline of new wells can be accurately predicted using this methodology. The method can also predict the ultimate recovery of a new well based on data collected from previous wells.
To our knowledge, this is the first time machine learning algorithms have been used to predict decline curve parameters and to examine the relative performance of various decline curve models. The power and utility of our approach are demonstrated by the successful prediction of the decline behavior of blind wells that were not incorporated in the analysis. We also examine the relative influence of various well design and location variables to determine the hidden correlations or interactions among them, which are hard to decipher with other methods.

Introduction
Many years have passed since the shale oil and gas revolution started in the USA. Because a significant number of wells have already been drilled and produced in these reservoirs, a large amount of horizontal well data has been gathered by various companies, and much of it is available in online databases. Data analytics tools can be used to study these data to determine hidden trends and sweet spots in these reservoirs that may help the future development of horizontal wells. This paper deals with gathering horizontal well completion and production data and using various data mining concepts to draw inferences from the online public data available for the Eagle Ford. Oil production and completion data are gathered from Drillinginfo’s online database and cleaned/formatted to make them usable for this study. Fig. 1 shows the Eagle Ford region, which has three petroleum windows – oil, wet gas/condensate and dry gas. The current study, however, focuses on the oil window only, which is of primary interest to operators. Oil well data have been collected from wells that have all the required predictor variables for this study. These wells are spread across various locations in the oil window region. The Eagle Ford well database is then analyzed using various data mining tools and techniques to infer useful information from the data. To predict the production decline in Eagle Ford oil wells, decline curve models such as Arps’, the Stretched Exponential Decline Model (SEDM), Duong’s model and the Weibull growth curve have been used in this study.

Figure 1—Eagle Ford region with three types of petroleum windows (US Energy Information Administration)

Decline Curve Models


Decline curve models are equations used to fit existing oil or gas well production data and predict the future decline of those wells. They are easy to use and are a faster alternative to more complex reservoir simulations, which require detailed information about reservoir and completion parameters. Decline curves usually require only a few parameters in their equations, and these can be tuned using existing data. Various decline curves are used in the industry for modeling different types of reservoir conditions. Some of the most commonly used decline curves for this purpose are Arps’ decline curves, the Stretched Exponential Decline Model, Duong’s model and the Weibull growth curve. A brief description of these models is provided below:

Arps’ Model
Arps’ decline equation (1945) can be represented as follows:

q(t) = \frac{q_i}{(1 + b D_i t)^{1/b}}    (1)

where,
q(t) = rate at a time t, STB/D
qi = initial rate, STB/D
Di = initial decline rate, months^-1
b = hyperbolic decline coefficient, dimensionless
t = time, months
The value of the exponent b determines the type of decline in a given well (Table 1):

Table 1—Arp’s decline type based on b value

b value Decline Type

b=0 Exponential

0<b<1 Hyperbolic

b=1 Harmonic

Stretched Exponential Decline Model (SEDM)


Valko and Lee (2010) used the SEDM to forecast production from unconventional wells. They argued that, because shale gas wells produce under a transient flow regime, the Stretched Exponential Decline Model is a more suitable candidate for such wells than Arps’ model. The Stretched Exponential Decline Model is given by:

q(t) = q_i \exp\left[-(t/\tau)^n\right]    (2)

where,
q(t) = rate at a time t, STB/D
qi = initial rate, STB/D
τ = characteristic relaxation time, months
n = exponent parameter, dimensionless
t = time, months
The stretched exponential decay of a quantity can be interpreted as a sum (integral) of pure exponential decays with a "fat tailed" probability distribution of the time constants (Johnston, 2006). This is equivalent to saying that the actual production decline can be interpreted as the sum of a large number of individual exponential decays. Therefore, the SEDM is a suitable candidate for heterogeneous reservoirs (Valko and Lee, 2010).
For positive values of qi, τ and n, the SEDM gives a finite value of Estimated Ultimate Recovery (EUR). This is unlike Arps’ model, which may predict physically unrealistic values of EUR for b ≥ 1 (Valko and Lee, 2010).

Duong Model
For fracture-dominated production decline, Duong (2011) proposed the following empirically derived decline model for tight gas and shale gas reservoirs.

q(t) = q_1 t^{-m} \exp\left[\frac{a}{1-m}\left(t^{1-m} - 1\right)\right]    (3)

where,
q(t) = rate at a time t, STB/D
q1 = flow rate at t = 1 month, STB/D
a = intercept constant
m = slope parameter
t = time, months

Weibull Model
The Weibull growth curve is derived from the Weibull distribution (Weibull, 1951), which is widely used for modeling time-to-failure in applied engineering problems.

G_p(t) = M\left[1 - \exp\left\{-(t/\alpha)^{\gamma}\right\}\right]    (4)

where,
Gp = cumulative production at time t
M = carrying capacity (maximum cumulative production)
γ = shape parameter
α = scale parameter
t = time, months
Differentiating Eq. 4 gives:

q(t) = \frac{dG_p}{dt} = \frac{M\gamma}{\alpha}(t/\alpha)^{\gamma-1}\exp\left\{-(t/\alpha)^{\gamma}\right\}    (5)

where,
q(t) = rate at a time t, STB/D
An advantage of the Weibull model over the hyperbolic model is that the carrying capacity M in Eqs. 4 and 5 puts an upper bound on production. The scale parameter α is the time at which (1 - 1/e), or 63.2%, of the resources have been produced (Mishra, 2012). The shape factor γ controls how quickly the rate declines and is usually less than 1.
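
As an illustration of Eqs. 1 through 5, the sketch below fits the four rate equations to monthly production data by non-linear least squares. The paper does not specify its fitting software; scipy.optimize.curve_fit is assumed here, and the synthetic data, initial guesses and bounds are illustrative placeholders rather than values from the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def arps(t, qi, Di, b):
        # Eq. 1: hyperbolic decline (b > 0)
        return qi / (1.0 + b * Di * t) ** (1.0 / b)

    def sedm(t, qi, tau, n):
        # Eq. 2: stretched exponential decline
        return qi * np.exp(-(t / tau) ** n)

    def duong(t, q1, a, m):
        # Eq. 3: fracture-dominated decline (m != 1)
        return q1 * t ** (-m) * np.exp(a / (1.0 - m) * (t ** (1.0 - m) - 1.0))

    def weibull_rate(t, M, gamma, alpha):
        # Eq. 5: derivative of the Weibull growth curve (Eq. 4)
        return (M * gamma / alpha) * (t / alpha) ** (gamma - 1.0) * np.exp(-(t / alpha) ** gamma)

    # Synthetic monthly data: t in months, q in STB/D
    t = np.arange(1.0, 25.0)
    rng = np.random.default_rng(0)
    q = arps(t, 900.0, 0.3, 1.2) * rng.normal(1.0, 0.05, t.size)

    # Fit a model; the tuned parameters become the responses for machine learning
    p_arps, _ = curve_fit(arps, t, q, p0=[q[0], 0.1, 0.5],
                          bounds=([0.0, 0.0, 0.01], [np.inf, 10.0, 2.0]))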

Machine Learning Algorithms


Various machine learning algorithms, such as Random Forests (RF), Support Vector Machines (SVM) and Multivariate Adaptive Regression Splines (MARS), have been utilized in this study to predict the decline model parameters discussed earlier in this paper. These algorithms can then be used to predict those parameters for new wells that were not used to build the machine learning models. The algorithms can also be used to determine the relative influence of the various predictor variables on a response variable. Results from these machine learning algorithms have been gathered, and their predictive capabilities have been compared on the Eagle Ford database to determine which algorithm is best suited for this type of data. A short description of the three machine learning algorithms is given below.

Random Forests (RF)


A Random Forest (Breiman, 2001) is an ensemble based machine learning algorithm comprising a large number of uncorrelated trees (classification or regression trees). Each individual tree in a random forest is modeled from a bootstrap subsample of the training data and a subsample of the predictor variables. The final prediction for a test point is calculated using a majority vote (for classification) or the averaged response (for regression). Although Random Forests are applicable to both classification and regression problems, this paper is concerned only with regression models. To model a regression tree, a bootstrap sample is drawn from the training data with replacement. A regression tree is constructed by repeatedly partitioning the predictor variable space such that the residual sum of squares at each node is minimized.
Residual Sum of Squares, RSS (Shalizi, 2006) is given by:
RSS = \sum_{c=1}^{C} \sum_{i \in c} (y_i - \hat{y}_c)^2    (6)

\hat{y}_c = \frac{1}{n_c} \sum_{i \in c} y_i    (7)

where,
C = no. of nodes
n_c = no. of data points in node c
y_i = observed or actual response value
\hat{y}_c = mean response of node c
To accomplish this, a split is made at each node such that the RSS is reduced the most. This is done by comparing the various possible splits using different variables and different split points within those variables. After a split is made and two child nodes are generated, further splits are made in those nodes in a similar way. This process stops once the number of data points in a node reaches a predefined limit.
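
The split search described above can be sketched directly from Eqs. 6 and 7. This is a minimal illustration of the RSS criterion, not the authors' implementation; a full Random Forest repeats it on bootstrap samples, with a random subset of predictors considered at each node (e.g., via sklearn.ensemble.RandomForestRegressor).

    import numpy as np

    def node_rss(y):
        # Eqs. 6-7: sum of squared deviations from the node mean
        return float(np.sum((y - y.mean()) ** 2)) if y.size else 0.0

    def best_split(X, y):
        # Exhaustive search over predictors and thresholds for the split
        # that reduces the total RSS the most
        best_j, best_thr, best_rss = None, None, node_rss(y)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j])[:-1]:
                mask = X[:, j] <= thr
                total = node_rss(y[mask]) + node_rss(y[~mask])
                if total < best_rss:
                    best_j, best_thr, best_rss = j, thr, total
        return best_j, best_thr, best_rss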

Support Vector Machines (SVM) Regression or Support Vector Regression (SVR)


Support vector regression (Smola and Schölkopf, 2004) finds a function f(x) that has at most ε deviation from the actually obtained targets yi for all the training data and, at the same time, is as flat as possible (Fig. 2). Errors smaller than ε are acceptable, but not larger ones.

Figure 2—The soft margin loss setting for a linear SVM (Smola and Schölkopf, 2004)

The slack variables ξi, ξi* are introduced (Cortes and Vapnik, 1995; Smola and Schölkopf, 2004) to accommodate points with deviations greater than ε. The constant C > 0 determines the trade-off between the flatness of f(x) and the amount up to which deviations larger than ε are tolerated.
Therefore, the goal is to find f(x) = \langle w, x \rangle + b by solving:

\min_{w, b, \xi, \xi^*} \ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{l} (\xi_i + \xi_i^*)    (8)

\text{subject to} \quad y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i, \quad \langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0    (9)

The above optimization problem can be solved in its dual form using Lagrange multipliers (α_i, α_i^*). The weight vector and the regression function finally reduce to the following:

w = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*)\, x_i    (10)

f(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) \langle x_i, x \rangle + b    (11)

where,
α_i, α_i^* = Lagrange multipliers
⟨·,·⟩ = dot product
The problem with non-linearity in data can be resolved by preprocessing the training patterns xi by a
mapping Φ:Χ→ℱ into some feature space ℱ (Aizerman et al., 1964 and Nilsson, 1965) and then applying
the above regression algorithm.
f(x) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) \langle \Phi(x_i), \Phi(x) \rangle + b    (12)
Thus, the optimization problem corresponds to finding the flattest function in feature space, not in input
space.
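
A minimal sketch of ε-SVR with an RBF kernel supplying the feature map Φ of Eq. 12, assuming the scikit-learn implementation (the paper does not name its software); the values of C and ε and the synthetic data are placeholders.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # X: completion/location predictors; y: a decline parameter such as tau
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

    # C sets the flatness/violation trade-off; epsilon sets the width of the
    # insensitive tube; the RBF kernel performs the mapping into feature space
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X, y)
    predictions = model.predict(X[:5])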

Multivariate Adaptive Regression Splines (MARS)


Friedman (1991, 1993) proposed Multivariate Adaptive Regression Splines (MARS). It builds a model of the form

f(X) = a_0 + \sum_{m=1}^{M} a_m B_m(X)    (13)
where,
a_0 = constant
a_m = coefficients of expansion, whose values are determined by a least squares fit of the above equation:

\{a_m\} = \arg\min \sum_{i=1}^{N} \left[y_i - f(X_i)\right]^2    (14)

B_m(X) = basis function, which can be a constant, a hinge function or a product of two or more hinge functions:

B_m(X) = \prod_{k=1}^{K_m} h_{km}\left(X_{v(k,m)}\right)    (15)
The basis functions B_m(X) are built from any combination of one or more of the following hinge functions:

h(x) = \max(0,\, x - t) \quad \text{or} \quad h(x) = \max(0,\, t - x)    (16)

The constant t is called a knot; it is the point at which the model function f(X) changes direction. Thus, the MARS model becomes (Friedman, 1991):

f(X) = a_0 + \sum_{m=1}^{M} a_m \prod_{k=1}^{K_m} h_{km}\left(X_{v(k,m)}\right)    (17)

The usual procedure to fit a MARS model comprises a forward pass, a backward pass and Generalized Cross-Validation (GCV). The forward pass creates a superset of basis functions Bm(X), which is usually an overfitted model of the training sample; MARS adds, in steps and using a greedy algorithm, the basis functions that most reduce the residual error. The final model from the forward pass must not contain more than a maximum allowable number of terms. The backward pass then selectively removes one basis function at a time, with the goal of producing the most generalizable approximation; at each step, the term that increases the residual error the least is removed. GCV is used to minimize the residual error while at the same time keeping the number of terms in check.

GCV = \frac{\frac{1}{N} \sum_{i=1}^{N} \left(y_i - \hat{f}(x_i)\right)^2}{\left(1 - C(M)/N\right)^2}    (18)

where,
y_i = observed values
\hat{f}(x_i) = model predicted values
N = no. of observations/predictions
C(M) = cost complexity function that grows with the number of basis functions used in the model
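
As a simplified sketch of Eqs. 13 through 16, the snippet below builds hinge basis functions at fixed, user-supplied knots and fits the coefficients a_m by least squares. It deliberately omits the greedy forward pass and the GCV-driven backward pass; a complete implementation would select knots and prune terms automatically (e.g., the py-earth package).

    import numpy as np

    def hinge_pair(x, t):
        # Eq. 16: the two mirrored hinge functions with knot t
        return np.maximum(0.0, x - t), np.maximum(0.0, t - x)

    def mars_like_fit(x, y, knots):
        # Eqs. 13-14: least-squares fit of f(x) = a0 + sum a_m B_m(x)
        # with hinge bases at the given knots
        columns = [np.ones_like(x)]
        for t in knots:
            columns.extend(hinge_pair(x, t))
        B = np.column_stack(columns)
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        return B, coef

    x = np.linspace(0.0, 10.0, 200)
    y = np.sin(x) + np.random.default_rng(1).normal(scale=0.1, size=x.size)
    B, coef = mars_like_fit(x, y, knots=[2.5, 5.0, 7.5])
    y_hat = B @ coef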

Model Averaging
Instead of making predictions based on a single model trained on a single training data set, multiple models derived using different subsamples can be combined using the concept of model averaging. This helps in dealing with model and tuning parameter uncertainty. In this paper, the following model averaging techniques are used and compared. In each technique, weights are determined for the individual models, and the predictions from the various models for the test data are combined using those weights.

Bayesian Model Averaging (BMA)


The weight corresponding to a model j is given by (Draper, 1995; Kass and Raftery, 1995; Hoeting et al., 1999):

w_j = p(M_j \mid D) = \frac{p(D \mid M_j)\, p(M_j)}{\sum_k p(D \mid M_k)\, p(M_k)}    (19)

where,
p(Mj) = prior probability of Model j
p(D|Mj) = model likelihood given by prediction error for data D = ∫ P(d|θj, Mj)p(θj|Mj)dθj
P(d|θj, Mj) = joint probability of a model j (function of prediction errors)
p(θj|Mj) = prior probabilities of parameters

Generalized Likelihood Uncertainty Estimation (GLUE)


This method simplifies the model likelihood as (Beven and Binley, 1992; Beven, 2000):

L(M_j \mid D) = \left[1 - \frac{\sigma_{e,j}^2}{\sigma_o^2}\right]^N    (20)

where,
N = shape factor
σ²_{e,j} = variance of the errors of model j
σ²_o = variance of the observed data
N ≫ 1 tends to give higher weight to models with less fitting error, whereas N ≪ 1 tends to give similar weights to all models.
Therefore, the model weights are given by:

w_j = \frac{L(M_j \mid D)}{\sum_k L(M_k \mid D)}    (21)

However, the model likelihood function in the GLUE methodology can be substituted with simpler functions (Mishra, 2012):

(22)

where,
RMSEj = Root Mean Square Error of Model j to observed data

(23)

The model averaged response can then be calculated as the weighted average of the responses from the multiple models:

\bar{y} = \sum_j w_j\, \hat{y}_j    (24)
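
A sketch of the GLUE weighting and averaging steps (Eqs. 20, 21 and 24). The shape factor value and the convention of assigning zero weight to models whose error variance exceeds the observed variance are illustrative assumptions.

    import numpy as np

    def glue_weights(residuals_per_model, y_obs, N=2.0):
        # Eq. 20: likelihood L_j = (1 - sigma_e,j^2 / sigma_o^2)^N,
        # truncated at zero; Eq. 21: normalization to weights
        sigma_o2 = np.var(y_obs)
        L = np.array([max(1.0 - np.var(r) / sigma_o2, 0.0) ** N
                      for r in residuals_per_model])
        return L / L.sum()

    def model_average(predictions, weights):
        # Eq. 24: weighted average of the responses from multiple models
        return weights @ np.asarray(predictions)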

Exploratory Data Analysis and Prediction Workflow


Data collected from the Drillinginfo website are first organized in Excel sheets, and outlier wells, i.e., those with unusually high or low production or unusual completion parameters, are removed. Only wells with more than 12 months of production history are retained for this study. Table 2 shows the different predictor and response variables used for the different models. In addition to the prediction of the decline curve parameters discussed previously, machine learning algorithms can also be used to build models that predict the EUR of a new well using data collected from existing wells in the region. Here, the RF, SVM and MARS algorithms have been utilized to predict the EUR for new wells.

Table 2—Predictors, Responses and Machine Learning Algorithms used for different Decline Models.

All four decline models use the same machine learning algorithms (RF, SVM and MARS) and the same predictors: initial flow rate (qi), total proppant amount, total fracturing fluid amount, stages, completed length, TVD of heel, TVD heel-toe difference, latitude and longitude. The responses are:

Decline Model    Responses
Arps             b, Di, EUR
SEDM             τ, n, EUR
Duong            a, m, EUR
Weibull          γ, α, EUR, M

Exploratory analysis is done using cluster analysis, in which the well data are divided into four clusters based on quartiles of the initial flow rate (assumed to be equal to the maximum flow rate in this study). Fig. 3 shows the four clusters and their locations on a Texas map. Cluster 4 contains the wells with the highest initial flow rates, and cluster 1 contains the wells with the lowest initial flow rates. Fig. 4 presents the distributions of the different variables in the four clusters under investigation.

Figure 3—Four clusters of the study wells based on quartiles

Figure 4—Variable distribution in the four clusters

Fig. 5 shows the regions of high and low TVD and compares them with clusters 1 and 4. It can be inferred from this figure that most of the high performance wells lie in the deeper regions of the Eagle Ford. However, there are also some outlier wells, which suggests that TVD should not be chosen as the sole criterion for determining initial flow rates or well performance.

Figure 5—Exploratory Analysis of Eagle Ford Data



For the subsequent machine learning based predictions, however, the data from all four clusters are utilized. Fig. 6 shows the detailed prediction workflow. Well data are used to fit the decline curves, and the corresponding decline curve parameters are obtained from the best fit to the observed data. These parameters become the response variables for the machine learning algorithms. The data are then divided into training and test datasets. Only the training dataset is used to build models, while the test dataset is used to assess the prediction accuracy of those models. To build a machine learning model from the training dataset, one option is to build a single model using the entire training dataset. This, however, can lead to significant errors due to variability in the data. Therefore, in this paper a different approach is followed, combining k-fold cross-validation with model averaging using GLUE. The training data are divided into multiple folds (subgroups of data points), and a model is tested on one of the folds while the remaining folds are used for learning. This is done for several combinations of tuning parameters in the respective machine learning algorithms (RF, SVM and MARS), so multiple models result from a single training dataset. The GLUE based model averaging technique is applied to determine the weights for each of these models. Finally, the decline curves for the test data wells are predicted as the weighted average of the responses from these models.

Figure 6—Workflow to build multiple models and averaging them
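
A minimal sketch of this workflow, assuming SVR as the learner and a small grid of tuning parameters; the inverse-RMSE weighting stands in for the GLUE likelihood, and all values are placeholders rather than the study's settings.

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.svm import SVR

    def cv_rmse(make_model, X, y, k=5):
        # k-fold cross-validation: test on one fold, learn on the rest
        errors = []
        for train, test in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
            m = make_model().fit(X[train], y[train])
            errors.append(np.sqrt(np.mean((m.predict(X[test]) - y[test]) ** 2)))
        return float(np.mean(errors))

    def averaged_prediction(X_train, y_train, X_test, grid):
        models, rmses = [], []
        for C, eps in grid:  # one model per tuning-parameter combination
            make = lambda C=C, eps=eps: SVR(C=C, epsilon=eps)
            rmses.append(cv_rmse(make, X_train, y_train))
            models.append(make().fit(X_train, y_train))
        weights = 1.0 / np.asarray(rmses)  # lower CV error -> higher weight
        weights /= weights.sum()
        return weights @ np.array([m.predict(X_test) for m in models])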



Relative Influence of Predictor Variables


When predicting a response variable, such as the characteristic time in the case of the SEDM, different predictor variables influence the predictability of a machine learning model differently. The relative influence of a predictor variable is calculated in this study as the relative change in the coefficient of determination, R2, when that predictor variable is removed from the set of predictors used to build the model.
The relative influence of the pth predictor is given by:

RI_p = \frac{R_p^2 - R_{-p}^2}{R_p^2}    (25)

R_p^2 = R2 of the model when all predictors are included
R_{-p}^2 = R2 of the model when all predictors except the pth predictor are included
R2 indicates the proportion of variance in the dependent variable that is predictable from the model:

R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}    (26)

where,
y_i = observed value of the ith data point
\hat{y}_i = predicted value of the ith data point
\bar{y} = mean of the observed values
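
A sketch of the drop-one-predictor calculation in Eqs. 25 and 26, assuming a Random Forest learner and a held-out test set; the hyperparameters are placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score

    def relative_influence(X_train, y_train, X_test, y_test):
        # Eq. 25: relative change in R^2 (Eq. 26) when each predictor
        # is removed from the model in turn
        fit = lambda Xtr, Xte: r2_score(
            y_test,
            RandomForestRegressor(n_estimators=200, random_state=0)
            .fit(Xtr, y_train).predict(Xte))
        r2_full = fit(X_train, X_test)
        influences = []
        for p in range(X_train.shape[1]):
            keep = [j for j in range(X_train.shape[1]) if j != p]
            influences.append((r2_full - fit(X_train[:, keep], X_test[:, keep])) / r2_full)
        return influences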

Results and Discussion


Arps Model Prediction
Fig. 7 shows the comparison between the predicted and actual values of the Arps decline curve parameters. Fig. 8 shows the comparison of the training and test data errors for the EURs. It may be observed from this figure that SVM gives the least error. Once the decline parameters are predicted for a test well, its decline curve can be quickly generated and compared with the actual production rate data. This is shown in Fig. 9 for some of the test wells, using the machine learning algorithm with the least test data RMSE for the EURs.

Figure 7—Arps Decline Model parameter prediction using different Machine Learning Algorithms

Figure 8—Test data error comparison for predicting ARPS_EUR



Figure 9—Arps Decline Model based prediction for test wells using SVM

SEDM Model Prediction


Fig. 10 shows the comparison between the predicted and actual values of the SEDM decline curve parameters. Fig. 11 shows the comparison of the training and test data errors for the EURs. It may be observed from this figure that SVM gives the least error. Once the decline parameters are predicted for a test well, its decline curve can be quickly generated and compared with the actual production rate data. This is shown in Fig. 12 for some of the test wells, using the machine learning algorithm with the least test data RMSE for the EURs.

Figure 10—SEDM Decline Model parameter prediction using different Machine Learning Algorithms

Figure 11—Test data error comparison for predicting SEDM_EUR



Figure 12—SEDM Decline Model based prediction for test wells using SVM

Duong Model Prediction


Fig. 13 shows the comparison between the predicted and actual values of the Duong decline curve parameters. Fig. 14 shows the comparison of the training and test data errors for the EURs. It may be observed from this figure that RF gives the least error. Once the decline parameters are predicted for a test well, its decline curve can be quickly generated and compared with the actual production rate data. This is shown in Fig. 15 for some of the test wells, using the machine learning algorithm with the least test data RMSE for the EURs.

Figure 13—Duong’s Decline Model parameter prediction using different Machine Learning Algorithms

Figure 14—Test data error comparison for predicting DUONG_EUR



Figure 15—Duong’s Decline Model based prediction for test wells using RF

Weibull Model Prediction


Fig. 16 shows the comparison between the predicted and actual values of the Weibull decline curve parameters. Fig. 17 shows the comparison of the training and test data errors for the EURs. It may be observed from this figure that RF gives the least error. Once the decline parameters are predicted for a test well, its decline curve can be quickly generated and compared with the actual production rate data. This is shown in Fig. 18 for some of the test wells, using the machine learning algorithm with the least test data RMSE for the EURs. Fig. 18 also shows that, for a few wells, the WEIBULL-RF combination results in a higher mismatch during the initial period of the predictions, even though the cumulative production matches closely.

Figure 16—Weibull’s Decline Model parameter prediction using different Machine Learning Algorithms

Figure 17—Test data error comparison for predicting WEIBULL_EUR



Figure 18—Weibull’s Decline Model based prediction for test wells using RF

Fig. 19 compares the overall performance of the three machine learning models with the four decline models. It may be seen here that the SEDM-SVM and WEIBULL-RF combinations predict the EUR with comparatively lower errors. However, as previously mentioned, WEIBULL-RF may predict early time rates with lower accuracy in some wells. Another observation is that the MARS algorithm consistently underperforms: it has higher RMSE than the other algorithms and R2 values farthest from unity.

Figure 19—Test data error comparison for predicting EURs

Figs. 20 through 22 show the relative rankings of the various variables used to predict the response variables. In total, there are 12 cases (three machine learning algorithms and four decline curve models). Overall, initial flow rate is the most important variable, since it is ranked highest with low rank variance. Fig. 21 shows that PROP_TOTAL is another highly ranked variable, but with higher variability compared to initial flow rate. Fig. 22 shows the frequency distribution of the ranks. It can be observed that, between PROP_TOTAL and TVD_HEEL, TVD_HEEL is more likely to be ranked higher than PROP_TOTAL, except in one outlier case where TVD_HEEL ranks lowest. This is the reason the overall rank variance of TVD_HEEL is larger than that of PROP_TOTAL. Therefore, taking all these figures into account and neglecting the outlier case, TVD_HEEL can be considered as important as PROP_TOTAL.

Figure 20—Ranking distribution among all machine learning models



Figure 21—Ranking variance versus Average Rank

Figure 22—Ranking frequency distribution among all machine learning models

Conclusions
1. Rate decline model parameters for the Arps, SEDM, Duong and Weibull decline models can be linked to well completion and location variables using machine learning.
2. The most suitable machine learning algorithms for predicting the decline curve parameters of each decline model have been identified.
3. Rate decline curves are predicted accurately for the four decline models and compared with the observed data of test wells.
4. Overall, SEDM with SVM is found to be more suitable for predicting flow rates than the other combinations of decline model and machine learning algorithm.
5. The relative variable importance study shows initial flow rate to be the most influential predictor, followed by total proppant amount and total vertical depth.

Acknowledgements
The authors would like to acknowledge financial support from the members of the Texas A&M University Joint Industry Project, MCERI (Model Calibration and Efficient Reservoir Imaging). The authors would also like to thank Drillinginfo for providing a license for their database.

Nomenclature
a Intercept Constant (Duong Model)
α or alpha Scale parameter (Weibull Model)
b Decline coefficient (Arps)
BMA Bayesian Model Averaging
DCA Decline Curve Analysis
Di Initial Decline Rate (Arps)
EUR Estimated Ultimate Recovery
γ or gamma Shape parameter (Weibull Model)
GCV Generalized Cross Validation
m Slope parameter (Duong Model)
M Carrying capacity (Weibull)
MARS Multivariate Adaptive Regression Splines
n Exponent parameter (SEDM)
qi Initial flow rate or Maximum Flow Rate
q1 Flow rate during first month (Duong Model)
RF Random Forest
RMSE Root Mean Square Error
R2 Coefficient of Determination
RSS Residual Sum of Squares
SEDM Stretched Exponential Decline Model
SVM Support Vector Machine
SVR Support Vector Regression
t Time elapsed during well production
τ Characteristic time (SEDM)
TVD Total Vertical Depth

References
Aizerman M. A., Braverman E. M., and Rozonoer L. I. 1964. Theoretical foundations of the potential function method in
pattern recognition learning. Automation and Remote Control 25: 821–837.
Arps, J. J. 1945. Analysis of Decline Curves. Trans. AIME 160: 228–247.
Beven, K. J. 2000. Uniqueness of place and process representations in hydrological modelling. Hydrology and Earth
System Sciences 4, 203–213
Beven, K. J., and A. Binley. 1992. The future of distributed models: Model calibration and uncertainty prediction.
Hydrological Processes 6, 279–298.
Breiman, L. 2001. Random Forests. Machine Learning 45 (1): 5–32.
Cortes, C. and Vapnik, V. 1995. Support vector networks. Machine Learning 20: 273–297.
Shalizi, C. 2006. Statistics 36-350: Data Mining. Fall 2006 online lecture notes.

Draper, D. 1995. Assessment and propagation of model uncertainty. Journal of the Royal Statistical Society: Series B
57, no. 1: 45–97.
Duong, A. N. 2011. "Rate-Decline Analysis for Fracture-Dominated Shale Reservoirs". SPEREE 14 (3): 377–387. http://
dx.doi.org/10.2118/137748-PA.
Hoeting, J. A., D. Madigan, A. E. Raftery, and C. T. Volinsky. 1999. Bayesian model averaging: A tutorial. Statistical
Science 14, no. 4: 382–417.
Johnston, D. C. 2006. Stretched Exponential Relaxation Arising From a Continuous Sum of Exponential Decays. Phys.
Rev. B 74: 184430
Kass, R. E., and A. E. Raftery. 1995. Bayes factors. Journal of the American Statistical Association 90, 773–795.
Mishra, S. 2012. A New Approach to Reserves Estimation in Shale Gas Reservoirs Using Multiple Decline Curve Analysis Models. Paper SPE 161092 presented at the SPE Eastern Regional Meeting, Lexington, Kentucky, USA, 3-5 October.
Friedman, J. H. 1991. Multivariate Adaptive Regression Splines. The Annals of Statistics. Vol. 19. No. 1: 1–141.
Friedman, J. H. 1993. Fast MARS Stanford University Department of Statistics, Technical Report 110.
Nilsson, N. J. 1965. Learning machines: Foundations of Trainable Pattern Classifying Systems. McGraw-Hill.
Smola, A. J. and Schölkopf, B. 2004. A Tutorial on Support Vector Regression. Statistics and Computing 14 (3): 199–222.
Valko, P. P. and Lee, J. W. 2010. A Better Way To Forecast Production From Unconventional Gas Wells. Paper SPE
134231 presented at the SPE Annual Technical Conference and Exhibition, Florence, Italy, 19-22 September.
Weibull, W. 1951. A Statistical Distribution Function of Wide Applicability. J. Appl. Mech. 18: 293–297.
