
SPE 127924

A Case Study: “Production Management Solution – Back Allocation and Advanced Well Monitoring” – Litoral de Tabasco Asset
G. Olivares, O. Perez, Pemex
C. Escalona, C. Vargas, M. Baarda & J-C. Vernus, Schlumberger

Copyright 2010, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE Intelligent Energy Conference and Exhibition held in Utrecht, The Netherlands, 23–25 March 2010.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its
officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to
reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
There has been a significant increase in recent years in the development of exploration fields in Mexican offshore
operations. Critical operational issues in managing these fields are the large volume of Excel files, disorganized acquisition,
and minimal sharing of the different sources of available data. In 2005, some PEMEX (Petroleos Mexicanos) Assets decided
to address this data gathering and analysis bottleneck by integrating all operational data into one single central data store,
allowing the Asset engineers involved in the field monitoring and diagnostic process to access a unique and shared source of
field information, i.e. “one single version of the truth”. One of the most recent developments is AILT (Activo Integral Litoral
de Tabasco), located in the Marine Southwest Region – Gulf of Mexico, as shown in Fig. 1, where this case study was
developed.
When the project started, AILT had 7 platforms and 15 wells and now consolidates all production in the area with 44 wells
and 24 platforms using shared facilities. This required the implementation of a workflow to accurately determine the
production of each well by applying back allocation and near real-time operational conditions using multiphase measurements.
A series of automated workflows integrates all the operational data, from real-time monitoring of the wells to the official
accounting and reporting system.
The automated workflows have provided a cost-effective solution that minimizes uncertainties in estimating production
volumes at the well level for the AILT area and improves the production consolidation required for official accounting purposes.
The automated workflows have further enabled the production team to integrate all operational data and to analyze and monitor the
performance of each well as part of the integrated facilities network. In addition, rule-based concepts within these workflows
assist in monitoring reservoir drawdown and proactively prevent contractual penalties related to delayed batch deliveries to Pemex
refineries.

Introduction
The petroleum industry is moving towards intelligent field implementations in order to optimize production,
reduce the engineering process time needed to make proactive decisions, and optimize the use of the available resources. Pemex is clearly
following this path to achieve those objectives.
Oil and gas companies have to manage vast amounts of existing data, applications, and infrastructure. At the same time the
amount of data available continues to grow. The data intensity from permanent sensors has grown by a factor of six in the past 5
years.
The biggest problem production engineers face today is how to turn the large quantities of data provided by real-time
systems into useful information with which production can be optimized. Other problems engineers face are:
• Quality of the information.
• High frequency data management.
• Production Engineering applications are not suited to handle high frequency data.
• Lack of integration between these applications to enable streamlined analysis and diagnosis.

The fundamental objective in overcoming these problems is to put a process in place that enables the best use of this high
frequency data and is oriented towards optimizing the decision-making process. Today's technological advances in the
industry allow for such an integrated solution. There are numerous shared experiences and references from all around the world,
each with a varying degree of complexity of integration:

• Integrated Data Management.
• Workflow Automation.
• Real-time / on-line rate estimation (also referred to as “virtual measurement”).
• Operational event detection.
• Diagnostics and Optimization.

The real added value of such an implementation is the ability to rely on a system that supports proactive and
preventive decisions. When a problem occurs suddenly, it is essential to keep the time between detection of the event and its
associated remedial action to a minimum. Such are the benefits of an intelligent field implementation solution, as shown in Fig.
2.

Problem definition
The AILT Asset is a reasonably new asset with great development potential and a significant potential to increase
production from different naturally fractured reservoirs such as JSK, KM, KS and TCR. These reservoirs are characterized by
high reservoir pressures and temperatures (HPHT). Today AILT operates 42 wells and 24 offshore platforms and is one of the
pioneering assets in Mexico in the production of light and superlight oil with API gravity between 36 and 42.
From a measurement standpoint the asset faces some challenges, as the main gathering platform handles the production for
the entire region, including the production of other assets. The setup of the main platform does not allow the AILT asset to have its
own dedicated testing facility, which generates uncertainties in the overall well production allocation. As illustrated in
Figs. 3a and 3b, the well test conditions do not match the real operational conditions as measured by the AILT asset using
permanent sensors or daily operator readings. Well tests are the main reference for the allocation process, and a mismatch between
well test and daily readings clearly exposes the challenge that AILT faces in achieving a precise and equitable allocation process.
The most critical consequence was the asset's inability to defend its own production contribution within the region and therefore
to demonstrate its performance.
From a data management standpoint, and adding complexity to an already challenging situation, the asset faced the
classic issue of rapid field development without proper attention to the management of the data.
This omission resulted in hundreds of Excel files, disorganized data acquisition, information isolated on individual engineers' desks or
computers, no collaboration process, and decisions based on limited data availability.
To minimize the impact of this setup of the main gathering platform, the Asset decided to implement an innovative well
production allocation methodology. The methodology overcomes the absence of proper measurements at the test facility,
decreases the allocation uncertainty, and identifies the best and worst performing wells. In 2005, the Asset Operation
Department of PEMEX decided that all the operational data should be integrated and stored in a dedicated operational
database accessible to all engineers involved in the surveillance of the AILT fields. The Asset Operation Department also
evaluated the design and implementation of an Advanced Well Monitoring solution to meet the following requirements:
• Data quality control.
• Business rule-based alarms.
• Trend violation.
• Monitoring calculation parameters.
• User notifications using automated workflows.

The common practice of the operation engineers at month's closing was to copy the daily and monthly production data from
the operational database into their own Excel worksheets to perform the field-to-well allocation. Despite the existence of a
central data source deployed in the first phase of the asset solution, the operation engineers were still spending a significant
amount of time performing this monthly allocation exercise manually.
The solution specification of the Asset Operation Department states that the efficiency of the processes needs to be improved
by implementing a more rigorous back allocation procedure, aligned with the asset objective of decreasing the uncertainty of its
production reconciliation. The fundamentals of this design were to develop a procedure that reconciles the daily well
operational conditions with the production facility measurements, leveraging both investments. By
converging both levels of information, an improved understanding of the field production streams is achieved and analysis
conflicts are avoided.
The final request of the Asset Operation Department was related to the accuracy and automation of the back allocation process.
Realistic flow rates needed to be assigned to each well using its latest operational conditions, calibrated with mobile
multiphase measurement units. Today, this represents the most cost-effective solution to minimize the uncertainty of the
allocation process.
For an overall solution design that improves data management by centralizing real-time, daily, monthly and sporadic data,
and that creates a tighter relationship between daily well operational conditions and facility measurements, the two main design
requirements were:
• Integration of the Information.
• Automation of back allocation.

Proposed solution
Production back allocation is a process that takes the total field production and prorates and distributes it among the contributing
wells based on each well's estimated production. The complexity of the allocation can range from simple single-battery
allocations up to field-wide multi-flow networks comprising multiple plants, gathering facilities, and wells.
In most cases the estimated performance of the wells is based on well tests, well events, polynomial flow characteristic equations,
and/or fluid phase compositions.
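As a minimal illustration of the proration at the heart of this process, the sketch below distributes a measured field total across wells in proportion to their estimated rates. The well names and volumes are hypothetical; the production system applies the same idea per phase and per day, with facility downtime taken into account.

```python
def back_allocate(field_total: float, estimates: dict) -> dict:
    """Prorate the measured field total across wells by their estimated rates."""
    factor = field_total / sum(estimates.values())
    return {well: rate * factor for well, rate in estimates.items()}

# Hypothetical daily oil volumes: estimates sum to 10,500 STB vs 10,000 measured
print(back_allocate(10_000.0, {"WELL-01": 4_000.0,
                               "WELL-02": 3_500.0,
                               "WELL-03": 3_000.0}))
```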
The specifications of an efficient allocation solution are:
• Oil, gas, water phase allocation on daily and monthly basis considering down-time of all facilities involved
in the production network.
• Generation of accurate reproducible results used to create regulatory and partner reports.
• Data export for integration with legacy accounting systems for revenue accrual, joint interest billing, and
reserves calculation.

In addition to the standard back allocation specifications, the designed solution enabled the definition of various control tasks,
such as well monitoring, accurate well potential calculation, opportunity identification, and trend monitoring to prevent
undesired events, as well as task automation that integrates different disciplines (reservoir and production engineers) to avoid
deviation from the Asset production forecast.
The deployed operational database and the designed automated workflows give the production engineers the capability
to “screen” data from different sources and frequencies on a single time scale and to synchronize operational events, such as
downtime related to a well or to any production facility of the asset. Various publications recognize that “one single version of
the truth” must be applied globally when monitoring and analyzing field production in order to provide a valuable diagnostic platform
for the Design and Operation Departments of the asset.
In order to streamline the process and achieve the expected automation, the AILT asset had to rely on a single platform that
could cover the workflow automation functionality without depending on the external applications that engineers had used to generate
several modified versions of the original data. Concerns about data interpretation or modification are especially relevant for
well real-time and well test data. When integrated and related to field monitoring, such information can drastically help
the engineer to handle and optimize the operations of an asset in a timely manner.
The AILT asset has been instrumented with basic equipment, such as controllers and recorders, to measure pressure and
temperature at the bottom and at the surface of the well, as well as temperatures and pressures at the
production facilities. The large quantity of data generated by these measurements makes manual handling and
management impossible. An example is where the bottomhole pressure is extracted from a historian containing pressure
transient data, while other high-frequency measurements of the well are not used in the analysis.
In the majority of offshore cases the data acquired by a SCADA (Supervisory Control And Data Acquisition) system is stored
in a so-called data historian. The high-frequency data in a data historian is usually not kept for longer than one and a half months.
It is therefore important to determine how to treat this data for long-term storage, since it describes the historical behaviour of the
well, which is important for trend analysis of the well and of the field as a whole. In general
the high-frequency data is transferred from the offshore data historian to an onshore location. The challenge is how to manage
the high-frequency data: it is of vital importance to the work of the engineer, yet it consumes a large share of the resources of
any system that uses it. Other references in the literature address the same challenge, although not with the same integrated
objective.
Most applications available in the market today are not well suited to handling multiple parameters where high-frequency data is
mixed with low-frequency data. Usually the data has to be manipulated in order to view it on the same time scale; one example is
where high-frequency pressure data needs to be viewed alongside daily production and/or periodic well tests. Fig. 4 shows
the implemented architecture at the database level and the connections used by the different processes developed.
The daily workflow at AILT calls for the integration of data at various frequencies in order to monitor the conditions of the
wells closely and avoid performance losses or unexpected shut-ins. Another advantage of the combined
monitoring of data at different frequencies is the ability to monitor the influence of injection on production wells, to detect sealing
faults and effects that only become quantifiable months after this type of process has begun, and in general to address
uncertainties related to the administration and reservoir management of the asset. Fig. 5 shows the general workflow solution
for performing the daily back allocation and the associated reporting.
AILT has established an automated process to estimate the production of the wells using the latest operational conditions of
the wells. This process covers three levels:
• Measurements at the main collection platform (ABK-D).
• Fields and,
• Wells.

a. Data Flow: Controllers are installed in the wells and in the surface facilities to measure several parameters and to
record the conditions during daily operations, supporting the engineers in their decision-making process. The main
parameters, measured at different frequencies, are:

• Bottom Hole Pressure (BHP).
• Bottom Hole Temperature (BHT).
• Well Head Pressure (WHP).
• Well Head Temperature (WHT).
• Choke size.
• Pressure and temperature measurements from flow lines leaving and entering platforms.
• Oil and gas sales volumes.
• Production rates (oil, gas and water).
• Flow meter rates, well test rates, etc.

b. Integration of Data: The first step in gaining access to the above-mentioned data at high frequencies is a link
between the data historians and a long-term data storage hub (Data Hub). The Data Hub is a platform that aggregates the
high-frequency data from the historian to 15-minute intervals or higher. The automatic acquisition of data is realized by
sampling the data in real time through SQL (Structured Query Language) interfaces; the data is subsequently stored at a predefined
acquisition interval. During acquisition, the high-frequency data is turned into manageable chunks and saved.
This process consists of the series of steps described below:

• Data cleansing: The process of eliminating data that is marked as “noise”, i.e. usually data that is out of the range of
pre-defined acceptable values. Noise in the data can be reduced using statistical calculations, which also help to reduce
the quantity of information: high-frequency samples can introduce small fluctuations that are damped by taking the
average of consecutive values of the same series. Cleansing is achieved by setting upper and lower limits and removing
values that fall outside those boundaries; this simple method removes up to 80 % of erroneous data in the majority of
cases. Aggregation of information (from sub-second to 15 minutes) and event detection compare the real measurement
against expected values or limits established by the engineer, for example comparing the measured rate against the rate
calculated from a theoretical well model. If the value is outside the pre-established ranges, the system automatically
generates an alarm, and an alarm notification for the affected wells is sent to the engineer by e-mail (see the code sketch
after this list).

• Rules: Rules are used to detect and eliminate erroneous data caused by out-of-bounds values, sensor faults, or
transmission errors. The results can be used to generate alarms, update statistics and schedule preventive maintenance
of sensors. Rules can also assert the presence or absence of specific conditions. Preparing the data with cleansing rules
leads to an improved quality of the high-frequency data before it is used in statistical operations.
• Mathematical calculations: These are used to clean information, for example by taking the difference between the well
head temperature (WHT) and the ambient temperature in warm weather, so that important sources of information can be
emphasized. Of the three levels mentioned before, the main uncertainty comes from the wells. To reduce this
uncertainty the process uses the Gilbert(1) equation to estimate the flow rate of each well, with the operating conditions
obtained from multiphase measurements. The use of this equation represents the most cost-effective way to minimize
the uncertainty of the well's production. Furthermore, due to changes in operating conditions, in particular when the
flow of the well changes from critical to sub-critical, AILT needed to implement an additional correlation to estimate the
flow rate. The Gilbert equation is only valid when the well is in critical flow. There are many rules of thumb for the
boundary between critical and sub-critical flow; the most common is that the upstream pressure should be at least twice
the downstream pressure, but users can often derive their own transition ratio from real data, and for the AILT asset the
ratio is about 0.65. Fig. 6a shows how to identify the range between critical and sub-critical well behaviour; the colors
denote different wells and the red line defines the tolerance range. When the critical-flow condition is not met, the well
is in sub-critical flow and a different equation is needed; this pressure ratio is thus used to determine whether a well is in
critical or sub-critical conditions. For sub-critical flow conditions the Ashford & Pierce(2) correlation is used, since it
emulates the behaviour of the well best according to empirical tests. Fig. 6b shows the resulting patterns used to obtain
the new flow rate estimation.
Additionally, although the empirical correlations give good results and can solve practical production issues such as
rate allocation, with deviations no greater than 3–5 %, the AILT asset has implemented neural network models to
estimate the best flow rate for each well from all the operational information obtained from the meters, further reducing
the uncertainty in the flow rate allocation.
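To make the cleansing and rate-estimation steps above concrete, the following Python sketch filters out-of-range samples, averages them into 15-minute points, and switches between the Gilbert and Ashford & Pierce forms (Appendices 1 and 2) based on the choke pressure ratio. The function names, the calibration constants c_g and c_ap, and the pre-computed alpha and beta terms are illustrative assumptions, not the asset's actual implementation.

```python
from statistics import mean

def clean_series(samples, lo, hi):
    """Bound-based cleansing: drop values outside pre-defined limits."""
    return [s for s in samples if lo <= s <= hi]

def aggregate(samples, n):
    """Average n consecutive samples (e.g. sub-second data -> 15-minute points)."""
    return [mean(samples[i:i + n]) for i in range(0, len(samples), n)]

def estimate_rate(p1, p2, phi, glr, c_g, c_ap, alpha, beta, x_crit=0.65):
    """Estimate liquid rate from the latest choke conditions.

    p1, p2      : upstream / downstream choke pressures
    phi         : choke diameter, 64ths of an inch
    c_g, c_ap   : constants calibrated against multiphase measurements
    alpha, beta : Ashford & Pierce terms (Appendix 2), pre-computed per well
    x_crit      : critical/sub-critical transition ratio (about 0.65 at AILT)
    """
    x = p2 / p1
    if x <= x_crit:
        # Critical flow: Gilbert equation (Appendix 1) solved for q_L
        return c_g * p1 * phi**1.89 / glr**0.546
    # Sub-critical flow: Ashford & Pierce equation (Appendix 2) solved for q_L
    return c_ap * 1.9706 * phi**2 * alpha * beta

# Hypothetical usage: clean and average raw WHP samples, then estimate the rate
whp = aggregate(clean_series([182.0, 181.5, 9999.0, 182.3], 0.0, 500.0), 3)[0]
q_l = estimate_rate(p1=whp, p2=110.0, phi=32, glr=800.0,
                    c_g=0.1, c_ap=0.9, alpha=0.8, beta=250.0)
```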

After setting up the above-mentioned steps, incorporating the experience and knowledge of the engineers and thoroughly
testing it, the whole process is now fully automated. The engineers do not need to intervene in the acquisition of data, because it
is automatically transferred to their desktops and the back allocation is performed automatically by applying the specific
workflow, as shown in Figs. 7a and 7b. The integration of the data provides all the relevant means to manage the asset holistically (operations,
production and reservoir interests) from the same base of information. All disciplines within AILT have access to the same data,
which can then be used in their own specialized applications.

c. Automation: The role of automation in surveillance and reservoir monitoring does not imply that the field manages
itself without the intervention of the engineer. In oil production operations the
well-known 80/20 rule applies: 20 % of the work of the engineer generates 80 % of the value. However, areas exist
where the management of data takes 80 % of the work and delivers only 20 % of the value. This is where
automation plays an important role, by moving from 80 % loss to 80 % value.
The engineers are enabled to define rules and set up models in order to detect events. Several methods and different
graphs are available for the monitoring of reservoirs and wells. Several interfaces provide additional and complementary
capabilities (a minimal sketch follows this list):

• Generation of Rules: This functionality allows the engineer to define process and business rules. Predefined
functions are available in order to create rules, ranging from simple to complex, without the need to write the rules
from scratch. The integrity of the rule can be evaluated in advance. The rules can be applied to a single entity
(completion, well, tag, sensor, etc.) or to a class of entities or a certain type of parameter, for
example all pressure measurements from sensors.
• Automatic Task Planner: This functionality allows the engineer to schedule tasks to run at a pre-defined interval,
for example once a day in the morning. In addition, it is possible to activate tasks from other tasks or based on certain
events in the system. The structure and dependencies of all the automatic tasks are shown in a list where the
engineer can apply filters and/or classifications to find commonalities.
• Alarms and Notification: Tasks can write their results as notifications, which can be used to increase the monitoring
of and responsiveness to occurrences in the field.
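As an illustration of how such rules and notifications might fit together, the sketch below defines a simple deviation rule and emits an alarm-style notification when it fires. The Rule class, the 10 % threshold and the well name are hypothetical; the actual platform exposes predefined rule-building functions rather than raw Python.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str                               # rule identifier
    entity: str                             # well, completion, tag or sensor
    check: Callable[[float, float], bool]   # (measured, expected) -> violated?

def run_rules(rules: List[Rule], measured: Dict[str, float],
              expected: Dict[str, float], notify) -> None:
    """Evaluate each rule and send a notification when it is violated."""
    for rule in rules:
        if rule.check(measured[rule.entity], expected[rule.entity]):
            notify(f"Alarm: {rule.name} violated on {rule.entity}")

# Hypothetical rule: measured rate deviating more than 10 % from the model rate
deviation = Rule("rate-deviation", "WELL-01",
                 lambda m, e: abs(m - e) / e > 0.10)
run_rules([deviation], measured={"WELL-01": 900.0},
          expected={"WELL-01": 1100.0}, notify=print)
```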

The second part of the automation deals with the comparison between the measured data and data calculated from
historical trends and models of the different fields, for example decline curve trends, well models, numerical reservoir models,
and the relationship between injection and production, as shown in Fig. 8.
Another example is the ability to compare the measured data, the estimated production and the simulation results for
production, and verify them against operational conditions. Based on this, alarms can be established and applied to the
simulated data. The application also needs to be able to calculate and monitor KPIs (Key Performance Indicators) for
reservoirs, fields and wells. The KPIs, as pre-defined by PEMEX, apply not only to the performance of the fields but also to
economic profit and loss.
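A KPI of this kind can be as simple as the ratio of measured to simulated production, monitored against a tolerance band. The sketch below is a hypothetical illustration of that idea, not PEMEX's actual KPI definitions.

```python
def production_kpi(measured: float, simulated: float) -> float:
    """KPI: measured production as a fraction of the simulated value."""
    return measured / simulated

# Flag wells whose KPI falls outside a hypothetical 90-110 % tolerance band
wells = {"WELL-01": (9500.0, 10000.0), "WELL-02": (7200.0, 9000.0)}
for well, (meas, sim) in wells.items():
    kpi = production_kpi(meas, sim)
    if not 0.90 <= kpi <= 1.10:
        print(f"{well}: KPI {kpi:.2f} outside tolerance")
```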

Results
AILT has invested significantly in the automation of the various facilities with SCADA systems to allow near real-time
monitoring of its fields. As a result, the automated system enables the engineers to monitor all operating conditions
at high frequency, review the different operational alarms and instantly detect sudden changes in the behaviour and trends of the
fields. One example of the advantage of the new automated workflows is the ability to detect a common problem such as
scale buildup in the choke, as shown in Fig. 9. These kinds of problems are now easier to predict and can be corrected in time,
before the situation escalates.
At this point not all the facilities in AILT have been automated. For this reason AILT has developed an advanced process
to monitor those facilities that do not have high-end data collection systems. Specific operational business rules have been
defined to track the anomalous behaviour that wells and facilities exhibit on a day-to-day basis. These rules minimize the
response time required to react to the various problems that can arise, thereby producing better results by preventing downtime or
diminished production of the wells.
With the newly developed process, KPIs have been defined to monitor behavioural trends, whereby the real conditions can be
compared with the behaviour simulated from the operational conditions; for example, Fig. 6a shows how to identify the range
between critical and sub-critical well behaviour.
The advanced monitoring allows for the definition of alarms to identify situations in wells such as:
• Critical Flow Conditions, Fig. 6a.
• Subcritical flow conditions, Fig. 6a.
• Pressure variations in the well.
• Temperature variations at the wellhead.
• Changes in the choke diameter.
• Validation of well tests against theoretical models.

In addition the newly developed workflows allow the operational production team to establish the following:
• Better and faster visualization.
• Real time well surveillance and monitoring.
• Well performance monitoring (KPI).
• Quick decisions using large data sets.
• Use data to its full potential.
• Identify problematic wells.
• Capitalize investment in SCADA systems.
• Integrate different sources and frequencies.
• Create the bridge between high frequency data and the reservoir.
• Automatic and manual quality control of SCADA data or daily captured data.
• Event predictions.
• Create multiple production scenarios.
• Estimation of production based on real data.
• Decline curves.
• Integrate people and processes.
• Eliminate surprises.
• More time available for analysis and optimization.
• Allocation of data.
• Historical storage.
• Official reports.

Today it is possible to conduct well surveillance on time, as depicted in Fig. 10a, which shows the template used for all
measurements at the well, platform and production lines. In addition, data mining techniques such as Self-Organizing Maps
(SOMs) and neural networks are used to transform vast amounts of data into an easily understandable context. SOMs, developed
by Professor Teuvo Kohonen, are so-called unsupervised learning neural networks. This analysis technique takes
all the historical inputs of the wells, such as temperature, pressure and flow rate, and determines commonalities or groups
among those inputs. The results are then shown in a 1- or 2-dimensional color plot, from which it is possible to determine the
operational conditions under which each well produces best, as shown in Fig. 10b.
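For readers unfamiliar with the technique, the following sketch is a minimal Kohonen SOM in plain numpy that groups normalized well operating points. The grid size, learning schedule and random sample data are illustrative assumptions; the asset used commercial tooling for this analysis.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Tiny Kohonen SOM grouping operating points (e.g. [WHP, WHT, rate])."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for t in range(iters):
        x = data[rng.integers(len(data))]          # random training sample
        lr = lr0 * np.exp(-t / iters)              # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)        # shrinking neighborhood
        dist = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dist), dist.shape)  # best-matching unit
        g = np.exp(-((coords - bmu) ** 2).sum(-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)         # pull toward sample
    return weights

# Hypothetical normalized inputs per sample: [WHP, WHT, oil rate]
samples = np.random.default_rng(1).random((500, 3))
som_weights = train_som(samples)  # map cells with similar weights form groups
```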

Conclusions
• The efforts put into developing this automated workflow have also borne fruit with other departments such as
Reservoir, Evaluation, Completions and Drilling. The improved workflow allows for multi-disciplinary workflows
to improve the decision-making process within the aforementioned departments. This improvement is realized by
feeding the results into a tool called Integrated Asset Modeler, with which producing assets and fields can be
managed by performing scenario and economic analyses. Based on these scenarios the best technical and
financial option can be chosen for the Asset. The much-improved workflow also allows the engineers to shift their
focus from collecting and validating data to their real work of analyzing and mining the data and thinking of ways to
improve the production of the wells.
• The improved workflow also allows for better decisions when irregular behaviour occurs. For example, it is now
easier to predict when a well will shut down due to operational conditions, and easier to decide when to
schedule physical inspections of the equipment, thus preventing economic losses from unscheduled
production losses. This has a positive impact on the financial statement and gross volumetric production of the
AILT asset. Based on the improvements mentioned in this paper, production losses of 1,900 BPD (or an estimated
$1,087,000 per day) have been prevented through increased wellhead pressure and regular calibration of the variable
chokes.
• The system is a tool that adapts to the needs of the engineering team to facilitate the decision-making process in a
dynamic way. By concentrating the scattered production data in a single database, avoiding recapture of the data
and providing increased reliability, the response time has improved by up to 60 % thanks to the automation of the workflow.
• As of today, the interface between the automated workflow and PEMEX's official production system, SNIP (Sistema
Nacional de Información de Producción), is pending approval. This interface will automate the transfer of data
between the two systems, thus avoiding manual duplication of data.

Acknowledgment
We are grateful to Pemex and in particular to the management team of Activo Integral Litoral de Tabasco (Operations and
Information Technology Departments) for allowing publication of the data included in this paper. We wish to thank the
implementation team for their efforts in materializing the idea of an automatic surveillance system using a combination of manual
and automatic data capture and advanced well monitoring.

Nomenclature
API = American Petroleum Institute
Bo = Oil formation volume factor, bbl/STB
C = Constant, dimensionless
GOR = Producing gas-oil ratio, scf/STB
GLR = Producing gas-liquid ratio, scf/STB
JSK = Jurassic Kimmeridgian formation
K = Specific heat ratio
KM = Middle Cretaceous formation
KS = Upper Cretaceous formation
OC = Operating conditions
p = Absolute pressure, kg/cm²
p1 = Upstream choke pressure, kg/cm²
p2 = Downstream choke pressure, kg/cm²
q = Flow rate at standard conditions, STB/d [MMcf/d]
Rs = Solution GOR at p1 and T1, scf/STB
T1 = Upstream choke temperature, °C
TCR = Tertiary formation
WOR = Producing water-oil ratio, STB/STB
X = Choke downstream-to-upstream pressure ratio, p2/p1
Z1 = Gas compressibility factor at p1 and T1

Greek symbol
α = Ashford & Pierce equation parameter
β = Ashford & Pierce equation parameter
Ф = Choke diameter (64ths in)
γ = Specific gravity at T1 and P1, (dimensionless)

Subscripts
A&P = Ashford & Pierce
G = Gilbert
g = Gas
h = Head
L = Liquid
o = Oil
w = Well

References
1. Gilbert, W.E.: “Flowing and Gas-Lift Well Performance”, API Drilling and Production Practice (1954), 126.
2. Ashford, F.E. and Pierce, P.E.: “The Determination of Multiphase Pressure Drops and Flow Capacities in Down-Hole Safety
Valves (Storm Chokes)”, SPE 5161, presented at the SPE Annual Fall Meeting, Houston, Texas, October 1974.
3. Holland, J. (Murphy Exploration & Production), Oberwinkler, C., Huber, M. and Zangl, G. (Schlumberger Information
Solutions): “Utilizing the Value of Continuously Measured Data”, SPE 90404.
4. Software used: Pipesim*, Decide!*, ADM*.

Fig. 1 - Map of Mexico illustrating the geographic location of the Litoral de Tabasco asset

Fig. 2 - Growth curves of the solution and status of the project



Fig. 3a - Problem definition: different wellhead pressures

Fig. 3b - Problem definition: error in back allocation and losses



Fig. 4 - Architecture solution

Fig. 5 - General workflow solution



Fig. 6a - Well range between critical and sub-critical flow

Fig. 6b - Well patterns used to obtain the new flow rate estimation



Fig. 7a - Detailed Data flow diagram for the Workflow Solution in AILT

Fig. 7b - Detailed Data flow diagram for the Workflow Solution in AILT (Cont.)

Fig. 8 - Examples of behavior and trends of wells belonging to the same platform

Fig. 9 - Example of how proactive decisions help avoid production losses when the choke is plugged by scale

Fig. 10a - Well monitoring for well control using different operational data

Fig. 10b - Well optimization for well control using different operational data

Appendix 1
Gilbert Equation(1)

$$C_G = \frac{q_L\,\mathrm{GLR}^{0.546}}{p_{wh}\,\phi^{1.89}}$$

Appendix 2
Ashford and Pierce Equation(2)

$$C_{A\&P} = \frac{q_L}{1.9706\,\phi^{2}\,\alpha\,\beta}$$

where:

$$\alpha = (B_o + \mathrm{WOR})^{-0.5}$$

$$\beta = \frac{\left[\dfrac{0.00504\left(\dfrac{K}{K-1}\right)(T_1+460)\,Z_1\,(\mathrm{GOR}-R_s)\left(1-X^{\frac{K-1}{K}}\right)+p_1\,(1-X)}{\dfrac{\left(62.4\,\gamma_o+0.01353\,\gamma_g\,\mathrm{GLR}+67\,\mathrm{WOR}\right)^{2}}{62.4\,\gamma_o+0.01353\,\gamma_g\,R_s+67\,\mathrm{WOR}}}\right]^{0.5}}{1+\dfrac{0.00504\,(T_1+460)\,Z_1\,(\mathrm{GLR}-R_s)\,X^{-1/K}}{p_1}}$$
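A small Python transcription of the two appendix equations, as reconstructed above, may help readers check units and values; the sample numbers are purely hypothetical.

```python
import math

def gilbert_constant(q_l, glr, p_wh, phi):
    """Appendix 1: C_G = q_L * GLR^0.546 / (p_wh * phi^1.89)."""
    return q_l * glr**0.546 / (p_wh * phi**1.89)

def ashford_pierce_beta(k, t1, z1, gor, glr, rs, p1, p2, gamma_o, gamma_g, wor):
    """Beta term of Appendix 2, as reconstructed above."""
    x = p2 / p1                                   # pressure ratio X = p2/p1
    top = (0.00504 * (k / (k - 1)) * (t1 + 460) * z1 * (gor - rs)
           * (1 - x ** ((k - 1) / k)) + p1 * (1 - x))
    dens = ((62.4 * gamma_o + 0.01353 * gamma_g * glr + 67 * wor) ** 2
            / (62.4 * gamma_o + 0.01353 * gamma_g * rs + 67 * wor))
    denom = 1 + 0.00504 * (t1 + 460) * z1 * (glr - rs) * x ** (-1 / k) / p1
    return math.sqrt(top / dens) / denom

def ashford_pierce_constant(q_l, phi, alpha, beta):
    """Appendix 2: C_A&P = q_L / (1.9706 * phi^2 * alpha * beta)."""
    return q_l / (1.9706 * phi**2 * alpha * beta)

# Hypothetical values purely to exercise the formulas
beta = ashford_pierce_beta(k=1.3, t1=200.0, z1=0.9, gor=1000.0, glr=900.0,
                           rs=300.0, p1=180.0, p2=130.0,
                           gamma_o=0.83, gamma_g=0.75, wor=0.1)
alpha = (1.2 + 0.1) ** -0.5                       # alpha = (Bo + WOR)^-0.5
print(gilbert_constant(5000.0, 900.0, 180.0, 32),
      ashford_pierce_constant(5000.0, 32, alpha, beta))
```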
