
SPE 149040

Intelligent Field Data Management: Case Study


Naser, N. A. and Awaji, M. A., Saudi Aramco

Copyright 2011, Society of Petroleum Engineers

This paper was prepared for presentation at the SPE/DGS Saudi Arabia Section Technical Symposium and Exhibition held in Al-Khobar, Saudi Arabia, 15–18 May 2011.

This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material, as presented, does not necessarily reflect any position of the Society of Petroleum
Engineers, its officers, or members. Papers presented at SPE meetings are subject to publication review by Editorial Committees of the Society of Petroleum Engineers. Electronic reproduction,
distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to reproduce in print is restricted to an abstract of not
more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of where and by whom the paper was presented. Write Librarian, SPE, P.O. Box
833836, Richardson, TX 75083-3836, U.S.A., fax 01-972-952-9435.

Abstract
Real time data represents the backbone and the most critical component of the intelligent field implementation at Saudi Aramco.
Over the past few years, several guidelines and standards have been established to govern the real time data flow from the fields to
users’ desktops. These guidelines and standards are defined to meet intelligent field data acquisition and delivery requirements
for reservoir and production engineers. Saudi Aramco initiated an internal assessment of the current real time data
management practices in order to leverage the existing guidelines as well as to propose enhancements that
will add value to the intelligent field implementation.

This paper discusses and assesses, through a case study, the real time data management steps of the intelligent field
implementation in Saudi Aramco, which include: data standardization, data filtration and
compression, data quality control, data quality and transmission key performance indices, data visualization, and systems
reliability and availability. The paper also highlights the challenges in intelligent field real time data management in meeting
intelligent field data acquisition and delivery requirements. Finally, recommendations are proposed to improve real time data
management.

Introduction

The rapid evolution of intelligent field equipment, real time monitoring, and optimization technologies
over the past few years has contributed significantly to increasing production potential, recovery factor, and efficiency while
maintaining the safest and most environmentally sound practices. Saudi Aramco’s business plan has provisions to install real time monitoring
systems in the majority of its wells: thousands of wells with tens of thousands of sensors that
generate millions of data points. Thus, a robust and reliable real time data management system must be in place.

Real-time Intelligent Field Data Management is a relatively new, complex, multi-departmental and very expensive undertaking
at Saudi Aramco. Information is being gathered from multiple sources deep inside oil and gas wells, at the wellheads, at flow
meters and from points along the flow lines. The effort is collectively known as Saudi Aramco’s Intelligent-Field program
(AbdulKarim 2010)1.

Successful intelligent field implementation is based on a robust data acquisition and delivery infrastructure, which is the
backbone component of Saudi Aramco’s intelligent field surveillance layer. This infrastructure consists of two major
sub-layers. The first is the field instrumentation systems and communications. The other, equally important, sub-layer is
the real time data management. Saudi Aramco has developed and implemented a set of standards, specifications, and
guidelines to address and meet intelligent field requirements in terms of data types, frequency, resolution, security, integrity,
quality, and reliability that should be implemented in both sub-layers.

The instrumentation systems and communications sub-layer has been comprehensively assessed in (Almadi 2010)2 for
Khursaniyah field. This paper focuses on assessing the other sub-layer, the intelligent field real
time data management for Khursaniyah field. The paper provides a brief overview of the data management steps, which
include: data standardization, data filtration and compression, data quality control and reporting, and data access and
visualization. The paper also highlights the challenges in intelligent field real time data management in meeting intelligent field data
acquisition and delivery requirements. Finally, recommendations are proposed to improve real time data management.

Overview of Intelligent Field Structure

Figure 1 shows the architecture adopted in the development and implementation of Saudi Aramco Intelligent Field structure. It
consists of four major layers, namely Surveillance, Integration, Optimization and Innovation. The surveillance layer provides
continuous monitoring of production information and applies data management tools and processes to ensure the usefulness of
the data. The integration layer interrogates real time data on a continuous basis to detect reservoir behavior trends and
anomalies. Reservoir engineers are alerted to such anomalies for further analysis and resolution. The optimization layer
provides streamlined full field optimization capabilities and field management recommendations. The innovation layer
preserves knowledge of events and triggers the optimization process and corresponding actions throughout the life of the field.
This is the knowledge management and lessons-learned layer that captures and injects “intelligence” into the system.
(AbdulKarim 2010)1

Figure 1 Intelligent Field Development Layers

Infrastructure and Field to Desktop Data Flow

The following diagram (Figure 2) depicts the infrastructure set-up currently being used in the intelligent field implementation.

Figure 2 Intelligent Field Infrastructure

The data flows from the sensors to the RTU(s) and then along the network until it reaches the Plant Information (PI) system.
PI is a software/hardware system for real time data collection and monitoring. PI is used extensively in the plants to monitor
data and has been extended to monitor streaming well data. As seen in the diagram, PI operates at several levels within the
network. There are collection points in the GOSPs, area servers, and finally a server called the Dhahran PI Cluster server (also
known as PI Central). Users can tap into the PI system at any level to monitor data.

The data collection that E&P is most interested in, however, is the one flowing from PI Central to the real time database,
shown in the drawing as an orange dashed line. The second database in the diagram, to the right of the “filter” icon,
represents the first step in transforming data into information in the data warehouses.

Data Management Steps

The main data management steps that are implemented in the intelligent field program are:
• Data standardization
• Data filtration and compression
• Data quality control
• Data quality and transmission key performance indices
• Data visualization
• Systems reliability and availability

Data Standardization

Data standardization is the first and most critical step in any automated data analysis procedure. This step involves the
standardization of the following: data identification, data types, depth reference, time stamping and time zone, and unit
definition.

It is very important to apply proper naming conventions to help in data identification. In the field, instrumentation and
equipment are referenced or labeled by their associated tag numbers. Tag Naming Standardization and Configuration
Management Guidelines have been developed to unify the tag naming standard used in all PI systems across Saudi Aramco.
This standard is an extension of the Saudi Aramco Engineering Standard Instrumentation and Identification SAES-J004. (Al
Dhubaib 2008)3

Moreover, data types and units of measurement have been standardized. For each sensor, the type of reading and the unit
of measurement must be defined according to the standard. For example, the permanent downhole monitoring system
(PDHMS) has two sensors. One sensor generates pressure readings in PSIG (pressure is the data type and PSIG is the unit),
and the other sensor reads temperature (temperature is the data type and F is the unit). Also, the depth reference of the
PDHMS should be clearly defined (e.g., is it MD or TVD?).
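As a rough sketch of how such a standard could be enforced programmatically, the following Python snippet validates a sensor's configured data type and unit against a standard mapping. The mapping and function names are illustrative assumptions, not part of Saudi Aramco's actual implementation.

```python
# Illustrative sketch: validate a sensor's configured data type and unit
# against a standard mapping, as the standardization step requires.
# The mapping below is a hypothetical excerpt, not the actual standard.
STANDARD_UNITS = {
    "pressure": "PSIG",     # e.g. PDHMS pressure readings
    "temperature": "F",     # e.g. PDHMS temperature readings
}

def validate_sensor(data_type, unit):
    """Return True when the (data_type, unit) pair matches the standard."""
    expected = STANDARD_UNITS.get(data_type)
    return expected is not None and unit == expected

# A correctly configured pressure sensor passes; a misconfigured one fails.
print(validate_sensor("pressure", "PSIG"))   # True
print(validate_sensor("pressure", "F"))      # False
```

A check of this kind, applied automatically at each PI level, would catch the pressure/temperature misconfigurations discussed later in the assessment.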

Data Filtration and Compression

Automated reservoir surveillance requires the collection of engineering data at regular time intervals, sometimes as frequently
as every second. As a result, tens of thousands of data points can be collected every hour. There is therefore a need to reduce the size of
this data for efficient storage and utilization. Two strategies for filtration and compression have been set:
data summarization and data decimation.

• Data Summarization
Data summarization is the reduction of data resolution and hence a significant decrease in the amount of
data being kept in expensive online storage. A filter is applied over a certain time interval. In this case, filtration is
carried out by taking the hourly and daily averages of the validated raw data.

This reduction of data is important to enable analysis and visualization systems, which do not need high resolution
data, to fetch meaningful data efficiently. However, for applications that need high resolution analysis, such as
pressure buildup test analysis, summarization does not work since the level of detail is also significantly
reduced.
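The hourly-averaging described above can be sketched in a few lines of Python; the timestamp layout and function name here are illustrative assumptions, not the production batch job.

```python
from statistics import mean

# Illustrative sketch of data summarization: reduce high-frequency raw
# readings to hourly averages. Readings are (seconds, value) pairs.
def hourly_averages(readings):
    """Group validated raw readings by hour and average each group."""
    buckets = {}
    for ts_seconds, value in readings:
        hour = ts_seconds // 3600          # integer hour index
        buckets.setdefault(hour, []).append(value)
    return {hour: mean(vals) for hour, vals in sorted(buckets.items())}

raw = [(0, 1000.0), (1800, 1010.0), (3600, 990.0), (5400, 1000.0)]
print(hourly_averages(raw))   # {0: 1005.0, 1: 995.0}
```

A daily summary is the same reduction with a 86400-second bucket instead of 3600.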

• Data Decimation
Data decimation is the reduction of the amount of data without losing its details. Some types of
applications need high frequency events to be preserved. In this case, data decimation must be applied
efficiently in order to achieve sufficiently accurate results.

For this purpose, the Intelligent Field Data Acquisition and Delivery Guidelines have been developed (Table 1). These
guidelines were developed through consensus of representatives of various departments,
including production and reservoir management engineering (Al Dhubaib 2008)4. The guidelines call for the
use of an algorithm that can reduce the amount of data being transmitted by 70% (Al Dhubaib 2008)3 by
reducing the level of detail. The algorithm ensures that high frequency events are not deleted. It uses
the following PI report-by-exception features:

o In engineering units, if the difference between two successive values that fall in a pre-defined window width
is greater than the defined resolution (dead band) then store it.
o In engineering units, if the difference between two successive values that fall in a pre-defined window width
is less than the defined resolution (dead band) then ignore it.
o If the maximum window width (required frequency) is reached with no significant value to be stored then
store the last value.

Table 1 Intelligent Data Acquisition and Delivery Guidelines

For example, consider figure 3. Values A, B, and C are successive values generated by a pressure sensor. A is
the last reported value. Based on the algorithm, B falls within the dead band whereas C falls outside it.
Thus, value B is ignored and value C is stored. If all values fall within the dead band until the
maximum window width is reached, the last value is stored.

Figure 3 Window Width and dead band illustration
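The report-by-exception rules above can be sketched as a short Python routine. This is a simplified illustration of the dead band/window width logic, not PI's actual implementation; the function and parameter names are assumptions.

```python
# Illustrative sketch of report-by-exception decimation: a value is stored
# when it moves outside the dead band relative to the last stored value,
# or when the maximum window width elapses with nothing stored.
def decimate(samples, dead_band, window_width):
    """samples: list of (time_seconds, value); returns the stored subset."""
    stored = []
    last_time, last_value = None, None
    for t, v in samples:
        if last_value is None:
            stored.append((t, v))              # always keep the first value
        elif abs(v - last_value) > dead_band:
            stored.append((t, v))              # significant change: store
        elif t - last_time >= window_width:
            stored.append((t, v))              # window expired: store last value
        else:
            continue                           # inside dead band: ignore
        last_time, last_value = t, v
    return stored

# Hypothetical pressure samples with a 60 s window and ±0.01 psi dead band:
pts = [(0, 1000.00), (10, 1000.005), (20, 1000.05), (80, 1000.051)]
print(decimate(pts, dead_band=0.01, window_width=60))
```

Here the sample at t=10 is ignored (inside the dead band), the one at t=20 is stored (significant change), and the one at t=80 is stored because the 60-second window has expired.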



Data Quality Control

Data quality control consists of the following steps: out-of-boundaries and spikes removal, gap filling, noise reduction, and logical
checks. Out-of-boundaries and spikes removal is a basic but nevertheless very powerful technique. It removes the
unrealistic values that fall outside the engineering reservoir limits. Using this technique, it is believed that 80% of the wrong
production data can be identified. (Holland 2004)5

Gap filling fills small gaps occurring in the data. Although measurements are recorded at a specified frequency,
missing data points always occur, usually due to communication or instrumentation failure (Mathis 2007)6. Noise is
unwanted electrical or electromagnetic energy that degrades the quality of data. It is usually represented by data points
scattered around the real trend.
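A minimal gap-filling sketch, assuming a regularly sampled series where missing points are marked as None and gaps are interior (a standard linear-interpolation approach; the paper does not specify which method is used):

```python
# Illustrative sketch of gap filling: linearly interpolate small interior
# gaps in a regularly sampled series, where missing points are None.
def fill_gaps(series):
    """Replace interior None values by linear interpolation between neighbors."""
    out = list(series)
    for i, v in enumerate(out):
        if v is None:
            # nearest known neighbors on each side (gaps assumed interior)
            lo = next(j for j in range(i - 1, -1, -1) if out[j] is not None)
            hi = next(j for j in range(i + 1, len(out)) if out[j] is not None)
            frac = (i - lo) / (hi - lo)
            out[i] = out[lo] + frac * (out[hi] - out[lo])
    return out

print(fill_gaps([1000.0, None, 1020.0]))   # [1000.0, 1010.0, 1020.0]
```

Larger gaps generally should not be interpolated this way, since the filled values would hide a real loss of information.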

Logical checks verify the correlations between different sensor readings as well as the physical boundaries that exist. For example,
when a well is shut-in, the oil rate measured by the Multiphase Flow Meter (MPFM) should read 0; otherwise,
something is wrong.
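The shut-in example above can be expressed as a simple rule; the status strings, tolerance, and flag text below are hypothetical.

```python
# Illustrative logical check: when a well is shut-in, the MPFM oil rate
# should read zero; a non-zero rate (beyond a small tolerance) is flagged.
def logical_check(well_status, mpfm_oil_rate, tolerance=0.1):
    """Return a list of quality flags for a shut-in consistency check."""
    flags = []
    if well_status == "shut-in" and abs(mpfm_oil_rate) > tolerance:
        flags.append("MPFM reports flow while well is shut-in")
    return flags

print(logical_check("shut-in", 0.0))     # [] -> consistent
print(logical_check("shut-in", 250.0))   # one flag -> something is wrong
```

In practice, a library of such cross-sensor rules would run against the real time stream, with each violated rule attached to the offending data points.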

Data Quality and Transmission Key Performance Indices

This step involves generating Key Performance Indicators (KPIs) to measure and assess the quality of the data as well as the
data acquisition and delivery process. Such reports enable users to assess the quality of the data and to audit the
performance of different data providers.

Data Visualization

This step provides a basic visualization environment capable of monitoring, in real time, all aspects of the objective function
related to the assets (i.e., a well, a field, an operation, or an entire asset), ensuring that performance targets are met by
providing the required alerts, basic trending, and analysis tools.

Systems Reliability and Availability

The following are the Intelligent-Field guidelines for the systems (hardware/software) reliability and availability (Al Dhubaib
2008)4:

• All real time data delivery systems should be available 24 hours per day, 7 days a week.
• Data delivery reliability should be 99% for visualization and 99.9% for all alerting and alarming.
• Data should be backed up at all levels. There should be no data loss due to system downtime, and all data should be
recovered upon problem resolution.
• Problem resolution time should be no greater than 4 hours.
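For context, the availability targets above translate directly into allowed downtime per year (assuming 24x7 operation, as the first guideline requires):

```python
# Arithmetic behind the availability targets: allowed downtime per year
# for a given availability percentage, assuming 24x7 operation.
HOURS_PER_YEAR = 24 * 365

def allowed_downtime_hours(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(round(allowed_downtime_hours(99.0), 1))   # 87.6 hours/year (visualization)
print(round(allowed_downtime_hours(99.9), 2))   # 8.76 hours/year (alerting/alarming)
```

The 99.9% alerting target therefore permits under nine hours of downtime per year, which is why the 4-hour problem resolution limit matters.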

Case Study for Khursaniyah Field Data Management

Khursaniyah field was chosen as the baseline for this exercise because it is one of the latest intelligent fields
to come online. In addition, this assessment is a continuation of the assessment of the instrumentation systems and communications
sub-layer, which was comprehensively covered in (Almadi 2010)2.

Field to Desktop Data Flow

After crossing the field instrumentation network, real time data traverses multiple stages of PI servers
before reaching the E&P upstream database, where E&P applications make use of the data. The first level is the PI interface;
data is then collected at the area level; and finally the central level pushes the data to the E&P upstream database. Figure
4 depicts the data journey.

Figure 4 Data Flow for Khursaniyah Field

Assessment of the Data Management Steps

Data Standardization

The intelligent field tag naming standard is applied only in PI Central and the E&P upstream database. As depicted in figure 4, data
traverses multiple stages of PI servers. At every stage, a local standard is used to name and configure the PI tags.
A different naming convention was appended to the tags at each level, since each organization has its own convention. As
a result, different names were used for the same data point at various stages of the data collection in the PI servers.

Moreover, the configuration process involves significant human intervention, which increases the possibility of
misconfiguration. For example, this may lead to the pressure data type being misconfigured as a temperature data type and
vice versa. As another example, the definition of the unit of measurement may vary across PI servers because of
misconfiguration.

Data Filtration and Compression

• Data Summarization
A nightly batch job runs daily to summarize the previous day’s data into hourly and daily summaries. One clear
limitation is that engineers are not able to see summarized data until the next day. As a result, visualization and
analytical systems cannot access summaries in real time, and engineers lose the opportunity for right-time decision
making.

• Data Decimation
The Intelligent Field Data Acquisition and Delivery Guidelines were implemented in the Area PI as well as the central PI, but
with different resolutions (dead bands) and frequencies (window widths), so that either the required level of detail is not
preserved or more detail than necessary is retained. For instance, table 2 shows the Data Acquisition and Delivery
Guidelines for wellhead pressure. The guidelines state that the window width must be 60 sec and the dead band ±0.01
psi. However, the dead band adopted in Area PI is ±0.5 psi and the window width is 600 sec. As a result of this wider
dead band and window width, the level of detail required by the engineers is not preserved.

Table 2 Data Acquisition and Delivery Guidelines for wellhead pressure

Table 3 is another example, for choke position values. The guidelines state that the window width is to be 86400 sec,
which is one value per day, and the dead band is on-change. However, the dead band adopted in Area PI is ±0.25 and
the window width is 600 sec. The effect is that more detail is preserved than required, leading to greater data storage
requirements than needed.

Table 3 Data Acquisition and Delivery Guidelines for Choke Position

Data Quality Control

During the summarization job, some simple quality checks are performed before generating the hourly and daily readings. The job
removes out-of-boundaries values and spikes, which are defined by engineers. Also, by averaging the data, it reduces noise to
reveal the real trend. The high and low values establish the expected range of the data for normal operations. They are used to
remove any questionable data points prior to averaging for the day. If the high/low data values are not entered, then all data
points for the day are used for averaging. The data summarization job counts the number of points out of range versus
points in range to establish a confidence factor for the day. The confidence factor helps users decide whether or not to trust the
summaries. However, no advanced validation logic is implemented on the real time data stream.
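The confidence factor described above can be sketched as the fraction of a day's points that fall inside the engineer-defined high/low range; the function name and the no-limits behavior are illustrative assumptions.

```python
# Illustrative sketch of the daily confidence factor: the percentage of the
# day's points that fall inside the engineer-defined high/low range.
def confidence_factor(points, low=None, high=None):
    """Return (in-range count / total) as a percentage; 100 if no limits set."""
    if low is None or high is None:
        return 100.0                        # no limits entered: use all points
    in_range = sum(1 for p in points if low <= p <= high)
    return 100.0 * in_range / len(points)

day = [1000, 1005, 9999, 1002]              # one obvious spike
print(confidence_factor(day, low=900, high=1100))   # 75.0
```

A low factor signals that the day's average was computed from data with many out-of-range points and should be treated with caution.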

Data Quality and Transmission Key Performance Indicators

A couple of reports enable users to assess the quality of the data and to audit the performance of different data
providers. One KPI report provides an overview of the equipment data quality and availability. This is done by
examining the data coming from the equipment. Figure 5 shows a snapshot of the generated KPIs.

Another report is the data quality check (QC) report for the data summarization job. It performs quality checks on the results and
reports various errors that may occur in the resulting data during data summarization. For example, the water cut must be between 0
and 100. Figure 6 shows a sample report.

Figure 5 Equipment Data Quality and Transmission KPIs

Figure 6 Sample QC report for data summarization job

Data Visualization
The current visualization systems for the real-time and summarized data of Khursaniyah field provide production and reservoir
management engineers with the functionality to monitor, diagnose, and make decisions about their well, reservoir, or facility
performance. They provide engineers with different views combining rates and measured or calculated pressure responses for oil
producers, water injectors, and water supply systems at various levels of detail.

The visualization systems have the ability to monitor overall reservoir performance. This view helps engineers
quickly identify any deviation from strategy rate guidelines by highlighting, in different colors, wells that are being operated
beyond their recommended strategy. Moreover, engineers can visualize the current well surface and sub-surface performance
(figure 7) in addition to many other features. (Al Malki 2009)7

Figure 7 Current Performance Parameter View

Systems Reliability and Availability

The following summarizes Khursaniyah field’s current compliance with the reliability and availability guidelines:
• Data servers reside in one physical location, which does not fully support business continuity.
• If a failure between the SCADA and PI servers occurs, there is no automatic mechanism to recover the lost data and there
is no redundancy; only manual intervention is possible.
• There are no backup standards defined at the different nodes.
• The current problem resolution time does not satisfy the 4-hour requirement due to the nature of system recovery.

Conclusion

Recent intelligent field implementations have significantly optimized Saudi Aramco’s upstream operations, with great value
captured and reported. Decisions that used to take several months are now made in a few hours by utilizing intelligent field
technology. This paper shows the importance and criticality of intelligent field data management. The key findings and
recommendations of this paper are as follows:

• An automated process and workflow should be in place to ensure that the data resolution and data sampling frequencies are
adopted in all layers. Moreover, it should ensure that the tag naming standard is also met. This system should eliminate
the significant human intervention and manual processes along the data transmission infrastructure, which can
increase the possibility of errors and data flow discontinuity.
• Advanced data validation and quality controls should be implemented at the real time data stream level so the data can be used in
data analysis. Engineers are the key contributors to successful data validation implementation and advanced data
diagnostics and surveillance.

Nomenclature

GOSP Gas Oil Separation Plant
RTU Remote Terminal Unit
PI Plant Information
MD Measured Depth
TVD True Vertical Depth
PDHMS Permanent Downhole Monitoring System
MPFM Multiphase Flow Meter
QC Quality Check

Acknowledgements
The authors would like to express their appreciation of Saudi Aramco management for permission to publish this paper.

References

1. AbdulKarim, A., Al-Dhubaib, T., Elrafie, E. and Alamoudi, M.O.: “Overview of Saudi Aramco’s Intelligent Field
Program,” Paper SPE 129706, presented at the SPE Intelligent Energy Conference and Exhibition, Utrecht, the
Netherlands, 23-25 March 2010.
2. Almadi, S., Al-Dhubaib, T., AhmadHusain, S., Al Walaie, F., Al Khabbaz, O., Alaidarous, O. and Al Temyatt, S.: “I-
Field Data Acquisition and Delivery Infrastructure: Khursaniyah Field Best in Class Practices: Case Study,” Paper
SPE 128659, presented at the SPE Intelligent Energy Conference and Exhibition, Utrecht, the Netherlands, 23-25
March 2010.
3. Al Dhubaib, T.A., Issaka, B.M., Barghouty, M.F., Mubarak, S., Dowis, A.H., Shenqiti, M.S. and Ansari, N.H.: “Saudi
Aramco Intelligent Field Development Approach: Building the Surveillance Layer,” Paper SPE 112106, prepared for
presentation at the SPE Intelligent Energy Conference and Exhibition, Amsterdam, the Netherlands, 25-27 February 2008.
4. Al Dhubaib, T.A., Almadi, S.M., Shenqiti, M.S. and Mansour, A.M.: “I-Field Data Acquisition and Delivery
Infrastructure: Case Study,” Paper SPE 112201, prepared for presentation at the SPE Intelligent Energy Conference and
Exhibition, Amsterdam, the Netherlands, 25-27 February 2008.
5. Holland, J., Oberwinkler, C., Huber, M. and Zangl, G.: “Utilizing the Value of Continuously Measured Data,” Paper
SPE 90404, presented at the SPE Annual Technical Conference and Exhibition, Houston, Texas, USA, 26-29 September
2004.
6. Mathis, W. and Thonhauser, G.: “Mastering Real Time Data Quality Control – How to Measure and Manage the Quality of
(Rig) Sensor Data,” Paper SPE/IADC 107567, presented at the SPE/IADC Middle East Drilling Technology
Conference and Exhibition, Cairo, Egypt, 22-24 October 2007.
7. Al Malki, S.S., Buraikan, M.M., Abdalmohsen, R.A., Wolfe, W.A., Harbi, A.A. and Shaik, A.A.: “I-Field
Implementation Enables Real Time Reservoir Management of Newly Developed Saudi Fields,” Paper OTC 20125,
presented at the Offshore Technology Conference, Houston, Texas, USA, 4-7 May 2009.
