
Real-Time Answers to Well Drilling and Design Questions

Crucial decisions related to drilling safety and casing placement often require real-time information. Seismic and sonic data acquired while drilling can help drillers execute optimal well plans by offering an instant depth tie to surface seismic images and by providing a look ahead of the bit to overpressure hazards.

William Borland
Bangkok, Thailand

Daniel Codazzi
Kai Hsu
John Rasmus
Sugar Land, Texas, USA

Chris Einchcomb
British Petroleum
Ho Chi Minh City, Vietnam

Mohamed Hashem
Shell Offshore
New Orleans, Louisiana, USA

Vaughan Hewett
Enterprise Oil plc
London, England

Mike Jackson
British Petroleum
Stavanger, Norway

Richard Meehan
Cambridge, England

Mike Tweedy
Chevron Petroleum Technology Co.
Houston, Texas

For help in preparation of this article, thanks to Didier Belaud, Schlumberger Wireline & Testing, Montrouge, France; Steve Chang, Pascal Panetta and Dave White, Anadrill, Sugar Land, Texas, USA; Nader Dutta, BP, Houston, Texas; David Leach, GeoQuest, New Orleans, Louisiana; Masahiro Kamata, Schlumberger KK Product Center, Fuchinobe, Japan; Scott Leaney and Shoichi Nakanishi, Wireline & Testing, Jakarta, Indonesia; Gordon Mowat, GeoQuest, Houston, Texas; and Les Nutt and Bill Underhill, Wireline & Testing, Sugar Land, Texas.

CDR (Compensated Dual Resistivity), Charisma, Drill-Bit Seismic, ISONIC IDEAL sonic-while-drilling tool, MACH-1 Seismic-Guided Drilling, and RFT (Repeat Formation Tester) are marks of Schlumberger. TOMEX is a mark of Western Atlas International, Inc. The development of the Schlumberger drill-bit seismic technique was partially funded by the Thermie program of the European Community under contract OG 046/93.

Efficient and safe drilling operations can be significantly improved with real-time information relating depth to seismic travel times, especially in wells that might encounter overpressured formations. Knowing depth to overpressure is essential for choice of a safe mud weight that will keep formation pore fluids at bay and prevent an influx that could lead to a blowout, and also to prevent differential sticking due to excessively high mud weight. Even without the danger of overpressure, variations of pressure and rock strength with depth can require that well plans include numerous casing strings.

To meet the challenge of drilling within the desired mud-weight window while minimizing the number of casing strings, depth to casing points must be known. Pinpointing coring point depths can be key to following a carefully designed well plan; missing these can create gaps in the body of reservoir knowledge, undermining field understanding and future development. And in any environment, knowing how much farther the bit has to go helps optimize bit use.

Successful prediction of the seismic time-to-depth relationship requires an integrated approach to analysis of all available wellsite data. Drilling parameters from measurements while drilling (MWD) and real-time analysis of cuttings for lithology, gas content and fossil indicators form a foundation from which to draw the position of the bit relative to the geological target on the seismic section.

Geophysical data are additional components for building a successful time-depth prediction. Targets and intermediate drilling hazards are often identified as reflections on seismic images, but those reflections are quantified in two-way seismic travel time, not in depth, as is required for drilling predictions. If there is another well nearby, a check shot, vertical seismic profile (VSP) or, better, a synthetic seismogram from sonic and density logs calibrated by borehole seismic data can help relate seismic times to depth. In the absence of neighboring well data, seismic times are converted to depth using velocities estimated from surface seismic data processing. But uncertainties in velocities, even 5%, which is considered good by geophysicists, can lead to depth errors of hundreds of meters; since predicted depth scales directly with average velocity, a 5% velocity error at a 4000-m target is already a 200-m depth error. An erroneous depth can become part of the well plan, and once drilling has begun, the error may be discovered too late to save the well.

Two complementary while-drilling measurements provide real-time solutions to the seismic time-depth problem. Seismic while drilling and sonic while drilling have proven, in a variety of environments, to supply velocity information in time for crucial drilling decisions to be made. In this article we examine the capabilities and limitations of the measurement methods, present field examples and discuss future directions.


Seismic While Drilling

The seismic-while-drilling technique, also known as Drill-Bit Seismic, seismic-guided drilling, drill-noise VSP and TOMEX, uses acoustic energy generated by the bit as a source for a seismic survey. As a rotating roller-cone drill bit pounds on the bottom of the hole, it acts as a dipole source, radiating acoustic energy into the formation.

With most of the weight of the bottomhole assembly (BHA) behind it, the bit is a powerful source. However, its characteristics depend on parameters that vary, such as weight on bit, revolutions per minute, formation properties, bit type and wear, and the geometries of the drillstring and bottomhole assembly. As a result, the amplitude and frequency content of the source also vary. Most of the radiated energy is usable signal, though noise from the rig does interfere. Seismic waves transmitted into the formation travel up toward receivers at the earth surface or seafloor and downward to be reflected back to the surface from deeper strata (far right).1 The geometry resembles a reverse VSP, with source and receiver positions exchanged. The source also excites axial vibrations in the drillstring, and these may be detected by an accelerometer mounted on the swivel or topdrive (above right).


Instrumenting the goose neck on the topdrive. The accelerometer on the swivel or topdrive is the only addition to the drilling setup required by the seismic-while-drilling technique.
1. Kamata M, Underhill W, Meehan R and Nutt L: "Drill-Bit Seismic, a Service for Drilling Optimization," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper DD.
Haldorsen JBU, Miller DE and Walsh JJ: "Walk-Away VSP Using Drill Noise as a Source," Geophysics 60 (July-August 1995): 978-997.
Meehan R, Miller D, Haldorsen J, Kamata M and Underhill B: "Rekindling Interest in Seismic While Drilling," Oilfield Review 5, no. 1 (January 1993): 4-13.

Drill bit as seismic source. Acoustic energy (arrows) radiates into the formation, traveling both upward to receivers and downward to reflect off layers ahead of the bit. Energy also travels up the drillstring.

Correlating signals to pull out real-time check-shot results. The crosscorrelation of accelerometer and geophone traces extracts the relative travel time, tf - tds, between the formation path (tf) and the drillstring path (tds). Relative travel time added to drillstring travel time produces the formation, or check-shot, time.

In spite of the continuous nature of the drill-bit signal, information can be extracted about the relative travel times in the drillstring and formation (above).2 Converting this to the desired quantity, the absolute travel time between bit and geophone, or check-shot time, requires knowing the drillstring travel time (see "Drillstring Acoustics," page 6). The check-shot time, doubled to become a two-way time, can then be used to identify the location of the bit on a time-based seismic section. The signal processing required for this real-time check shot can be run on a personal computer at the rig site. As drilling progresses, more check-shot data are acquired, and the location can be updated.
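The heart of that processing is a single crosscorrelation. The sketch below illustrates the idea in Python; the synthetic traces, sampling interval and drillstring time are invented stand-ins, not field values or the service's actual algorithm.

```python
import numpy as np

def checkshot_time(accel, geo, dt, t_drillstring):
    """One-way bit-to-geophone time from accelerometer and geophone traces.

    The lag of the crosscorrelation peak estimates the relative delay
    t_f - t_ds between the formation path and the drillstring path;
    adding the measured drillstring travel time t_ds gives the
    check-shot time t_f.
    """
    xcorr = np.correlate(geo, accel, mode="full")
    lags = np.arange(-len(accel) + 1, len(geo)) * dt
    t_rel = lags[np.argmax(xcorr)]
    return t_drillstring + t_rel

# Synthetic example: geophone trace is the accelerometer trace delayed 0.3 s.
dt = 0.002                                    # 2-ms sampling
rng = np.random.default_rng(0)
accel = rng.standard_normal(2000)
geo = np.roll(accel, 150) + 0.1 * rng.standard_normal(2000)
t_one_way = checkshot_time(accel, geo, dt, t_drillstring=0.45)
print(f"check-shot time {t_one_way:.3f} s, two-way {2 * t_one_way:.3f} s")
```

Doubling the result gives the two-way time used to place the bit on the seismic section.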
The updated bit locations can be used by personnel on the rig, or be communicated via satellite, telephone or fax lines to the office where the asset team updates depth-to-target predictions on a seismic workstation.

This time-depth correlation forms a velocity relationship that can also be used to convert surface seismic sections from time plots to depth plots. As new velocity data become available, some structures and targets, as seen on surface seismic sections, may change position (above right).3

Updating surface seismic images with while-drilling time-depth data. As the time-depth tie evolves in real time, so can the location of the target. In this example from southeast Asia, the surface seismic image converted to depth using an initial time-depth relationship (left) shows the target at 3352 m. Seismic data acquired while the bit was drilling at 2911 m improved the time-depth relationship, allowing an updated image of the target, which now appears about 100 m shallower at 3248 m (center). The bit reached the target at 3202 m (right).

When combined with intermediate VSPs run during bit changes, the while-drilling technique can provide a look ahead to potential deviations from a normal-pressure regime. Reflections from strata below the intermediate depth can be imaged with VSP processing, and with the appropriate constraints the resulting traces may be inverted for acoustic impedance. The acoustic impedance can be converted to velocity, which is then used to estimate a pressure gradient or mud weight through empirical relationships between density and velocity, and between velocity and pore pressure.4 Plotted versus depth, these values define a trend related to normal sediment compaction conditions. Deviations from the trend often indicate the onset of overpressure, and they can be seen in seismic-while-drilling results hundreds of meters ahead of the bit, in time to take action to drill ahead safely.
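Reference 4 is one such empirical relationship. The sketch below strings the steps together in Python using commonly published stand-ins, Gardner's density-velocity relation and an Eaton-type velocity-ratio form; the coefficients are textbook values for illustration, not a calibration from these wells.

```python
import numpy as np

def pore_pressure_from_velocity(depth_m, v_kms, v_normal_kms, n=3.0):
    """Pore pressure (MPa) and equivalent mud weight (sg) from velocity."""
    g = 9.81e-3                           # MPa per (g/cm3 . m)
    rho = 1.741 * v_kms ** 0.25           # Gardner density, g/cm3
    overburden = np.cumsum(rho * g * np.gradient(depth_m))  # vertical stress
    p_hydro = 1.03 * g * depth_m          # hydrostatic pore pressure
    # Eaton-type form: on the normal trend, pore pressure stays hydrostatic;
    # slower-than-normal velocity pulls it toward the overburden stress.
    p_pore = overburden - (overburden - p_hydro) * (v_kms / v_normal_kms) ** n
    return p_pore, p_pore / (g * depth_m)  # second value is mud weight, sg
```

The equivalent mud weight output corresponds directly to the pressure-in-sg displays used later in this article.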

Exploration, Not Desperation

The seismic-while-drilling technique has proved beneficial to drilling and well planning in numerous exploration settings. In some places, operators won't drill without it. In the challenging environment offshore Vietnam, for example, the BP-Statoil alliance has relied on seismic-guided drilling to provide a real-time link between bit position and seismic images. Confident that the while-drilling data can remove uncertainty in depth to drilling hazards, the alliance team has drilled several high-pressure, high-temperature (HPHT) wells with optimal casing programs, and has been able to eliminate contingency casing strings from some well plans.

The exploration province is a basin characterized by normally pressured clastics in the shallow section, then a severe pressure ramp, as much as 10 psi/m [3 psi/ft], several times normal, containing highly overpressured, high-temperature reservoirs.


Tie between surface seismic image and seismic-while-drilling data. The look-ahead VSP created from while-drilling data matches major regional reflectors seen in the surface seismic line. The match provides a time-depth tie for predicting depth to casing points. The figure labels the pressure ramp and the base of the seismic-while-drilling data on a two-way-time display.

To meet all objectives, wells in the region require flawless planning, many casing strings and careful execution (below).

A number of wells in the basin encountered problems and failed to meet their targets. Depth predictions based on surface seismic velocities carried uncertainties of 50 to 300 m [164 to 984 ft]. Wells drilled in another part of the basin called for look-ahead VSPs to spot casing points, but the time and costs associated with interrupted drilling and borehole seismic data acquisition, processing and interpretation led the BP-Statoil alliance to test the seismic-while-drilling method.

Setting 13 3/8-in. casing exactly at the top of the pressure ramp is the most critical part of drilling these wells. The well plan dictated contingent casings of 16 and 11 3/4 in. between the normal 20-, 13 3/8- and 9 5/8-in. casing sequence.

Well plan for high-pressure, high-temperature wells drilled by the BP-Statoil alliance offshore Vietnam. Several casing strings must be set at points prescribed by a severe pressure ramp to successfully complete these exploration wells; the figure labels three casing points, the pressure ramp and total depth (TD).


If needed, the contingent casings would require underreaming of both associated openhole sections, and were consequently a significant additional cost. The cost of keeping one extra casing string in the drilling plan, even if not used, was at least $500,000.

The top of the pressure ramp is marked by a major change in lithology and fossil content, and a recognizable, though not ever-present, seismic reflection. Wellsite analysis would integrate the seismic-while-drilling data with drilling parameters from MWD, mud logging and biostratigraphy to predict depth to casing points and detect pressure increases.

In one example well, reflectors imaged while drilling above the pressure ramp tie neatly with those in the surface seismic image, giving a reliable time-depth relation for casing placement (above).
2. Société Nationale Elf Aquitaine, Paris, France, has patented this technique: Staron P, Arens G and Gros P: "Method for Instantaneous Acoustic Logging Within a Wellbore," International Patent Application under the Patent Cooperation Treaty No. WO 85/05695 (May 20, 1985).
3. Borland WH, Hayashida N, Kusaka H, Leaney WS and Nakanishi S: "Drill Bit Seismic, Vertical Seismic Profiling, and Seismic Depth Imaging to Aid Drilling Decisions in the Tho Tinh Structure, Nam Con Son Basin, Vietnam," presented at the Japanese Society of Exploration Geophysicists, Kyoto, Japan, October 21-23, 1996.
4. Hottmann CE and Johnson RK: "Estimation of Formation Pressures from Log-Derived Shale Properties," Journal of Petroleum Technology 17 (June 1965): 717-722.

Drillstring Acoustics

The travel time along the drillstring between the bit and accelerometer can be determined if the acoustic velocities of the individual string elements and their lengths are known. It is tempting to approximate the velocity with that of steel pipe, 16,900 ft/sec [5151 m/sec]. But each added joint of pipe can have a slightly different velocity, and tool joints modify the string enough to introduce unacceptable errors. For best results, the travel time must be measured directly.
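A back-of-the-envelope comparison shows why. In this sketch, the element lengths and velocities are invented for illustration; summing length over velocity element by element differs from the single-steel-velocity estimate by several milliseconds, more than the timing precision a check shot needs.

```python
STEEL_V = 16900.0                        # ft/sec, nominal steel-pipe velocity

elements = [                             # (name, length ft, velocity ft/sec)
    ("drillpipe",              9500.0, 17100.0),
    ("heavy-weight drillpipe",  600.0, 16500.0),
    ("bottomhole assembly",     400.0, 15800.0),
]

t_element_sum = sum(length / v for _, length, v in elements)
t_uniform = sum(length for _, length, _ in elements) / STEEL_V
print(f"element-by-element: {t_element_sum * 1000:.1f} ms")
print(f"uniform steel:      {t_uniform * 1000:.1f} ms")
print(f"difference:         {(t_uniform - t_element_sum) * 1000:.1f} ms")
```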

An impulse of energy traveling upward from the bit-rock interface encounters a series of impedance contrasts as it passes the bottomhole assembly (BHA), the heavy-weight drillpipe, regular drillpipe and surface equipment, giving rise to signals recognizable in a trace called the drillstring image (right). The drillstring image is constructed from the accelerometer signal. A drillstring image trace can be recorded every meter as the bit progresses, but typically 10 traces are summed and one is displayed every 10 meters.

Drillstring images from a well drilled in the Raudhatain field in northern Kuwait demonstrate the clarity with which the bit signal can be recorded.1 Data were acquired while drilling through more than 7000 ft [2134 m] of Tertiary and Cretaceous clastics and limestones with 26-in., 17 1/2-in. and 12 1/4-in. bits. The drillstring image from the deepest section, though of lower signal-to-noise ratio than other sections, still shows clear bit signals (next page, left). The drillstring travel time is easily picked from each trace in the image.

The relative timing, or difference, between the drillstring travel time and the bit-to-geophone travel time is contained in the crosscorrelation of the accelerometer signal with the geophone signal, also produced every meter. Adding the time difference to each trace gives the travel time within the formation for the energy traveling directly from the bit to the geophone. The traces are then processed like a VSP to extract information about reflections, which may be either behind or ahead of the bit.

A drill-bit seismic VSP created in this way can have many of the features of a wireline VSP, without interrupting the drilling process. The VSP acquired while drilling the Raudhatain well can be compared to one acquired on wireline in a neighboring well in the same field (next page, right). Both VSPs can image reflections below the wells' total depth.

Encounters with drillstring impedance contrasts. As acoustic energy travels up the string, changes in shape of the drillstring, from the bit through the bottomhole assembly, heavy-weight drillpipe and drillpipe to the surface equipment, excite amplitude changes in the signal recorded by the accelerometer. Interpretation of these changes allows identification of the bit arrival, establishing a bit travel time.

1. Khaled OS, Al-Ateequi AM, James AR and Meehan RJ: "Seismic-While-Drilling in Kuwait: Results and Applications," presented at the 2nd Middle East Geoscience Conference, Bahrain, April 15-17, 1996.

Tracking the bit arrival. The drillstring image from the 12 1/4-in. section of the Raudhatain well, plotted as drillstring travel time versus depth below drillfloor, shows clear bit signals. The drillstring travel time is easily picked from each trace in the image.

Comparing VSPs acquired in two wells, while drilling (Well B) and on wireline (Well A), with intersecting surface seismic data. The while-drilling VSP displays a good match with the adjacent surface seismic section.

5. Jackson M and Einchcomb C: "Seismic While Drilling: Operational Experiences in Viet Nam," World Oil 218, no. 3 (March 1997): 50, 53.
6. Van Derck R, Beck B, Belaud D and Underhill W: "Drill-Bit Seismic, A Service for Well Trajectory Steering," presented at the Offshore Mediterranean Conference, Ravenna, Italy, March 19-21, 1997.

Divergence from normal velocity trend, plotted as sonic transit time (μsec/ft) versus depth (ft). Velocities obtained from the real-time seismic-while-drilling data both set the trend and show deviation from it at the top of the pressure ramp at 2500 ft. Wireline data acquired later track the while-drilling data closely, demonstrating reliability of the real-time method. Other wells in the area that are known to have normally pressured formations exhibit the trend shown by the Δt-norm line (black).


Velocities obtained by inverting the look-ahead, while-drilling VSP allowed a trend to be established and allowed identification of the pressure ramp starting at 2500 ft (right).

The seismic-while-drilling data supplied velocities over the pressure ramp that were quite different from those estimated from surface seismic processing (below). The shape of the while-drilling velocity curve reveals the pressure ramp that is not visible in the surface seismic velocities. Velocities from a VSP acquired on wireline essentially overlie the while-drilling velocities.

Pore pressures estimated from seismic-while-drilling velocities and other drilling parameters show the steep ramp encountered by one well in the basin (next page, top). Pressures estimated from sonic velocities acquired on wireline track those estimated from seismic-while-drilling results, and demonstrate the ability of seismic-guided drilling to quantify the change in pressure in real time.

All the wells drilled incorporating the seismic-while-drilling technique achieved the objective of pushing 20-in. casing as deep as possible and locating the 13 3/8-in. casing shoe at the top of the pressure ramp without committing the 16-in. liner. The cost of planning and setting the 16-in. liner is about $3 million in each well.

The alliance gained experience with the technique in the first two wells, and was confident, by the time the third well was being planned, that the integrated approach including seismic-while-drilling data would provide an accurate prediction of depth to casing points. For the third well, the contingent 16-in. liner was eliminated from the well plan, at an additional savings of $500,000.5


Discrepancy between velocities estimated from surface seismic data and those measured while drilling and on wireline, plotted as velocity (m/sec) versus two-way time (msec). Surface seismic velocities fail to expose the pressure ramp detected by the seismic-while-drilling (SWD) velocities and later validated by wireline VSP velocities.


Compendium of while-drilling and wireline pressure estimates, plotted as pressure (sg) versus depth (ft). Shown are mud weight, fracture pressure, overburden, leakoff tests and RFT/drillstem tests, along with pore pressures estimated from seismic while drilling, wireline sonic and drilling parameters. Seismic-while-drilling data are able to quantify the magnitude of the pressure ramp at least as well as do drilling parameters and postdrilling wireline data.

Deviations



The seismic-while-drilling technique was designed for cases like the BP-Statoil alliance example in Vietnam: an offshore vertical well, hydrophones laid on the seafloor and water depth less than 100 m [328 ft]. But how well does it work elsewhere?

In June 1996, KUFPEC Tunisia Ltd tested the technique in a wildcat deviated well on land.6 The objectives were to acquire real-time, time-depth information to update depth estimates and to guide the well trajectory to the reservoir target near a major fault. Conventional borehole seismic surveys were not feasible because the land surface, a sabkha, or seasonal marsh that dries with a thin crust of salt in summer, was inaccessible to seismic sources. Marsh geophones were laid out above the intended well path, and drill-bit seismic recording began at 1140 m [3740 ft] depth, by which time well deviation had reached 65° (right).

Although a polycrystalline diamond compact (PDC) bit might have given a better rate of penetration, the entire section was drilled with a roller-cone bit to ensure better seismic signal. PDC bits do radiate acoustic energy into the formation, but generate no axial vibrations in the drillstring.

Seismic while drilling the KUFPEC deviated well in Tunisia, with a moving array of geophones deployed above the well path. Since seismic sources could not operate in the difficult surface environment, intermediate wireline VSPs were not feasible, and Drill-Bit Seismic information was used to update depth estimates and to guide the well trajectory.


Planned, blind and actual trajectories, plotted as true vertical depth versus offset over the Drill-Bit Seismic interval. Planned is the predrilling trajectory plotted on the seismic section using the initial time-depth relation. Actual plots the actual deviation profile using the time-depth relation derived from seismic-while-drilling data. Blind tracks the drilled trajectory with the initial time-depth relation, where the actual well would have been plotted had seismic-while-drilling data not been acquired.

This means there is no accelerometer signal to correlate with signals from the seismic receivers, so the data are not easily analyzed. Drilling was done as much as possible while rotating from surface. A motor was used, but only for short intervals. Drilling while sliding seems to attenuate the accelerometer signal, a limiting factor for success in deviated wells.

The seismic-while-drilling data were reported daily, or two or three times a day during critical phases, and the travel times were used to update the time-depth conversion and the borehole trajectory. The borehole was successfully guided to the target without intersecting the fault (above). Without the revised time-depth correlation, a correction would have been performed on the trajectory to make it fit the planned one, taking it some 40 m [131 ft] closer to the fault, and probably crossing it.


In Deep Water

The first offshore successes with seismic-guided drilling were conducted in relatively shallow water, up to 70 m [230 ft]. Water depths greater than this presented problems in deploying hydrophones on a seafloor cable extending from the rig.

During the summer of 1996, Enterprise Oil plc pushed the technique beyond this limit when they drilled a deviated wildcat well in the 352-m [1154-ft] depths of the Slyne Trough, West of Ireland. A knowledge of formation velocities was required to determine the well trajectory, but all other well data in the area were more than 130 km [81 miles] away and unrelated. A while-drilling survey was planned to track the well path and identify the point at which 13 3/8-in. casing was to be set.

The drill-bit seismic equipment was set up prior to spud, eliminating rig downtime. Two accelerometers were connected to the goose neck on the topdrive and the connecting cable was secured to the mud hose, standpipes and on to the Schlumberger seismic recording unit. The seafloor cable, consisting of 12 hydrophone-geophone receiver pairs spaced 12 m [39 ft] apart, was laid in a straight line from the wellhead (next page, top). The cable depth and receiver positions were confirmed by acoustic transponders and air guns fired from above.

Seismic-while-drilling data were acquired in the 26-in. hole from 544 m [1784 ft] measured depth (MD) through the 17 1/2-in. hole to 1814 m [5952 ft] MD. A roller-cone bit was used in combination with a mud motor assembly. Data quality diminished while the mud motor was in operation, but was still good enough to give travel times. Data processed in real time at the wellsite gave time-depth and formation velocity information every 10 m [33 ft] of drilled depth. Interpretation of the time-depth data produced an average velocity function that allowed accurate time-depth conversion for predicting the 13 3/8-in. casing point. The time values were acceptable and closely matched those from a VSP acquired at total depth (TD).

Conducting the VSP survey at TD and using seismic-while-drilling technology to give continuous time-depth coverage, rather than running an intermediate VSP survey to establish the 13 3/8-in. casing point, saved Enterprise Oil plc the cost of the additional VSP survey, which would have been weather dependent, and the associated rig downtime costs.
While or After Drilling?

The seismic-while-drilling technique has several advantages over conventional check shots and VSPs for time-depth data. Foremost, the information is available when it is most needed, while drilling is in progress, in time for decisions that could affect safety and overall well success. No rig time is lost acquiring data, and no downhole equipment interferes with drilling. The only modification to a conventional drilling program is that an accelerometer is mounted on the swivel. However, there are limitations to the technique.

Bit type: Presently, roller-cone bits are recommended. Some operators report success using bicentered PDC bits. Normal PDC bits undoubtedly radiate acoustic energy into the formation, but they generate almost no axial vibration in the drillstring. Consequently, there is no usable accelerometer signal to crosscorrelate with that of the geophones.

Variability of the source: Since the source depends on the drilling parameters, the lithology and the state of the bit, the source can vary with depth. This can create problems with VSP processing.

Horizontal wells: Though the technique has been used successfully in wells up to 65° deviation, higher deviations and horizontal wells are expected to have problems. Signals traveling up the drillstring are attenuated when lengths of pipe are in close contact with the wellbore wall, as they would be in a highly deviated hole.

Noise: Rig noise is detected by seismic-while-drilling sensors. Although techniques to remove this noise are available, there will always be more noise than in conventional borehole seismic surveys.

Water depth: Currently there is a water depth limit of 1260 ft [384 m] for deploying the hydrophone array.

Some of these limitations, such as water depth, source variability and noise, are being addressed by efforts in processing and hardware development. Others, such as the restrictions to roller-cone bits and deviation angles less than 65°, are now being surmounted by a complementary technique: sonic while drilling.
Sonic While Drilling

Sonic logging while drilling can provide some of the same answers as seismic while drilling to questions about formation velocities and depth to target or overpressure hazards. In addition, sonic-while-drilling data can be combined with other logging-while-drilling (LWD) measurements to yield sonic porosity, lithology identification and rock mechanical properties.

Sonic-while-drilling results and LWD density logs can be combined to produce synthetic seismograms. This allows the geophysicist to correlate the main seismic reflections with features on LWD gamma ray or resistivity logs. Thus it is possible to determine whether a given reflector has been crossed or how far it is from the current bit position. The ISONIC IDEAL sonic-while-drilling measurements allow this navigation to take place in real time.

In the sonic logging vernacular, the property measured is slowness, the inverse of velocity. This is also known as Δt, the sonic transit time per foot traveled.
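The synthetic-seismogram construction reduces to a few lines once slowness and density logs are in hand: convert slowness to velocity, form impedance, difference it into reflection coefficients, and convolve with a wavelet. The Python sketch below uses a 35-Hz Ricker wavelet, as in the workpanel figure later in this section; the inputs are illustrative stand-ins for ISONIC and LWD density logs.

```python
import numpy as np

def ricker(f_hz, dt_s, n=101):
    """Zero-phase Ricker wavelet sampled every dt_s seconds."""
    t = (np.arange(n) - n // 2) * dt_s
    a = (np.pi * f_hz * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(slowness_us_ft, rhob_gcc, dt_s=0.002, f_hz=35.0):
    """Synthetic seismogram from slowness (usec/ft) and bulk density logs.

    A complete workflow would first map the depth-sampled reflectivity to
    two-way time with the integrated transit time; for brevity this sketch
    convolves in the log's own sampling.
    """
    velocity = 1.0e6 / slowness_us_ft            # ft/sec
    impedance = rhob_gcc * velocity              # acoustic impedance
    refl = np.diff(impedance) / (impedance[1:] + impedance[:-1])
    return np.convolve(refl, ricker(f_hz, dt_s), mode="same")
```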


Laying out receiver cable for Enterprise Oil plc seismic-while-drilling survey.

Conceptually, measuring sonic slownesses while drilling is not obvious because of the noise generated by the bit, but service companies have attacked the problem and triumphed over the four major difficulties documented by pioneers.7 These were to construct a tool rugged enough for operation while drilling, overcome drilling noise in the sonic recordings, suppress high-amplitude drill collar arrivals, and process sonic waveforms downhole to extract Δt for real-time transmission to surface by mud-pulse telemetry.

The first problem is solved by mounting the transmitter and four receivers on a drill collar. Anadrill standard qualification procedures were applied in the design and packaging of the transducers and electronics. The transmitter-receiver separation is similar to that for wireline tools (below). A downhole microprocessor controls transmitter firing and the acquisition and stacking of waveforms.
7. Aron J, Chang SK, Dworak R, Hsu K, Lau T, Masson JP, Mayes J, McDaniel J, Randall C, Kostek S and Plona TJ: "Sonic Compressional Measurements While Drilling," Transactions of the SPWLA 35th Annual Logging Symposium, Tulsa, Oklahoma, USA, June 19-22, 1994, paper SS.

The ISONIC IDEAL sonic-while-drilling tool, housing transmitter and receivers in a drill
collar. Transmitter-receiver separation and measurement bandwidth are similar to those
for wireline tools.


Time-depth tie from combined while-drilling sonic and density data, displayed on a Charisma Synthetics workpanel. LWD logs in track 1 indicate the lithology contrast at the top of the overpressured zone at 10,478 ft. Compressional slowness and density acquired while drilling (track 2) are combined to produce acoustic impedance (track 3). A synthetic seismogram (track 4), computed from acoustic impedance and an input seismic pulse, a 35-Hz Ricker wavelet, is repeated several times for clarity. This is compared to the real seismic trace along the well trajectory to establish a time-depth tie. The real trace (track 5) is also repeated several times for clarity. A 2D slice of surface seismic data intersecting the well has been converted to depth using the time-depth correlation obtained from while-drilling data (track 6).

With signal stacking, drilling noise turns out not to be a significant problem if the receivers are far enough from the bit.8 The ISONIC tool may be placed at different distances from the bit, depending on the needs of the drilling program. A rule of thumb is 40 ft [12 m], but successful measurements have been made from 15 to 80 ft [4.6 to 24.4 m] behind the bit, while drilling with PDC and roller-cone bits, in vertical, deviated and horizontal wells, and in hard or soft rocks.9 Compressional slownesses have been obtained in the range 50 to 170 μsec/ft, and shear slownesses have also been recorded in formations where shear waves propagate faster than fluid waves.


Acoustic energy that travels along the drill collar directly from transmitter to receiver without sampling the formation threatens to overwhelm the desired signals. Periodic structures such as grooves in the ISONIC tool filter out the collar waves in a specified frequency band.

Some amount of waveform processing is required downhole if a compressional Δt measurement is to be available in real time. A microprocessor and digital signal-processing chip perform waveform filtering and slowness-time coherence (STC) processing, and select the compressional slowness from the results.10 A recently developed algorithm allows a Δt to be computed from a set of four waveforms in a couple of seconds downhole. The Δt is transmitted to the surface in real time by the mud telemetry system.11 Up to 30,000 sets of waveforms can be stored in downhole memory and retrieved after bit runs for optional reprocessing with different parameters.
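Reference 10 defines the semblance behind STC processing. The fragment below sketches it for a small array: shift each receiver's waveform by trial slowness times offset, stack, and score the coherence of a sliding window; the slowness of the best-scoring alignment is the answer. The offsets, search grid and window length here are our assumptions, not the tool's specification.

```python
import numpy as np

def stc_slowness(waves, offsets_ft, dt_s, slowness_grid_us_ft, win=40):
    """Pick slowness (usec/ft) by semblance; waves is (receivers, samples)."""
    n_rx = waves.shape[0]
    best_coh, best_s = -1.0, None
    for s in slowness_grid_us_ft:
        # align arrivals: advance each waveform by s * offset seconds
        shifts = np.rint(s * offsets_ft * 1e-6 / dt_s).astype(int)
        aligned = np.array([np.roll(w, -k) for w, k in zip(waves, shifts)])
        stacked = aligned.sum(axis=0)
        # semblance = |stack|^2 / (N * sum of squares), over sliding windows
        num = np.convolve(stacked ** 2, np.ones(win), "valid")
        den = n_rx * np.convolve((aligned ** 2).sum(axis=0),
                                 np.ones(win), "valid")
        coh = (num / np.maximum(den, 1e-12)).max()
        if coh > best_coh:
            best_coh, best_s = coh, s
    return best_s
```

A search of this kind, run on the four stacked waveforms, is what the downhole processor completes in a couple of seconds.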
Sonic Data in Real Time

Sonic-while-drilling data have helped Shell Offshore safely drill exploration wells in areas predicted to have abnormally pressured zones above targets in the deep water of the Gulf of Mexico.

Calibrating integrated transit time (ITT) from sonic-while-drilling data with check-shot velocities. Compressional slownesses (track 1) from the ISONIC tool are integrated with respect to depth to produce the ITT, which is compared to travel times measured by check shots after forcing a match between the two at one depth (track 2). The difference between the ITT and check shots at other depths, called drift (track 3), is attributed to differences in the physics and scales of the sonic and seismic measurements. In this case the drift is zero.

To ensure that 9 5/8-in. casing would be set just above the overpressured zone identified as a strong reflection on 3D surface seismic data, the Shell asset team required real-time information while drilling the 12 1/4-in. vertical hole section. Anadrill provided daily transmission of LWD data, ISONIC Δt and well trajectory information via phone line and modem to the Shell office in New Orleans, Louisiana, USA. The data were loaded on a GeoQuest Charisma seismic interpretation workstation in the same office to allow real-time updating of the bit location.

The time-depth tie computed from ISONIC slownesses showed that reflectors were being encountered about 60 ft [18 m] deeper than estimated from surface seismic velocity information. Correlation of LWD data with logs from offset wells corroborated the deeper interpretation. This allowed the casing point to be placed closer to the overpressured zone.
An example from a similar well demonstrates the quality of the time-depth tie that can be achieved when while-drilling sonic and density data are combined to generate synthetic traces for comparison with surface seismic data (previous page). The 2D section of seismic data has been extracted from a 3D cube and converted to depth using the time-depth correlation obtained from while-drilling data. The strong seismic reflection at 10,478 ft is the signature of the overpressured zone.
Another method for obtaining a time-to-depth relationship is to integrate the sonic Δt log with respect to depth. The result, known as the integrated transit time (ITT), can be used to convert logs to two-way travel time for correlation with reflectors on a seismic section, or to convert the seismic section to a depth-based plot.

Operators typically calibrate the ITT from wireline sonic data with check shots or VSPs to account for differences in the physics and scales of sonic and seismic measurements, predominantly velocity dispersion and fine-layering effects. This is done by matching the sonic ITT to a check-shot time at a particular depth, plotting the difference, or drift, between the two at other depths, and applying the drift correction (above).
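The same calibration applies directly to while-drilling Δt. A minimal Python sketch, with our own variable names and synthetic inputs assumed, integrates the slowness log to one-way time, ties it at one check-shot depth and reports the drift at the others:

```python
import numpy as np

def itt_and_drift(depth_ft, dt_us_ft, cs_depth_ft, cs_time_s, tie=0):
    """Integrated transit time (s, one-way) tied at check shot index `tie`."""
    # trapezoidal integration of slowness (usec/ft) over depth (ft)
    itt = np.concatenate(([0.0], np.cumsum(
        0.5 * (dt_us_ft[1:] + dt_us_ft[:-1]) * np.diff(depth_ft)) * 1e-6))
    # force a match at one check-shot depth
    itt += cs_time_s[tie] - np.interp(cs_depth_ft[tie], depth_ft, itt)
    # drift = check-shot time minus tied ITT at the other depths
    drift = cs_time_s - np.interp(cs_depth_ft, depth_ft, itt)
    return itt, drift
```

Doubling the tied ITT gives the two-way time used to hang logs on the seismic section.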
Detection of overpressure may also be achieved by watching for changes in sonic-while-drilling data trends as the well deepens. For a given lithology, compressional slowness varies with porosity. Normally, with increasing depth, increased overburden compacts sediments, decreasing porosity and compressional Δt. Deviation from the normal compaction trend, toward increasing Δt, may indicate overpressure in shales.12
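A trend-watching rule of this kind is simple to automate. The fragment below assumes, as one common choice the article does not prescribe, that normal-compaction slowness decays log-linearly with depth; it fits that trend over an analyst-picked interval and flags depths where measured Δt exceeds the trend by a tolerance.

```python
import numpy as np

def flag_overpressure(depth_ft, dt_us_ft, fit_top_ft, fit_base_ft, tol=0.05):
    """Return depths where slowness rises above the compaction trend."""
    window = (depth_ft >= fit_top_ft) & (depth_ft <= fit_base_ft)
    # log-linear fit: ln(dt) = a + b * z, with b < 0 under normal compaction
    b, a = np.polyfit(depth_ft[window], np.log(dt_us_ft[window]), 1)
    dt_trend = np.exp(a + b * depth_ft)
    return depth_ft[dt_us_ft > dt_trend * (1.0 + tol)]
```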
8. Minear J, Birchak R, Robbins C, Linyaev E, Mackie B, Young D and Malloy R: "Compressional Slowness Measurements While Drilling," Transactions of the SPWLA 36th Annual Logging Symposium, Paris, France, June 26-29, 1995, paper VV.
9. Aron J, Chang SK, Codazzi D, Dworak R, Hsu K, Lau T, Minerbo G and Yogeswaren E: "Real-Time Sonic Logging While Drilling in Hard and Soft Rocks," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper HH.
10. Slowness-time coherence (STC) processing detects coherent signals in the waveforms and computes a measure of coherence, or semblance. Maxima in semblance are interpreted as signal arrivals. For reference: Kimball CV and Marzetta TL: "Semblance Processing of Borehole Acoustic Array Data," Geophysics 49 (March 1984): 274-281.
11. For background on the mud telemetry system: Montaron BA, Hache J-MD and Voisin B: "Improvements in MWD (Measurements-While-Drilling) Telemetry: The Right Data at the Right Time," paper SPE 25356, presented at the SPE Asia Pacific Oil & Gas Conference, Singapore, February 8-10, 1993.
12. Hsu K, Hashem M, Bean C, Plumb R and Minerbo G: "Interpretation and Analysis of Sonic While Drilling Data in Overpressured Formations," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper FF.


Establishing a trend and departing from it. ISONIC and wireline Δt (μsec/ft) and gamma ray are plotted versus depth for three bit runs. A trend of decreasing slowness is delineated by measurements made while tripping in, despite the coarse sampling. Departure from that trend, detected while drilling, is noted by the arrow.

This technique was demonstrated on an exploration well in the Gulf of Mexico designed to test a section at depths greater than 15,000 ft [4572 m]. Offset wells indicated an overpressured section would be confronted somewhere between 8500 and 11,000 ft [2591 and 3353 m]. At 4802 ft [1464 m], 13 3/8-in. casing was set, then drilled out with a 12 1/4-in. bit. The ISONIC tool was put in the drillstring at 8783 ft [2677 m].

Sonic-while-drilling indication of overpressure in a slow formation. In these slow rocks, with compressional slownesses between 140 and 170 μsec/ft, the ISONIC tool is able to establish a normal compaction trend and detect deviation from it. Rate of penetration shows no noticeable reaction to the overpressure beginning at X1400 ft, but resistivity-while-drilling corroborates the sonic-while-drilling results.

In addition to logging while drilling, the ISONIC tool recorded while tripping in with each bit run. Tripping-in speeds of 2500 to 3600 ft/hr [762 to 1097 m/hr] give rise to coarsely sampled Δt data and imperfect depth control, but still allow a trend of decreasing slowness with depth to be established (above left).
At 9580 ft [2920 m], the while-drilling compressional slownesses began to increase relative to those expected for a normal compaction trend. Other measurements and logging-while-drilling data show similar behavior, building a strong argument for the presence of an abnormally pressured zone (next page). For example, resistivities measured while drilling show deviation from their established trend at the 9580-ft mark. Increased porosity and water content in overpressured shales lower resistivities relative to the normal compaction trend.13



In another Gulf of Mexico well, but this time logging in a sonically slow formation, the ISONIC tool again helps detect the onset of overpressure (previous page, right). The normal compaction trend follows decreasing slowness; overpressure is detected at X1400 ft, where sonic slownesses increase and resistivity-while-drilling also diverges from its established trend.
Currently the main advantages of the sonic-while-drilling method are that the measurements may be made with any bit type, and at any depth or well deviation. However, in contrast to the seismic method, a tool must be deployed downhole. Also, the method of matching synthetic traces to surface seismic traces to establish a time-depth relationship is reliable only if the sonic travel times have been calibrated by check-shot or VSP data that provide the total travel time between the depth where sonic times were measured and the surface.


Looking Ahead While Drilling

Individually, seismic while drilling and sonic while drilling have the capability of giving real-time answers to well drilling and design questions under appropriate conditions. Together, they have the potential to provide an accurate look-ahead technique for predicting the depth to a target or hazard in almost any environment. The new MACH-1 Seismic-Guided Drilling service effectively integrates the sonic- and seismic-while-drilling technologies to bring a real-time solution to time-depth problems associated with locating targets, anticipating hazards and selecting mud weights and casing points.

Further work is required to advance the interpretation of formation velocity information acquired while drilling, especially when it is destined for overpressure determination. Empirical relationships between velocity and pore pressure have limitations. The commonly used expressions were developed to explain effects seen in rapidly deposited shales that fail to dewater under burial and compaction. Better geological pore-pressure models, including ones that work in other sedimentary and tectonic environments, need to be developed to take full advantage of the benefits offered by seismic- and sonic-while-drilling data.

More companies are beginning to test these techniques.


Integrating while-drilling measurements for overpressure detection. At 9580 ft, rate of penetration, resistivities and compressional slownesses acquired while drilling all show indications of the onset of overpressure.

In a notoriously dangerous drilling area of the North Sea, where many operators have lost wells and experienced blowouts, Norsk Hydro has used seismic-while-drilling data along with MWD and drilling parameters to see reflectors 400 m [1312 ft] ahead of the bit. These surveys, in the deepest waters tested yet, about 400 m, required special hydrophones designed to operate under higher pressures.

On the west coast of Africa, Chevron Cabinda has relied on ISONIC measurements to monitor progress of a well in which optimization of mud weight was critical. Real-time compressional slowness measurements transmitted from the rig to the operations base resulted in improved drilling performance and reduced risk. Chevron anticipates that future operations will combine sonic- and seismic-while-drilling techniques in a single well.

As others gain experience with the technology, documenting the gains in drilling safety and efficiency that these while-drilling methods supply, seismic and sonic while drilling will earn their rightful places in the family of must-have real-time measurements. -LS

13. Hottmann and Johnson, reference 4.


Production Logging for Reservoir Testing

Using production logging tools to test wells provides a more accurate analysis of reservoir
parameters, such as permeability and skin damage. Measuring flow rate and pressure
immediately above a producing zone not only reduces wellbore storage effects but also
makes it practical to run transient tests without shutting in a well and halting production.

Pete Hegeman
Jacques Pelissier-Combescure
Sugar Land, Texas, USA

Although production logs are most commonly run to diagnose downhole problems
when surface flow-rate anomalies occur,
these tools can also be used during downhole transient tests to determine reservoir
properties. In essence, measuring the flow
rate downhole, just above the producing
zone, makes for better interpretation
because wellbore storage problems are
nearly eliminated. Analysis of the transients
can yield reservoir parameters such as permeability, skin and pressure at one moment
in the life of the reservoir.1
For help in preparation of this article, thanks to Gilbert Conort, Schlumberger Wireline & Testing, Montrouge, France; DeWayne Schnorr, Schlumberger Wireline & Testing, Anchorage, Alaska, USA; Keith Burgess, Schlumberger Wireline & Testing, Sugar Land, Texas, USA; and Gérard Catala, Schlumberger Wireline & Testing, Clamart, France.

PLT (Production Logging Tool) is a mark of Schlumberger.


The techniques for analyzing transient tests rely only on pressure measurements and assume a constant flow rate during the test period. In practice, the constant flow-rate situation prevails only during shut-in conditions. Thus, buildup tests have become the most commonly practiced well testing method. Buildup tests, however, are sometimes undesirable because the operator does not want the production lost or because the well may not flow again if shut in. In such circumstances, drawdown tests are preferable. In practice, it is difficult to achieve a constant flow rate out of the well, so these tests have traditionally been ruled out.

There are several advantages to testing a well, either at the surface or downhole, while flowing. In producing wells, less production is lost because the well is not shut in. Keeping the well on production is especially valuable for poor producers that may be difficult to return to production once shut in. In layered reservoirs, testing under drawdown reduces the possibility of crossflow between producing layers, whereas during a buildup test, crossflow can easily occur and complicate interpretation.2 So, testing a well under flowing conditions can be beneficial.

Accurately testing a well with only surface flow-rate measurements is difficult in practice. Surface production and testing equipment cannot hold a flow rate constant or measure flow rate accurately in a short time frame. This equipment is better suited to measuring flow rates over long periods, days rather than minutes or seconds, for commercial sales volumes or daily production data. Most surface facilities lack the accuracy to measure the quick changes in flow rate necessary for transient interpretation.

Testing a well with downhole pressure and flow sensors eliminates some of these complications because flow rate is measured just above the producing interval. Production logging tools measure flow rate more accurately than surface facilities do, especially for instantaneous or small rate changes.

Flow rates and pressure changes are closely associated: any change in one produces a corresponding change in the other. The challenge in well-test analysis is to distinguish between pressure changes caused by reservoir characteristics and those caused by varying flow rates. The pure reservoir signal can be determined by acquiring simultaneous flow and pressure measurements, which can easily be obtained in most wells using production logging tools. The PLT Production Logging Tool string, positioned at the top of the producing interval, records downhole flow rate and pressure data throughout the test.

1. Piers GE, Perkins J and Escott D: "A New Flowmeter for Production Logging and Well Testing," paper SPE 16819, presented at the 62nd SPE Annual Technical Conference and Exhibition, Dallas, Texas, USA, September 27-30, 1987.
2. Deruyck B, Ehlig-Economides C and Joseph J: "Testing Design and Analysis," Oilfield Review 4, no. 2 (April 1992): 28-45.


Well Testing

The three components of the classic well-testing problem are flow rate, pressure and the formation. During a well test, the reservoir is subjected to a known and controllable flow rate. Reservoir response is measured as pressure versus time. The goal is then to characterize reservoir properties. Complications arise because flow rate is typically measured at the wellhead, but interpretation models are based on flow rate at reservoir conditions. Under some ideal conditions, such as single-phase flow and constant wellbore storage, the surface flow rate can be related to downhole rate, allowing a good interpretation of the reservoir characteristics. If more than one phase flows in the reservoir or in the wellbore, oil and water for example, or gas evolving out of solution, then the interpretation becomes more difficult.

Obtaining interpretable data under nonideal conditions often requires test durations ranging from days to weeks so that conditions in the wellbore can stabilize. For a typical pressure buildup test, the test would have to be run until all afterflow and phase redistribution effects cease. Until then, reservoir response is masked by wellbore effects (top right).

Mechanisms that cause wellbore storage are compressibility of the fluids in the wellbore and any changes in the liquid level in the wellbore. After a well is shut in, flow from the reservoir does not stop immediately; rather, it continues at a diminishing rate until the well pressure stabilizes. Wellbore storage also varies with time due to segregation of fluids.

Two important advances have significantly improved control of well testing: downhole shut-in valves and downhole flow measurements. These techniques have eliminated most of the drawbacks inherent in surface shut-in testing, such as large wellbore storage, a long afterflow period and variations in wellbore storage (right).
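The size of the problem is easy to put in numbers with the textbook storage relations (standard well-test material, not values from this article): the storage coefficient of a fluid-filled wellbore is fluid compressibility times wellbore volume, and during a buildup the sandface keeps flowing at a rate set by how fast the wellbore pressure is changing.

```python
# Illustrative values only.
c_fluid = 1.0e-5          # fluid compressibility, 1/psi
v_wellbore = 250.0        # wellbore volume, bbl

C = c_fluid * v_wellbore                  # storage coefficient, bbl/psi
dp_dt = 40.0                              # observed buildup rate, psi/hr
q_afterflow = 24.0 * C * dp_dt            # sandface afterflow rate, B/D
print(f"C = {C:.4f} bbl/psi, afterflow = {q_afterflow:.1f} B/D")
```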


Wellbore storage effects. The ideal well response is compared with the response with skin effects but no storage, the response with storage effects but no skin, and the actual well response, plotted as pressure versus elapsed time. Wellbore storage and skin effects distort the data collected early in a transient test. Interference from other wells or boundaries affects later parts of the test. In the purple zone, radial flow occurs, allowing determination of formation permeability.

Downhole shut-in. The main advantages of downhole shut-in are minimization of wellbore storage effects and the reduced duration of the afterflow period, shown here by pressure and pressure-derivative curves versus elapsed time. In the surface shut-in test, wellbore storage masks the radial flow plateau for more than 100 hr. In the downhole shut-in test, radial flow is evident after 1 hr.


Flow profile from a multilayered reservoir. The PLT tool measures bottomhole pressure and obtains a flow profile over the entire producing interval; in this example, high-, medium- and low-permeability layers separated by shales contribute 15,000, 4000 and 500 B/D, respectively.

Drop-off memory logging tool. A battery-powered memory production logging tool, with pressure, temperature and spinner sensors, can be run on slickline and hung in a tubing nipple for an extended downhole test.

There are many ways to shut in a well downhole, from drillstem-test tools to wireline- and slickline-conveyed tools. The advantage is that no downhole measurement of flow rate is required; however, there are several disadvantages. This method is practical only for a shut-in test, so production is lost and returning the well to production may be difficult. Moreover, wireline- and slickline-conveyed valves are complicated to operate and may leak or fail.
Reasons for Downhole Measurements

Another approach to well testing is to measure flow rate downhole with a stationary production logging tool at or near the top of the reservoir. The advantage of this method is that the well does not have to be shut in for the transient test. Another advantage is that the stationary production log can be combined with a traditional flow survey versus depth, conducted prior to the transient test and again during the test, to investigate crossflow effects.

Although simultaneous measurement of downhole flow rates and pressures has been possible for some time with production logging tools, the use of such measurements for transient analysis in well testing is relatively new. A continuously measured flow rate can be processed with measured pressures to provide a response function that mimics what would have been measured as pressure if downhole flow rate had been constant.
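The simplest version of that processing is rate normalization: divide the drawdown by the simultaneously measured rate change to approximate the response per unit rate. Rigorous analysis uses superposition or deconvolution; the Python sketch below, with our own names and units, shows only the first-order idea.

```python
import numpy as np

def rate_normalized_response(t_hr, p_psi, q_bpd, p_init_psi, q_init_bpd):
    """Approximate constant-rate response from simultaneous p and q."""
    dp = p_init_psi - p_psi            # drawdown from initial pressure, psi
    dq = q_bpd - q_init_bpd            # downhole rate change, B/D
    ok = np.abs(dq) > 1e-6             # avoid dividing by a zero rate change
    return t_hr[ok], dp[ok] / dq[ok]   # psi per B/D versus time
```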
In many cases, particularly in thick or layered formations, only a small percentage of a perforated interval may be producing, often because of blocked perforations, the presence of low-permeability layers or poor pressure drawdown on a particular layer. A conventional surface well test may indicate the presence of major skin damage, but from the conventional data alone it would be impossible to determine the reason for the damage. Downhole flow measurements allow reservoir engineers to measure flow profiles in stabilized wells and calculate skin effects due to flow convergence. Thus, they can infer the true contribution that formation damage makes to the overall skin effect. This information can help design more effective stimulation treatments.

Downhole flow-rate measurements are usually obtained with spinner flowmeters run either on slickline for downhole recording or on electric line for real-time surface readout (above left). Continuous spinners are used to test high-rate wells, to perform flow measurements inside tubing if needed, and to test wells in which restrictions may prevent operation of a fullbore spinner. Continuous spinners are in-line and allow use of a combination of tools, including a fullbore spinner, below them in the tool string. Fullbore spinners are routinely used and considered the reference, and they are used in deviated wells.
Layered Reservoir Testing

Most of the world's oil fields have layers of permeable rock separated by impermeable shales or siltstones, and these layers usually have different reservoir properties. If all the layers are tested simultaneously and only downhole pressure is measured, it is impossible to obtain individual layer properties (above). Thus, special testing techniques are needed for layered reservoirs.


Production log across the producing interval, with tracks showing apparent fluid velocity, apparent fluid density, temperature against the geothermal gradient, and pressure, and with gas, oil and water contributions identified.

IPR curves of a multilayered reservoir, plotted as sandface pressure (psi) versus flow rate at surface conditions (B/D). A selective inflow performance test was run to determine the IPR curve for each of the four producing layers in a well. The static pressure of each layer can be estimated from the point at which the individual IPR curve intersects the vertical axis.

3. Schnorr DR: "More Answers from Production Logging Than Just Flow Profiles," Transactions of the SPWLA 37th Annual Logging Symposium, New Orleans, Louisiana, USA, June 16-19, 1996, paper KK.


Two economical methods of using production logging tools to perform multilayered reservoir tests are selective inflow performance tests and layered reservoir testing. Selective inflow performance tests are performed under stabilized conditions and are suitable for medium- to high-permeability layers that do not exhibit crossflow within the reservoir. Layered reservoir testing is conducted under transient conditions. Pressure and flow measurements are used to determine the optimum production rate for all producing layers.

The selective inflow performance test can provide an estimate of the inflow performance relationship (IPR) curve for each layer.3 As the well is put through a stepped production schedule with various surface flow rates, the production logging tool measures the bottomhole pressure and flow profile at the end of each step. From these production profiles, an IPR curve can be constructed for each layer using the data from all the flow profiles (above). Although a selective inflow performance test provides formation pressure and IPR for each layer, it does not give unique values for the permeability and skin factor of individual formation layers.
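Extracting layer static pressures from such a test is then a simple fit. The sketch below assumes straight-line IPRs (field IPRs are often curved) and uses hypothetical per-layer rates and end-of-step pressures.

```python
import numpy as np

# Hypothetical stabilized survey: per-layer rates (B/D) at each surface step,
# with the bottomhole flowing pressure (psi) measured at the end of each step.
steps_pwf = np.array([5400.0, 5100.0, 4800.0])
layer_rates = {                      # rates allocated from the flow profiles
    "layer_A": np.array([2000.0, 3500.0, 5000.0]),
    "layer_B": np.array([500.0, 1200.0, 1900.0]),
}

for name, q in layer_rates.items():
    # Straight-line IPR: pwf = p_static - q / J  (J = productivity index),
    # so the zero-rate intercept of the fit is the layer's static pressure.
    slope, p_static = np.polyfit(q, steps_pwf, 1)
    print(f"{name}: static pressure ~ {p_static:.0f} psi, "
          f"PI ~ {-1.0/slope:.2f} B/D/psi")
```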
If a reservoir has multiple layers, a transient test in which only downhole pressure is measured is virtually useless. All the layers have wellbore pressure as the common inner boundary, so the pressure alone does not convey enough information to determine the properties of the individual layers (above). At best, analysis of the pressure will yield average properties of the entire system. Flow rate, not pressure, indicates the properties of the layers. Good zones make large contributions to the total flow, whereas the poor layers make only small contributions. The method used to test a multilayered reservoir is to measure the contribution of each layer during the transient test.




The layered reservoir test requires careful planning and rigorous wellsite logging procedures because of the numerous events that can occur during a test. Interpreting layered reservoirs is complex because it involves both the identification of the reservoir model and the estimation of unknown parameters such as permeability, skin factor, reservoir geometry and pressure for each layer. Combining a selective inflow performance test with a layered reservoir test yields the best results for multilayered reservoirs, especially if there is multiphase flow inside the casing.

Simplified layered reservoir test sequence. Layered reservoir tests are multiple-rate tests in which stationary measurements of downhole rate and pressure are conducted above each layer, and flow profiles are acquired across all layers just before the surface flow rate is changed. In this two-layer test, the flowmeter was placed in two locations, above the topmost layer (Qt) and between the layers (Q2).

In addition to measuring flow profiles, layered reservoir tests acquire downhole pressures and flow rates versus time during each flow period (above). The PLT tool takes these measurements as it is stationed between layers and above the uppermost layer. Taking measurements at these stations, in effect, separates the layers. Bottomhole pressure is recorded continuously, but the rate per layer is measured only at discrete time intervals.4
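Individual layer rates then follow from simple differences between station readings, as in this sketch with hypothetical numbers keyed to the two-station test in the figure:

```python
# Stationary spinner readings (hypothetical), as in the two-layer test above:
# Qt is measured above the topmost layer, Q2 between the layers (both in B/D).
Qt = 4200.0
Q2 = 1500.0

layer1_rate = Qt - Q2   # the upper layer produces the difference between stations
layer2_rate = Q2        # everything entering below the lower station
print(f"Layer 1: {layer1_rate:.0f} B/D, Layer 2: {layer2_rate:.0f} B/D")
```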

Outlook

Well testing remains of fundamental importance in the development of oil and gas reserves, and production log flow measurements provide a valuable tool to evaluate well and reservoir performance. The trend is toward continual refinement of data acquisition and interpretation techniques, with a push for downhole measurement whenever possible. Recent tool advances improve measurements in deviated, multiphase-flow and low flow-rate wells, which have often posed problems for traditional spinners.

Horizontal wellbores and associated completion designs present several challenges to profile interpretation for conventional production logging sensors and techniques. Testing and interpretation are better understood in vertical wells than in horizontal wells.5 Wellbore storage effects, phase segregation and complex geometry in horizontal drainholes complicate analysis of downhole flow-rate measurements. Advances in numerical modeling techniques are overcoming some of the limitations by allowing better model matching and earlier determination of the flow regime.

As a result, production logs can be used to choose intervals that should be tested selectively, and new selective test procedures will help analyze limited sections in horizontal wells. In the future, these selective tests and numerical modeling will help reservoir engineers better identify formation property variations along the drainhole.
KR
4. "Layered Reservoir Testing," Middle East Well Evaluation Review no. 9 (1990): 22-47.
5. Clark G, Shah P, Deruyck B, Gupta DK and Sharma SK: "Horizontal Well Testing in India," Oilfield Review 2, no. 3 (July 1990): 64-67.


Changing the Shape of E&P Data Management

Thirty years ago, oil companies owned drilling rigs. Twenty years ago, they owned seismic vessels. Today, both activities have been outsourced. Now, management of oil company data, often called the very heart of a company's value, is also being outsourced. Here is a look at three companies that have chosen this path, and their observations on what they've learned so far.

Robert Beham
Conoco Inc.
Aberdeen, Scotland

Ignacio Layrisse
Intevep, S.A.
Caracas, Venezuela

Alastair Brown
Chris Mottershead
Jane Whitgift
BP Exploration Operating Co. Ltd.
Aberdeen, Scotland

Jairo Lugo
Lagoven
Caracas, Venezuela

Joe Cross
Conoco Inc.
Lafayette, Louisiana, USA
Louis Desroches
Caracas, Venezuela
Jørgen Espeland
Aberdeen, Scotland
Michael Greenberg
Paul Haines
Ken Landgren
Houston, Texas, USA


Orlando Moren
Eugenio Ochoa
Petróleos de Venezuela, S.A.
Caracas, Venezuela
Dennis O'Neill
Houston, Texas
Jim Sledz
Conoco Inc.
Houston, Texas
For help in preparation of this article, thanks to John
Adams and Alan Woodyard, Conoco Inc., Houston, Texas,
USA; Cyril Andalcio, Alan Black, Herve Colin, Alberto
Nicoletti, Todd Olsen, Matt Vanderfeen, GeoQuest,
Caracas, Venezuela; Carmelo Arroyo, Zak Crawford,
Geraldine McEwan and Tom O'Rourke, GeoQuest,
Aberdeen, Scotland; Bill Baksi, Meyer Bengio, Knut
Blow, John Dinning and Mike Rosenmayer, GeoQuest,
Houston, Texas; Karen El-Tawil, Geco-Prakla, Houston,
Texas; Jay Haskell, Schlumberger Oilfield Services,
Caracas, Venezuela; Brent McDermed, Legacy Solutions,
Houston, Texas; Tony Oldfield, Schlumberger Integrated
Project Management, Aberdeen, Scotland; John Pooler,
BP Exploration Operating Co. Ltd., Aberdeen, Scotland;
David Scheibner, Schlumberger Austin Product Center,
Austin, Texas; Gustavo Valdés, Centro de Petróleos de Venezuela, S.A., Caracas, Venezuela.
LogDB, LogSAFE, Finder and FMI (Fullbore Formation
MicroImager) are marks of Schlumberger. Dwights is a
mark of Dwights Energydata, Inc. Openworks is a mark
of Landmark Graphics. Oracle is a mark of Oracle Corp.
PI is a mark of Petroleum Information/Dwights LLC.
POSC is a mark of Petrotechnical Open Software Corp.
QC Data is a mark of QC Data Holdings Limited. Sun is
a mark of Sun Microsystems, Inc. UNIX is a mark of
AT&T. VAX is a mark of Digital Equipment Corp.

The long, hard pull through a decade of low oil prices yielded some sweet rewards for the exploration and production (E&P) industry. Many inefficiencies in E&P operations have been overcome; organizations have been downsized, rightsized and reengineered to strike quickly at short-lived opportunity; more powerful technologies are efficiently mobilized; the fastest computers and the ablest software are purring along; and multidisciplinary teams are empowered to maximize the motivation of each player. Economic necessity and a new posture of looking forward rather than just surviving provided the impetus for all these advances. And yet, still more is needed to reach even higher levels of productivity. What do you do for an encore? You set upon the thorny, unglamorous and indispensable world of data management.
Today, with E&P organizations driving to enhance productivity and recovery ratio, data management has moved into the mainstream. The transition has taken some time. In the early 1990s, the E&P community focused on cost containment, and to contain costs in data handling, the first bits of data began to trickle off legacy mainframes and onto distributed desktop workstations.1 It was a new world, and the questions of the day concerned mechanics and economics: How does it work? How much will it save us? What is the best technology and work process for our needs?
1. Balough S, Betts P, Breig J, Karcher B, Erlich A, Green J, Haines P, Landgren K, Marsden R, Smith D, Pohlman J, Shields W and Winczewski L: "Managing Oilfield Data Management," Oilfield Review 6, no. 4 (July 1994): 32-49.


By the mid 1990s, the desktop environment began to mature and stabilize. Data storage problems diminished. Software grew more robust and powerful. The walls between disciplines, the data silos, began to crumble. The focus in data management shifted from savings to value creation, still with an eye to making the technology work and to the promise of seamless data integration.
By the late 1990s, the data management story has changed as it moved to the high-priority list. No longer is technology itself the main concern, not only because its novelty has worn off, but also because its rapid cycle time has elevated expectations. Now, users don't want more technology; they want the technology to disappear, to become transparent. Data management issues today are less about technology, and more about changing the business process to become more efficient: How do we improve the way we manage data and who does what? If we outsource, what do we keep doing, what do we outsource, and how do we manage it?
Changing Data Management is Changing Culture

A major trend today in E&P organizations is learning how to successfully outsource all, part or some of data management (below). While outsourcing is far from new, it is new to the field of E&P data management.

Several forces are driving this change. One is the growing volume, diversity and complexity of the data itself. Not only is there more data, but there is more data in a greater variety of forms: digital files, images, tables, text, maps and physical specimens, such as fluids, cores and thin sections; data from more vendors and from an ever-increasing number of partners; and data from various generations of processing and interpretation. This means the task of management has grown along with the complexity of data. In addition, as data diversity and data volume grow, the number of interpretable relationships between data entities can become unmanageable. "People can typically handle about 10 to 15 unrelated items at a time," said Knut Blow of GeoQuest. "Some very good people can handle 25 items. But in complex settings, there may be upward of 50 different pieces of information that need to be integrated. The future will require developing ways to handle shortcuts and refine information about the information, to break it into manageable pieces."


The Who, What, Where and How of E&P Data

                Oil Company                        Data Management Partner
Who             100% oil company staff             Partner staffing responsive to work demands
What            Oil company data                   Oil company data (oil company retains ownership of data)
Where           All at oil company premises        Local or regional data center serving distributed environment
How             Legacy systems (many solutions)    Commercial systems (few solutions)

Comparison of how data have been handled in the past, and the direction taken by some oil companies toward outsourcing. In general, the traditional approach of the oil company was "my data, my people, my machines, my software, on my premises." Today, "my data" stays the same, but everything else changes. Oil company personnel are still involved in overseeing management, and in high-order data validation, which requires intimate local knowledge, or which invokes a significant HSE concern. Similarly, it is likely that at least some data management will continue to be conducted on oil company premises, although many operators are moving some tasks, especially those performed easily in batch mode, off-site to regional data centers.

A second factor is the drive to increase the productivity of the E&P process. At BP, for example, a chief motivation is to free geoscientists to concentrate on building reserves. For Petróleos de Venezuela, S.A. (PDVSA), it is to double production by increasing internal efficiency and by opening assets to partnerships with multinationals, both of which require state-of-the-art data management. In most organizations, improvements in productivity require more streamlined and efficient data handling.

Other driving factors are cost reduction and a refocusing within operating companies on the core business of finding and producing oil. For cost reduction, significant savings come from outsourcing management of large volumes of bulk data to shared infrastructures, either government centers or commercial operations. The emphasis on core business follows the goal of increasing production with the same personnel levels, and therefore having geoscientists focus on finding and producing oil, not on noncore activities like data management.

But why turn outside for solutions? Not all E&P companies do. Some still prefer to build and operate their own systems. The growing numbers that have sought outside assistance, however, often have similar observations:
• "We could not afford to do it all and be the best."
• "Service companies are in a better position to advance certain technologies efficiently and stay ahead of the fast rate of technological change."
• "Our in-house system worked fine, but sharing data with a bigger group of partners was difficult."
• "We now probably spend 10% of our time manipulating data, down from about 30% before."
Some report difficulty in making the transition, and some initial loss of efficiency. Whether this dip happens, and its duration and depth, depends on conditions before the change.


Where generations of data meet. Geoscientists, graphics specialists and data processors at GeoQuest in Caracas work side by side on different parts of the same data set for a PDVSA project. Proximity saves time in collaboration on data updating and quality checking.

"We did not experience a decline in productivity after initiating our new approach," said Chris Mottershead of BP. "For us, the data themselves weren't the source of inefficiency. Our data were in good shape. It was a people problem: we were down to so few people who understood data management. This was a key motivation for us to quickly bring our data partners up to speed and make the system work."

For Conoco, which started its effort after BP, the benefits are still in the future, as efforts now focus on building a critical mass of high-quality, validated data. "We don't see the benefit yet," said Jim Sledz. "We won't see it for a while. We're still laying the foundation for the house."
As part of the move toward outsourcing, oil companies are taking three approaches to shared data centers, all of which offer an efficiency of scale not possible for a single operator: government-led efforts, operator initiatives and service company initiatives. National archives have been in place for some time in the UK, Norway, Kazakhstan and Algeria. The goal common to all of these efforts is to reduce the cost of data handling by reducing the need for multiple copies and by gaining economies of scale associated with centralized data stores. In the last few years, these efforts have become more active in processing data requests, and are starting to function more as live libraries than archives.
One of the more ambitious operator-led efforts was initiated in 1995 in London, England. A consortium of 40 operating companies working in the UK sector of the North Sea formed the Common Data Access (CDA), an archive near London that houses all partner data and public data, in which firewalls protect data privacy. Examples of vendor-sponsored efforts are the LogSAFE program and the Houston Data Management Center, which was established this September for companies operating in North America. The LogSAFE program is an on-line service providing log data archiving and transmission, with data access in 24 hours or less. Active since 1991, the program now covers the most active areas of the North American market.
In the move toward outsourcing and data consortia, all oil companies are seeking essentially the same goal with their data management: immediate, seamless and secure access to the greatest possible diversity of relevant, high-quality data and interpretations. With this capability, geoscientists can entertain more "what-if" scenarios in less time and develop a higher degree of certainty about each decision. Evaluation of more prospects and higher certainty ultimately translate into fewer lost opportunities and greater overall productivity.

There are nearly as many paths to this vision as there are travelers on the path (see "How Do You Relate to Data?" next page). Within each operating company, there are often three types of people who share the same goal but have varying, and sometimes conflicting, views of how to get there. These groups can be roughly divided into three categories: Data Visionaries, Geoscientists and Data Managers.

Data Visionaries: Typically a mid-level manager, sometimes at the VP level, charged with framing a vision and building consensus and mechanisms to achieve it. They may have arrived at this position through the geoscience path. Visionaries view good data management as a key to higher productivity and a tool for better risk management. They may not view all of it, however, as a core expertise. Visionaries tend to view software and hardware as "less is more," and favor more sharply defined use of technology to solve problems at the heart of value generation.

Geoscientists: Charged with getting the job done, the geoscientist is often the focus of process reengineering. He or she has the most intimate knowledge of information requirements, yet sometimes has conflicting interests in data management. Geoscientists may load or verify their own data because they don't trust the existing system, or because of the temptation to "do it all myself because I can." And yet, they ultimately want to have as little as possible to do with data management. For them, data management is nonproductive time, diverted from picking formation tops or fairways. Because geoscientists may have seen data management initiatives come and go, they may rely on proven and often proprietary home-grown solutions that enable individuals to work, but prohibit the flow of knowledge to others. Nevertheless, they are often open to better ways of working.

Data Managers: Data managers oversee a team that loads, formats and manipulates data, and performs some data quality assurance, often up to the gray area that marks the beginning of interpretation. The data manager and the team members may have come from a systems science or computer science background and have some degree of geoscience exposure, often from on-the-job training. Data managers view data management as the lifeblood of the E&P process and therefore as a core expertise.
To understand the diversity of these views from a variety of organizations, Oilfield Review asked the same six questions of E&P professionals in three oil companies: PDVSA; BP Aberdeen, Scotland; and Conoco USA. This trio represents three approaches to data management. PDVSA, a major national, is in the early stages of revamping its data management to attract multinational investors and double production by the year 2007. BP Aberdeen is the most advanced and aggressive in outsourcing, and Conoco is taking a middle ground in outsourcing. From each, a range of perspectives is represented, from geoscientists and IT support people in the trenches to data visionaries.


How Do You Relate to Data?

It is written, in the apocrypha of information science, that how you think about and work with data relates to your age. If you want to move out of the box you are in (mainly, if you want to act younger than you are) you will have to work at it, like learning a new language.

As with all generalizations, exceptions abound. And within organizations, nothing accelerates a culture change as much as a resounding success with a new method. But at the beginning of a new initiative in data management, the reaction to the change may follow these lines. People work with data as if they belong to one of three categories: Baseball Kids, TV Kids or Video Kids.

Baseball Kids (age 50+) grew up before television became a defining cultural medium. They played baseball (or soccer, or cricket) instead of watching TV. They learned computing as adults and are occasionally comfortable with it. They might have written some code, and might even have an advanced degree in computer science. Today, they are senior managers and executives. They keep a pocket calendar that they update in pencil.

TV Kids (age 30 to 49) grew up with television, probably did FORTRAN programming on punch cards in middle school and cut their teeth on home-built microcomputers. They equate power with hardware and software: "The more tools I have, the more control I have; the more control I have, the more I can do." They are least likely to willingly surrender control of data or equipment. As geoscientists, they tend to have multiple computer screens, disk drives and tape drives. They enjoy writing code and take pride in their macros. They have a calendar program on their desktop computer.

Video Kids (age 30 and under) grew up with video games, and by the time they left college, the dormitories were wired to the Internet. They never used a typewriter or carbon paper. They are not likely to care about control of data, since they are used to data being served to them. They do not care where data reside. Their sense of power resides in work they do with data, not on data. They can write code, but prefer to let others do it. They organize their life with a PDA, which they download to their notebook computer; they eschew paper. (PDA: a personal digital assistant, which fits in your palm; see a local Video Kid for a demo.)

Petróleos de Venezuela, S.A. (PDVSA)
Caracas, Venezuela

Ignacio Layrisse
General Manager of Exploration and Production
Intevep, S.A.

Jairo Lugo
Information Technology Exploration Geologist
Lagoven

Orlando Moren
E&P Consultant for the BADEP project
PDVSA

Eugenio Ochoa
Manager of E&P for the BADEP project
PDVSA
Venezuela produces 3.2 million barrels of oil per day, ranking as the fifth largest oil producer worldwide.2 All upstream and downstream operations are overseen by an arm of the government, PDVSA. Under PDVSA, the three major exploration and production affiliates are Lagoven, Corpoven and Maraven. INTEVEP is the research and development arm of PDVSA. In January 1997, PDVSA spun off data management to a project called BADEP, which is coordinated by representatives from PDVSA, its affiliates and GeoQuest.3 Its objective is to migrate data management from legacy systems onto standardized, commercial data management and interpretation systems. The BADEP team, which by year end will consist of about 150 people at 21 sites, also establishes service lines to support about 2000 data users, and performs data loading and some data quality assurance (below).

The BADEP approach was viewed as a way to most rapidly advance the data management system and to attract investment from multinationals. This investment is seen as a necessary step in the effort to double Venezuela's oil production by 2007. Training needed for migration to newer data management systems is coordinated through the education arm of PDVSA, Centro Internacional de Educación y Desarrollo (CIED), located in Caracas.

As one of the largest corporate-wide efforts, the data management task is huge. It involves three main steps. The first is migration to the Finder system, and creation of a common database system for all three affiliates (below). The second is validation of data, some of which date back nearly 80 years. This step alone will take several years to complete. And the third is to develop a work flow to capture, load and validate new data, and establish a means to update interpretations and make data corrections. During all three steps, effort is focused on moving users from the old concept of data management as a "safe" for data, toward a new concept of live information management: making sure value added by interpretations is preserved and accessible.
What are your top data management challenges today? What were they three years ago?

All
• Improving productivity of geoscientists to reduce the time they spend accessing data.
• Improving data quality. Data up to five years old are in satisfactory shape, but older legacy data (we have wells going back to 1920) are in questionable condition.
• Creating an integrated working environment, in which the data, systems and applications function as a single unit. Now we have geophysical data in one database, core data somewhere else, logs in a third place. We want geoscientists to have it all at their fingertips, and to be able to work as if all data reside in one place.
2. BP Statistical Review for 1996: the top five producers are Saudi Arabia, USA, the Russian Federation, Mexico and Venezuela.
3. BADEP is a Spanish acronym for exploration and production database: Banco de Datos de Exploración y Producción.


Orlando Moren
My first priority is to manage not just the library, but the meaning of the library. As we continue to outsource a larger volume of our data management, we need to find a way to include not just the numbers, but also the interpretations.

Data volume is an issue. We simply have more data today than ever before. So any management system we establish must be powerful enough to function with an exponential growth in data. At the same time, we may be focusing on an increasingly smaller percentage of data. So we need to be able to quickly get to the most important data, which are buried deeper under a growing pile.

Trust is also a key factor. People used to rely on their own resources. Now, I am asked to give my knowledge to someone else. Trust has to come from the top down, and that trust needs to be built with milestones everyone can see. In the first stages of outsourcing, it is essential to have an early and persuasive success to build confidence when trust is just beginning to be built.

What data loading looks like. Jesus Pagua, a member of the BADEP team at PDVSA, loads data into the LogDB system for Corpoven.

How do you measure the efficiency of your data management in financial terms? Give an example of how a change in work process or technology provided you with a significant efficiency gain.

Orlando Moren
One measure of efficiency is the length of the data path from the source to the database. The longer the path, the lower the efficiency. Three years ago, it took a person on the rig an hour to fill out the daily drilling report, then send it to the regional center. Now, the system on the rig is wired to a database at the regional center. This allows rig personnel to compare their well with others in the area, and then drop the report directly into the database.

Training for the move to a new data management system. Cyril Andalcio of GeoQuest, center, runs a training school that teaches PDVSA affiliate geoscientists how to use new data interpretation and management tools. Across all PDVSA affiliates, more than 400 data users will receive training at Centro Internacional de Educación y Desarrollo (CIED), the company's education center in Caracas, Venezuela.


Ignacio Layrisse
Our gas-lift operations in Maracaibo involve 6000 wells, probably the largest such operation worldwide. In our gas-lift programs, the process from starting analysis to taking action took about 10 days. Lagoven took data from all 6000 wells and, after cleaning up the data, put them into a regional database. Using a parallel processor, two or three engineers can now do in two or three hours what took 20 engineers more than a week.


Based on your experience, what are the hidden costs of data management?

Orlando Moren
The problem is often having too much technology, and redundancy between databases. At worst, 70% of the time can be spent trying to find data.

Jairo Lugo
Poor tape storage can be a hidden cost. If tapes need to be transcribed, then the added cost of better storage is trivial compared to the cost of recopying.

In making the transition to outsourcing of data management, what are the foremost benefits? The foremost pitfalls?

All
Benefits:
• Cost savings.
• Allowing geoscientists to concentrate on their core business: evaluating prospects and finding oil.
• Incorporation of best practices, drawing on a vendor's broad experience.
Pitfalls:
• Retaining data translators, people who speak both the language of data management and of geoscience.
• Assuring that the outsourcing vendor invests an emotional account in our data, to be assured that they are safeguarded. "I prefer the word 'alliance' to 'outsource,'" Ignacio Layrisse said. "Outsourcing sounds like I am getting rid of my data. In an alliance, the vendor keeps in contact with me and my reality; it is like running a three-legged race. You can win if you coordinate your efforts, but you fall flat if you are out of step."
• Protecting data confidentiality.

Which data management functions are considered core expertise?

All
Data quality assurance remains a core function at PDVSA. Secondly, we need to keep some data management expertise in-house to understand the commercial data management system and provide constructive feedback to determine its optimum use.

What is your data management dream? How do you imagine your data management in the year 2000 will be different from what it is today?

Jairo Lugo
My dream is to have it all on a map, to have everything linked to a geographic coordinate.

Orlando Moren
All revisions are automatically updated. My dream is for a customized data system that knows what I want to know. It might even learn from my queries.

Eugenio Ochoa
By the year 2000, the walls between geoscience disciplines will have come down completely. The geologist, geophysicist and reservoir engineer, for example, will be working in a seamless asset team using what appears to them as a single database. All data are immediately available to all team members to view, interpret and use. There are no separate reports by discipline, but the team makes one report, with all knowledge shared in one database. This will be necessary to get to the next level of integration: continuous, real-time dynamic updating of the field model.

Ignacio Layrisse
To reach this next level of integration we need a culture change, toward the Japanese model of the team, in which responsibilities are shared and felt by all team members. For people to get out of the mindset of "these are my data" and "part of my pay is what I know," they need to achieve, as Orlando suggested earlier, a strong and early success in a team enterprise. An indisputable success is the only way to achieve the culture change.

BP Exploration Operating Company Limited
Aberdeen, Scotland

Alastair Brown
Senior Petrophysicist/Geologist

Chris Mottershead
Technology Business Manager

Jane Whitgift
Business Information Manager

Nearly a quarter of the oil consumed in the UK comes from North Sea fields operated by BP, and the majority of those operations are coordinated from BP's center in Aberdeen. As part of a larger effort started four years ago to lower cost and improve its productivity, BP Aberdeen moved aggressively toward outsourcing of data management. In two years, from 1993 to 1995, the cost of data management fell by nearly 60%. Now an industry leader in this approach, BP's operations have drawn attention across the industry. Chris Mottershead, the technology business manager who helped frame the new path for BP, has consulted with visitors from nearly all petroleum provinces.

Today, GeoQuest handles BP's data management and a major portion of graphics support, and provides information technology support (hardware, local- and wide-area networking and desktop support) under a teaming arrangement with Science Applications International Corporation (SAIC). Around two dozen GeoQuest people work for BP: a small service delivery team is based onsite, while the bulk of staff is located in the GeoQuest Service Centre about seven miles [11 km] from the BP offices (below). Well, wireline and seismic data are all stored at the offsite location and accessed from the customer's desktop via a 34-MB/sec network link.


Geraldine McEwan, left, with Jennifer Laird, restoring BP map files from backup in the server room at the GeoQuest office in downtown Aberdeen. The bulk of BP digital data is managed from this site and served to users at the BP office on the other side of the city.


A BP Aberdeen view of data management. BP strives for full integration between its three main data sets: subsurface, wells and facilities. Ultimately, a user would be able to bring up a seismic trace or a finance memo with equal ease. On the vertical axis, data concern only facts, such as "porosity is 20%." Information would be "20% porosity means we are above the Dunlin formation"; knowledge would be "we might get a water influx"; and understanding would be "despite all the indicators, we will not get a water influx." On the axis coming out of the page, BP views the shared earth model as "allowing us to worry about managing only the few percent of data that are critical to our business decisions," said Chris Mottershead.


BP groups its data into two categories, structured and unstructured. Structured data are electronically managed, usually by a database system, providing security, integrity and lineage control. These data, including well, geological and production information, well logs and seismic, are managed by GeoQuest as a shared resource for all appropriately authorized users. By volume, about 80% of data are structured.

Unstructured data reside with, and are controlled by, geoscientists and generally consist of live, working documents, such as maps and reservoir simulation outputs of the users. Unstructured data are available to, and useable by, a few individuals and small groups. Knowledge of integrity, currency and lineage, however, generally resides with only one individual.

"In our projects we try to structure as much data as possible," Chris Mottershead said, "but in the end, we live with the contradiction that our smallest but most valuable volume of data is not structured. We can see a future in which Web-based search engines will provide tools to allow us to flexibly store and retrieve these unstructured data."



What are your top data management challenges today? What were they three years ago?

Chris Mottershead
Three years ago, our goal was to lower data unit costs, such as the cost of data management per head, per barrel of reserves added or as a percentage of lifting cost. We lowered these costs so efficiently that we found ourselves with only a few people who knew data management. In a way, we had become victims of our own success, and placed ourselves at risk. What if our handful of key people left, or moved to other jobs? We saw outsourcing as a means to lower this risk and develop a single reliable source of data management.

When we talk to other industry people about why we chose this path, there is a temptation to take the BP solution and plug it in elsewhere. What we did, however, was motivated by our business structures and goals: because it works for BP does not mean it's a plug-and-play for everyone.

Our goal is to double the productivity of our staff to meet our aggressive hydrocarbon productivity goals. We don't have the people to do it, so the only way to grow is to try a different approach. The way we have chosen is to make geoscientists more productive by letting them shed tasks not core to growing productivity; that is, by outsourcing data management.
Jane Whitgift
Our business structure is crucial for making this approach work. BP functions as 12 independent businesses that all focus on the same goal of growing reserves and production. Outsourcing of data management is one device to free geoscientists to be more productive. A related approach is to create learning, integrated business teams. In the past, there were walls between disciplines. The geophysicist might pass his or her information to the facilities engineer, and that would be the end of their communication. The facilities engineer might not understand the implications of the geophysics, and as a result, facilities might be underdesigned. Now, people work in fuzzy teams, so the geophysicist knows which data are of interest to the facilities engineer, and the problem of underdesigning may disappear. The idea is that if data are shared by an integrated team, each team member will have less data to consider, and only the most important data.
Alastair Brown
Many of the technical problems in data handling of five years ago have diminished. This allows geoscientists to be more productive by creating more time to interpret the data, to further enhance the finding and production of additional hydrocarbon reserves.

The second challenge is rapid turnaround of data interpretation, measured from the time the tool comes out of the hole to the time we have an interpretation. Now we can have an interpretation on the desktop in a few hours. Three years ago, it would have taken longer, since the process was more awkward and labor-intensive.

The third challenge would be cross-platform communication. We still have two difficulties here: links can corrupt data, and we need to solve the problem of linking multiple databases and multiple formats of data. In this respect, the demands of data are always a step or two ahead of the capability of the system for data handling. It's the nature of the beast: we don't know what our data demands will be in three years, so how can we build a system to handle it? For example, today we can display and interactively manipulate on a computer screen core photos at different scales, along with thin sections and an FMI borehole image log. In 1994 we couldn't do this, and getting here took a leap.

For me, the overall challenge in the future is to further optimize our data management, integration and manipulation processes to ensure that we do not become slaves to our data. A lot of data is just the ammunition used by the geoscientist to better describe the size and flow characteristics of hydrocarbons within the subsurface, in order to more efficiently find new reserves and optimally produce existing developments.

How do you measure the efficiency of your data management in financial terms? Give an example of how a change in work process or technology provided you with a significant efficiency gain.

Chris Mottershead
We use surveys of our 300 data users to gauge not only the efficiency, but the success of our data management approach. Three years ago, when we started data management outsourcing, our problem was lack of people, not lack of data quality. Our data were in good shape when we started outsourcing, so that enabled us to get up to speed quickly. We cut costs by 50% within the first year, and improved staff productivity and end-user satisfaction by 10% per quarter. Satisfaction was measured simply by asking users, "Are you happy?"

In 1993, we mapped our work flow and found that we had 500 geoscience applications. By careful winnowing, we cut that to seven core applications and eight specialist applications, then we started the long process of building links between those 15 applications. Users at first rebelled at the thought of giving up technology, but we were able to convince them that less is indeed more when it comes to software. We focused our support resources to make a few applications work well, rather than many that limped along. In the end, we don't believe it's valuable to have anyone doing everything; we encourage them to focus on the most valuable things. For example, we now have geoscientists picking tops, not writing code.
Based on your experience, what are the hidden costs of data management?

Chris Mottershead
We probably spend 20% of our time trying to close disconnects in our less-than-ideal data management, mostly with unstructured data. Now we have data that reside in three places: with the user, with the team and in the corporate database. There is some redundancy. Getting rid of duplicate logs, for example, will reduce cost, but not add value. Because we're talking about a small but significant percentage of data that is redundant, there is probably limited value in rationalizing the user-team-corporate databases into a single database.

Degree of Operator Intervention in Outsourced Data Management: A BP Approach

Risk: Low
Example: Purchase a PC
Who does the work: Contractor only

Risk: Moderate
Example: Is this the right gamma ray log for this well?
Who does the work: Contractor, following agreed procedures and goal setting

Risk: High, especially involving health, safety or environmental (HSE) concerns
Example: Deviation survey
Who does the work: Contractor, with close supervision by the operating company to check how the deviation survey was performed, and how data were entered in the database
Alastair Brown
A cost that is sometimes not considered is that of managing the outsourcing of data management. It becomes too easy to make reports and maps and fancy plots. You suddenly have all these resources at your fingertips, and there is a temptation to manipulate data for the sake of manipulating data. Managing this outsourcing resource means you keep everyone focused only on tasks that grow value.
In making the transition to outsourcing of data management, what are the foremost benefits? Foremost pitfalls?

Chris Mottershead and Jane Whitgift
Benefits:
• Longevity and security of data.
• Outsourcing of data management formalizes data management as a stand-alone task, which falls under the purview of a vendor devoted to that purpose only.
• Releasing interpretation people from data management obligations, allowing them to focus on geoscience.
• Removing islands of data defined by discipline, and allowing for introduction of standards that permit easier sharing of data throughout the value chain. This removal of legacy systems, which tended to be discipline-specific, also allows for easier sharing of data and interpretations with partners and contractors.
Pitfalls:
• Giving cost reduction too much weight. Driving down cost needs to be an issue, but not the only issue. One needs to keep an eye as well on the creation of long-term value.
• Inflated expectations. Any change involves some initial awkwardness. Shifting data management outside the company is no exception. "There is a risk that efficiency and productivity decline, if not well managed," said Chris Mottershead. "As responsible users of data, we need to keep track of problems, and quickly develop solutions. Initiating a conversation to find a solution is more productive than feeling victimized by the fact that a plotter doesn't work."
• Believing someone else can sort out your data better than you can. This is a falsehood. The petrophysicist is ultimately responsible for data quality. You can delegate the act of data management, but you can't delegate accountability.
• Retaining the ability to manage outsourcing. The degree of intervention in the process of data management depends on the risk associated with the data (above). In the case of a deviation survey, we encourage a high degree of intervention, since erroneous survey coordinates can mean you drill into an existing well, with potentially catastrophic losses.
Alastair Brown
A chief pitfall is the failure to manage the outsourcing of data. An analogy is taking a trip in a fast car: the car is quick and convenient, but it doesn't know where to go. You need to steer it; otherwise you might end up at the town dump rather than the bank.

A related pitfall is thinking, in the world of software, that more and most recent are better. When it comes to software and data handling, fewer, smarter steps with simple systems are better than many dumb steps with complicated systems. For every minute you spend manipulating and interpreting data in cyberspace, you need to spend twice the time thinking about what you want to obtain from the data, about what are the most valuable steps to take, not what are the possible steps to take.

Another perceived pitfall is becoming reliant upon the vendor, with a resulting loss in our own skills base. We could not easily make a U-turn if things do not work out. I think there are two keys to minimizing this risk. The first is financial: both parties have to win. The second is people: there has to be a bond and a mutual trust so that any issues of conflict are aired immediately, without focus on finger-pointing but on problem resolution. To achieve this bond, for us at least, it is tremendously helpful to have the vendor move in, to live and work with us to understand our culture, work processes and requirements.


The main benefit of outsourcing is saving time. We now probably spend 10% of our time manipulating data, down from about 30% before. I doubt we would go lower than 10%; you need to spend some time checking, working with and validating the data, looking them over. That's still part of the thinking process.

Another benefit is that in passing data management to the vendor, we also pass along personnel management. Vendors can more easily move staffing levels up and down in response to demand, since they can shuttle people to other projects.
Which data management functions are considered core expertise?

Chris Mottershead
Management of any data associated with a high-risk decision, or affecting HSE, remains a core expertise. We need to know exactly how these data are managed, and monitor the management, but not perform the task ourselves.

Jane Whitgift
Another core expertise is remaining an informed buyer of data management services, to distinguish service level provisions and efficiencies. Data management is like many other of our activities in which we lean hard on vendors. We need to keep and cultivate people with in-depth understanding in key functional areas, such as structural engineering, chemistry, and in data management. We don't need a whole department, but a person who stays up to speed enough to be an informed buyer. For all data management functions, except those of high risk, we need to know the what and when, but not the how.
Alastair Brown
We need to be able to follow and verify the audit trail of data, and not just the deviation surveys, but also well log data, perforation intervals and any data that are crucial to the safety and success of the well. GeoQuest might do the data handling and formatting and archiving, but BP remains responsible for the data. So 10% of our time will remain occupied with checking and managing this interface. Perhaps in the future it will shrink to 5% when GeoQuest gets to know our data, but it can never go to zero.
What is your data management dream? How do you imagine your data management in the year 2000 will be different from what it is today?

Chris Mottershead
There are two interwoven dreams: the technology dream and the human dream. The technology dream is a shared earth model: a single source that holds all data from all sources. But it is more than a depository of data, since a depository does not add value. What adds value is information about uncertainties inherent in the data, so that you know what are high-quality data and what are of lesser certainty. This will allow you to more fully rationalize different types and qualities of data in the overall reservoir model. You can't do this today. At present, the small subset of high-quality data is diluted by the lower quality data, and you lose the advantage of the high-quality data.

The human dream is the behavioral changes needed to achieve integrated learning teams. In such an environment, team members would go out of their way to share knowledge. This exchange would be part of the job, not an added extra. "I'm not sure how we encourage this to happen," Chris Mottershead said. "If we knew, we'd have the problem solved!"
Alastair Brown
With the tremendous growth in new software and hardware technologies to acquire and handle increasing quantities of data, plus the easier routes to manipulate these data, we have a danger of data overload. The exponential improvements in technology allow us to do bigger, better and more things. But it's up to us to focus on adding value: to use data intelligently and efficiently and not become slaves to our data. I think the companies that will prosper won't be distracted by the new toys. They will focus on minimizing high-quality, reliable data sets, minimizing data movement and manipulation, and maximizing the quality of their interpretation in order to add value.


Conoco Inc.

Robert Beham
Senior Geophysical Advisor
Aberdeen, Scotland

Joe Cross
Geophysicist
Lafayette, Louisiana, USA

Jim Sledz
Director, Global Exploration
Information Management
Strategy
Houston, Texas, USA

Conoco is the eleventh-ranked US oil company according to assets, producing 366,000 barrels [58,157 m³] of petroleum liquids per day, with slightly more than 10% coming from the Gulf of Mexico.

Conoco's operations in the USA have taken a middle road between the approach of BP and what PDVSA hopes to achieve. Their outsourcing is farther along than that of PDVSA, but not as far as BP's. Conoco has outsourced data management and application support to GeoQuest at five locations, and is migrating from an in-house database running on a VAX to UNIX-based Sun systems that access Oracle databases. About 60% of the work consists of loading data into workstations, building regional master databases, providing graphics support and overseeing archives of physical and digital data, such as seismic data, logs, cores, cuttings, books and journals. The remaining 40% is software application support. Since the start of outsourcing in 1996, Conoco estimates that geoscientists spend 5% to 10% less time looking for and preparing data. A goal is to reduce this time by 50%.


What are your top data management challenges today? What were they three years ago?

Joe Cross
In the last three years, our data management challenges have changed, and in some ways they have remained the same. In 1993, we decided to move data off a central database in Ponca City, Oklahoma, USA, and push that responsibility out to the four business units, for efficiency and so they could develop a stake in productive data management. At the time, the challenge was finding a system that could handle the vast data volume. The second challenge was to populate the now-dispersed databases with sources of data other than those in Ponca City. The business units had to learn how to secure sources of data that they traditionally did not have to think about. The third challenge was quality control of data.

Quality control was, and remains, a nontrivial challenge. For example, not every data vendor calls a well by the same name. For Offshore Oil Scouts, an API well number might have 10 or 12 digits, whereas Petroleum Information might use 14 digits. If there's a discrepancy between two well identifiers, you can't just lop off the last three or four numbers; you need to know the information beyond that embedded in the numbering system, such as a company selling a well to another company, or drilling the well deeper. When we have a 40-ft [12-m] difference in kelly bushing elevation from PI or Dwights, we don't know which is right, and it takes some sleuthing to find the right answer.
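A sketch of why such identifiers resist naive truncation: in the API numbering scheme, the 10-digit root identifies the original wellbore, while the extra digit pairs encode sidetracks and completion or recompletion events. Collapsing identifiers to the root, as below, silently merges records that may describe different wellbores; the identifiers shown are invented for the example.

```python
def api10(api: str) -> str:
    """Reduce a 12- or 14-digit API well number to its 10-digit root by
    stripping punctuation and lopping off the sidetrack/event digits."""
    digits = "".join(ch for ch in api if ch.isdigit())
    return digits[:10]

# A sidetrack (suffix "01") and the original hole collapse to the same key,
# exactly the kind of merge that needs human sleuthing to untangle:
print(api10("42-123-45678-01-00"))   # -> 4212345678
print(api10("42-123-45678"))         # -> 4212345678
```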
The second challenge involves moving data around and handling updates. Part of the Geoshare promise is fulfilled, but it is not seamless yet. It is still sometimes a struggle to move data from the central database to the project databases, then propagate changes in project data back to the central database. Likewise, we get updates and corrections from data vendors. Finding the trail of those data, and making corrections to all the instances, can be difficult.

Robert Beham
Three years ago, there was very little validated, commercial data around in digital form. Only a few percent of log curve data for the Gulf of Mexico was available in digital form, so people relied on the old method of hand-posting. There were 3D seismic surveys, but they were fairly simple data sets, a series of zeros and ones, and to manage these meant having sufficient space and horsepower to handle a monotone data set. So the challenge at the time was to develop digital databases and a methodology for handling them.

Today, we have the databases, and some of the data management software. But an understanding of proper data management is still evolving. The complexity of data still presents a problem. For example, we need to do numerous extensions to tables to cover areas needed in Petrotechnical Open Software Corporation (POSC) and Finder data models.

We have the basic direction and tools. The harder challenge today is developing an understanding of the importance of data. The hardest thing to get anyone to do on a project is to document what they did. To do this, management needs to provide incentives for documenting your work; we need to capture both the raw and value-added data.
Jim Sledz
Our challenges have changed in three years because our business drivers have changed. Three years ago, we worked from year to year with no long-term perspective. Oil was $17 a barrel, and all we accomplished were tactics to solve current problems and get to the next year. Now that pricing pressure is less and we can think strategically, we can look three to five years down the road, and when we do that, we start to think about fixing problems, like data management, that will require a long-term commitment.


How do you measure the efficiency of your data management in financial terms? Give an example of how a change in work process or technology provided you with a significant efficiency gain.

Joe Cross
Moving from a legacy system to a commercial system mainly means we are more nimble: we can work faster, respond to opportunities faster. In the past, when I relied on the central database in Ponca City, I would call there and ask for base maps for the area in which I was working. I'd get in a queue with everyone else in the company. Eventually, they would build my project database for me: gather seismic traces, shot points, well numbers and so on. I got what I needed to start working within about a week. Now, if I already have the seismic data loaded, I can be up and working in less than a day.
Robert Beham
Efficiency of data management is difficult to express as a business metric, such as more discoveries. Even if you have the most efficient data management system in the world, geoscientists might not be able to use it to make discoveries. Proving the efficiency of a new data system is largely out of the hands of the data people.
Jim Sledz
Decreases in cycle time and in perceived risk are to me the key metrics. By perceived risk, I mean not only fewer dry holes, but an ability to better understand the risk factors of each prospect. We may not be able to quantify the risk, other than feeling that we have a better understanding of its order of magnitude. I don't think I can directly or realistically relate data management efficiency to barrels of oil produced.


Based on your experience, what are the hidden costs of data management?
All
Taking hidden costs to mean those that
are not normally acknowledged or tracked,
there are many hidden costs to data quality
assurance.
• Propagation of error detection. We tend to load data once, from which point they may be served to multiple projects. If an error is detected, such as the navigation data in a seismic set being off by 2, one project may get the correction, but not all. This results in a cascade of error.
• Removing all interpreters from quality assurance. Geoscientists need to be in the data quality control loop. Data managers are fine at loading data in the right place; they can spot errors like a 15,000-ft [4570-m] log curve placed in a well only 8000 ft [2440 m] deep, and in fact the Finder system will flag errors in constraints like this (a sketch of this kind of automated check follows this list). But information technology (IT) people can't always tell if a fossil is placed in the wrong horizon. "This involves time in meat-space, not in cyberspace," Joe Cross said. "You need to talk to people, find the experts, work in the real world to solve problems like this. The solutions are still in people's minds, not on-line. In our approach, we don't want every geoscientist QCing all data, but a handful of geoscientists working side-by-side with IT people." Jim Sledz said, "Geologists and geophysicists still need to be in the loop."
• Data quality control always takes longer than predicted. "It will humble you to realize what has to happen before you can point to a seismic section and say here is a gas zone," said Joe Cross. "You have seismic from Geco-Prakla, well surface location data from Petroleum Information, a directional survey from DDI, a velocity survey from Velocity Databank, tops from PDI and well log curves from QC Data. That's six vendors. For an interpreter to point to an intersection of a seismic trace and a well, all six vendors have to be spot-on in putting the right data in the right location, and they all have to agree on location conventions."

• Data maintenance. Most people stop thinking about the cost of data once loading is complete, but data require continuous maintenance. "If I get another tape of new data that supplements or updates what we have," said Jim Sledz, "that still costs a fair bit to load and make sure it does not cover up edited Conoco information."
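The constraint check described in the list above lends itself to automation. Here is a minimal sketch of such a test in Python; the field names and the 2% depth tolerance are illustrative assumptions, not the actual Finder rules.

def qc_log_interval(well_td_ft, curve_top_ft, curve_bottom_ft, tolerance=0.02):
    # Flag a log curve whose depth interval is impossible for the well
    # it is filed under; returns a list of human-readable problems.
    problems = []
    if curve_bottom_ft < curve_top_ft:
        problems.append("curve bottom is above curve top")
    if curve_bottom_ft > well_td_ft * (1.0 + tolerance):
        problems.append("curve reaches %.0f ft in a well only %.0f ft deep"
                        % (curve_bottom_ft, well_td_ft))
    return problems

# The example from the discussion: a 15,000-ft curve filed in an 8000-ft well.
print(qc_log_interval(well_td_ft=8000, curve_top_ft=0, curve_bottom_ft=15000))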
In making the transition to outsourcing of
data management, what are the foremost
benefits? Foremost pitfalls?
All
Benefits:
• Importing a growing body of data management expertise. Vendors bring in expertise that we did not have, such as knowledge of Structured Query Language, and how to structure Oracle databases.
• More appropriate and flexible staffing. By having GeoQuest examine our data needs, they can more accurately determine the staffing level needed, and can more easily move people on and off the project as needed.
• We're getting results: data are being loaded, systems we planned to build are being built, and physical libraries are being built and put in order.
Pitfalls:
• Loss of expertise. We hate to lose the expertise we once had, and to see resourceful people leave the company.
• Dip in efficiency. The vendor needs to learn our data management history and how we did things. "No matter how well you plan," Joe Cross said, "you will have a brownout during the transition." To shorten this period, data managers need to work closely with interpreters, at least until a critical mass of validated data is assembled.
• Steep learning curve. The E&P industry does not have a long history of data management skills, so there are few true experts. We are all, operators and vendors, learning as we go.
• Determine the pricing structure based on the tasks. With on-going work, such as a help desk or library filing, paying per person makes sense. But for projects, it is probably more economic to negotiate a per-project cost.
Which data management functions are
considered core expertise?
All
Data management in Conoco today is confined to four areas: large-scale project management, management of outsourcing, contract management and setting data management strategy (which involves defining the
data processes, such as how loading is carried out and how correction factors are
applied). The company has moved toward
the position of setting goals and establishing
techniques, and overseeing implementation
rather than performing implementation.
"My opinion has evolved over the last three years," said Joe Cross. "At the outset, outsourcing didn't seem like a good idea to me. I felt it was critical that we guard the bin, so to speak. We had our own people manipulating, moving, sifting and dispensing data. Now that I've seen the new system work, I can see its benefits in new expertise, and in more flexible headcount. However, some degree of data QCing remains part of my job. For example, if GeoQuest is loading or mapping with data that I've worked on for 17 years, I have a pretty good idea what's right and what's not."
In Conoco, we think of data management as lying along a continuum, and the closer a function comes to interpretation, the more it remains a core function. We have a position called geodata specialist. These specialists work in an exploration or exploitation team, setting up projects, generating base maps, and loading data. They have a direct interaction with geoscientists and engineers. Their role is part of the exploration process, not just an information management process, so it is considered core. But around the corner are people who just load data or just manage physical records, and their role is not considered core. Generally, if we are dealing with simple facts, we are farther from core; if we are dealing with information about facts, we're closer to core. "Project management of data management is a core function, but data management itself is not," said Robert Beham. "Project management needs to be a full-time job, and perhaps by more than one manager per office."
What is your data management dream?
How do you imagine your data management in the year 2000 will be different from
what it is today?
Joe Cross
Seamless and transparent integration
across platforms. I want to go into the Finder
system, draw a rectangle on a map and
everything that pertains to that rectangle is
put into my Openworks project.
Robert Beham
Recording data provenance. I would like
to see clean data residing in a secure, central location, where the data pedigree is
easily determined.
Capture of value-added data. Today we don't capture interpretations in a way that allows them to be easily verified or built upon. There isn't sufficient documentation to explain a rationale for picking tops or net-to-gross for sandstone. Anyone who examines the data can't reconstruct the interpretations, so they tend to be trashed. We need effective documentation to build collective corporate knowledge grounded with evidence, instead of "I remember when..."
Today, there is a small group of people who are keen to document what they do and make it available to the next interpreter. We need to institutionalize that spirit. If I have a source rock distribution map that's just paper in a folder, that works OK, if people know to go to the file. But it would be better to have it digitally, and be attached to a metadata list, a list of data available.
Jim Sledz
In a single sitting, I'd like to be able to determine what data are available for a project, both internally and externally, and get them loaded within a couple of hours, excepting the physical data, which might take a few days. Now I might have to go to nine people, or tap into 15 systems. There is tremendous value for me if I can do it all in one stop.
Common Threads in Data Management

This brief survey shows that even companies of diverse cultural backgrounds and at different points in their development efforts share some perspectives:
• Quantifying the payback: Operating companies had, to varying degrees, metrics in place to gauge costs of conventional data management. In retooling and restructuring data management, new metrics need to be established to match the demands of the new system.
• Cross-platform compatibility: There is a shared vision of a single point of contact with all kinds of data: diverse types (physical, digital, images, text), different generations in the interpretation cycle, and from different disciplines.
• Walking the fine line: There is internal discussion of which functions and skills the operating company retains and which are delegated to a service partner. Even when data management is largely shifted outside, there is debate about the best way to maintain the expertise necessary to oversee data management and plan its strategic direction.
• Quality control: Data quality remains a central concern. Any improvement in work process is incomplete unless it also addresses means to improve and maintain data quality.
JMK


How to Use Borehole Nuclear Magnetic Resonance

It is a rare event when a fundamentally new petrophysical logging measurement becomes routinely available. Recent developments in nuclear magnetic resonance measurement technology have widened the scope of formation fluid characterization. One of the most significant innovations provides new insight into reservoir fluids by partitioning porosity into fractions classed by mobility.

David Allen
Steve Crary
Bob Freedman
Sugar Land, Texas, USA
Marc Andreani
Werner Klopf
Milan, Italy
Rob Badry
Calgary, Alberta, Canada
Charles Flaum
Bill Kenyon
Robert Kleinberg
Ridgefield, Connecticut, USA
Patrizio Gossenberg
Agip S.p.A.
Milan, Italy
Jack Horkowitz
Dale Logan
Midland, Texas, USA
Julian Singer
Caracas, Venezuela
Jim White
Aberdeen, Scotland


For help in preparation of this article, thanks to Greg Gubelin, Schlumberger Wireline & Testing, Sugar Land, Texas, USA; Michael Herron, Schlumberger-Doll Research, Ridgefield, Connecticut, USA; James J. Howard, Phillips Petroleum Research, Bartlesville, Oklahoma, USA; Jack LaVigne, Schlumberger Wireline & Testing, Houston, Texas; Stuart Murchie and Kambiz A. Safinya, Schlumberger Wireline & Testing, Montrouge, France; Carlos E. Ollier, Schlumberger Wireline & Testing, Buenos Aires, Argentina; and Gordon Pirie, Consultation Services, Inc., Houston, Texas.
AIT (Array Induction Imager Tool), APT (Accelerator Porosity Tool), CMR and CMR-200 (Combinable Magnetic Resonance), ELAN (Elemental Log Analysis), ECS (Elemental Capture Spectrometer for geochemical logging), EPT (Electromagnetic Propagation Tool), Litho-Density, MAXIS (Multitask Acquisition and Imaging System), MDT (Modular Formation Dynamics Tester), MicroSFL, RFT (Repeat Formation Tester Tool) and PLATFORM EXPRESS are marks of Schlumberger. MRIL (Magnetic Resonance Imager Log) is a mark of NUMAR Corporation.

Nuclear Magnetic Resonance (NMR) logging is creating excitement in the well logging community. Within the last year, two issues of The Log Analyst were devoted exclusively to NMR.1 The Oilfield Review published a comprehensive article explaining borehole NMR technology less than two years ago.2 Recently, many special workshops and conference sessions on NMR have been held by professional logging societies. Today, Internet Web sites provide current information on NMR logging advances.3
Why all the excitement? The reasons are clear. First, the tools used to make high-quality borehole NMR measurements have improved significantly. The quality of measurements made in the field is approaching that of laboratory instruments. Second, these measurements tell petrophysicists, reservoir engineers and geologists what they need to know: the fluid type and content in the well. The measurements also provide easy-to-use ways to identify hydrocarbons and predict their producibility. Finally, despite the mysterious nature of the NMR technique, the measurement principles are relatively easy to understand.
Important advances have been made in applying NMR measurements to detecting and differentiating all formation fluids, such as free water and bound water, as well as differentiating gas from oil in hydrocarbon-bearing reservoirs. In this article, we review the improvements in tool technology that allow today's tools to measure different porosity components in the formation (see "What Is Sandstone Porosity, and How Is It Measured?" page 36). Then, we evaluate the new, high-speed, cost-effective ways NMR can be used with conventional logging measurements to determine critical formation properties such as bound-water saturation and permeability for predicting production. Finally, we show how NMR measurements, in combination with other logging data, provide a more accurate, quantitative and therefore profitable understanding of formations, including shaly gas sands and those containing viscous oil.


A Rapidly Developing Technology

The first NMR logging measurements were based on a concept developed by Chevron Research. Early nuclear magnetic logging tools used large coils, with strong currents, to produce a static magnetic field in the formation that polarized the hydrogen nuclei (protons) in water and hydrocarbons.4 After quickly switching off the static magnetic field, the polarized nuclei would precess in the weak, but uniform, magnetic field of the earth. The precessing nuclei produced an exponentially decaying signal in the same coils used to produce the static magnetic field. The signal was used to compute the free-fluid index, FFI, that represents the porosity containing movable fluids.
These early tools had some technical deficiencies. First, the sensitive region for the resonance signal included all of the borehole fluid. This forced the operator to use special magnetite-doped mud systems to eliminate the large borehole background signal, an expensive and time-consuming process. In addition, the strong polarizing currents would saturate the resonance receiver for long periods of time, up to 20 msec. This diminished the tool sensitivity to fast-decaying porosity components, making early tools sensitive only to the slow free-fluid parts of the relaxation decay signal. These tools also consumed huge amounts of (continued on page 39)
1. The Log Analyst 37, no. 6 (November-December 1996); and The Log Analyst 38, no. 2 (March-April 1997).
2. Kenyon B, Kleinberg R, Straley C, Gubelin G and Morriss C: "Nuclear Magnetic Resonance Imaging Technology for the 21st Century," Oilfield Review 7, no. 3 (Autumn 1995): 19-33.
3. The SPWLA provides an extensive bibliography of NMR publications developed by Steve Prensky, U.S. Department of Interior Minerals Management Service, located at URL: http://www.spwla.org/, and information on the Schlumberger CMR tools can be found at http://www.connect.slb.com. The NUMAR information Web site is located at URL: http://www.numar.com.
4. Early NMR tools did not contain permanent magnets to polarize the spinning protons.


What Is Sandstone Porosity, and How Is It Measured?

Simply put, porosity is the void space in all rocks where fluids accumulate. In igneous rocks, this space is quite small because the crystallization growth process results in dominant interlocking grain contacts. Similar arguments can be made for metamorphic rocks. In contrast, sandstones are formed by the deposition of discrete particles, creating abundant void space between individual particle grains.

Obviously, hydrocarbons are found only in porous formations. These rocks can be formed by the weathering and erosion of large mountains of solid rock, with the eroded pieces deposited by water and wind through various processes. As the weathered particles are carried farther from the source, a natural sorting of particle or grain size occurs (right). Geology attempts to study how variations in original porosity, created by different grain packing configurations, are altered by post-depositional processes, whether they be purely mechanical or geochemical, or some mixture of the two.

During transportation, the smallest weathered particles of rock, such as fine-grained sands and silts, get carried the farthest. Other minerals, such as mica, which is made of sheets of aluminosilicates, break down quickly through erosion, and are also carried great distances. These sheet silicates give rise to clay minerals, which are formed by weathering, transportation and deposition. Clays can also form in fluid-filled sediments through diagenetic processes: chemical, such as precipitation induced by solution changes; or biological, such as by animal burrows; or physically, through compaction, which leads to dewatering of clays. The final formation porosity is determined by the volume of space between the granular material (next page, top).

[Figure: grain-size classification chart, from boulders and cobbles down through gravel (very coarse to very fine), sand (very coarse to very fine) and silt to clay, with sizes in inches and millimeters.] How weathering and transportation cause changes in scale of grain size. The decomposition of large rocks leads to the development of clastic sedimentary deposits. Water and wind carry the finer grained materials the farthest from their source. Many materials that are resistant to water and chemical alteration become sand and silt grains eventually deposited in sediments. Other layered-lattice minerals in the original igneous rocks, such as micas and other silicates, become transformed into fine-grained clays through degradation and hydrothermal processes.


[Figure: intergranular porosity between water-wet sand grains, with labels for free water, sand grains, capillary-bound water, clay-bound water and oil.] Intergranular porosity. The pores between these water-wet sand grains are occupied by fluids and fine layers of clay. The irreducible water (dark blue) is held against the sand grains by surface tension and cannot be produced. Clay-bound water (shaded dark blue) is also unproducible. Larger pores can contain free water (light blue), and in some cases there are pockets of oil (green) isolated from the sand grains by capillary water. The clay particles, and their associated clay-bound water layer, effectively reduce the diameter of the pore throats. This reduces the formation's ability to allow fluids to flow (permeability).

But all porosity is not equivalent. Clearly, what is special about NMR is that it not only measures the volume of the void space, assuming that it is filled with a hydrogenated fluid, but also allows some inferences to be made about pore size from measurement of relaxation rates. This is the ability to apportion the porosity into different components, such as movable fluids in large pores and bound fluids in small pores.

In sandstone formations, the space surrounding pores can be occupied by a variety of different mineral grains. In the simple case of well-sorted, water-wet sandstones, water that is adhering to the surface of the sand grains is tightly bound by surface tension. Frequently, in these formations, the spaces between sand grains are filled with clay particles. Water also attaches itself to the surfaces of clay particles, and since clays have large surface-to-volume ratios, the relative volume of clay-bound water is large. This water will always remain in the formation, and is known as irreducible water. In pure sands, this is also known as capillary-bound water. All exposed mineral surfaces have adsorbed water, which links particle size with volume of irreducible water.

The NMR measurements tell us two important things. The echo signal amplitudes depend on the volume of each fluid component. The decay rate, or T2, for each component reflects the rate of relaxation, which is dominated by relaxation at the grain surface. T2 is determined primarily by the pore surface-to-volume ratios. Since porosities are not equal (capillary-bound or clay-bound water is not producible, but free water is), two equal zones of porosity, but with entirely different producibility potential, can be distinguished by their T2 time distributions (left).

Hydrogen nuclei in the thin interlayers of clay water experience high NMR relaxation rates, because the water protons are close to grain surfaces and encounter the surfaces frequently. Also, if the pore volumes are small enough that the water is able to diffuse easily back and forth across the water-filled pore, then the relaxation rates will simply reflect the surface-to-volume ratio of the pores. Thus, water in small pores with larger surface-to-volume ratios has fast relaxation rates and therefore short T2 porosity components (next page, top left).

[Figure: T2 signal distributions for two samples: a low-permeability nonproducer (porosity 20%, permeability 8 md) and a high-permeability producer (porosity 19.5%, permeability 280 md), plotted against increasing relaxation time.] Good water and bad water. The amplitude of the NMR T2 measurement is directly proportional to porosity, and the decay rate is related to the pore sizes and the fluid type and viscosity in the pore space. Short T2 times generally indicate small pores with large surface-to-volume ratios and low permeability, whereas longer T2 times indicate larger pores with higher permeability. Measurements were made in two samples with about the same T2 signal amplitudes, indicating similar porosity, but with considerably different relaxation times that clearly identify the sample with the higher permeability.
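To make the partitioning just illustrated concrete, here is a small Python sketch that splits a T2 distribution at fixed cutoffs. The 3-msec and 33-msec sandstone cutoffs are commonly quoted values consistent with the figure above, and the synthetic spectrum is an illustrative assumption; in practice, cutoffs are calibrated to core.

import numpy as np

t2_ms = np.logspace(np.log10(0.3), np.log10(3000.0), 50)          # T2 bins, msec
amplitudes = np.exp(-0.5 * ((np.log10(t2_ms) - 2.0) / 0.3) ** 2)  # synthetic spectrum
amplitudes *= 0.20 / amplitudes.sum()                # scale to 0.20 (20 p.u.) total

clay_bound = amplitudes[t2_ms < 3.0].sum()                        # unproducible
capillary  = amplitudes[(t2_ms >= 3.0) & (t2_ms < 33.0)].sum()    # unproducible
free_fluid = amplitudes[t2_ms >= 33.0].sum()                      # producible

print("total %.3f  clay-bound %.3f  capillary %.3f  free %.3f"
      % (amplitudes.sum(), clay_bound, capillary, free_fluid))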

[Figure: octahedral (A) and tetrahedral (B) sheet structure of clay minerals, showing aluminum, silicon, oxygen, hydrogen and hydroxyls, with cations and interlayer water between the lattice sheets.] Interlayer water and hydroxyls in clay structures. Clay minerals are hydrated silicates of aluminum which are fine grained, less than 0.002 mm. The layers are sheet structures of either aluminum atoms octahedrally coordinated with oxygen atoms and hydroxyls [OH]-, or silica tetrahedral groups. These octahedral (A) and tetrahedral (B) sheets link together to form the basic lattice of clay minerals, either a two-layer structure with one of each sheet (AB), or a three-layer (BAB) structure. In smectite clay, these lattice sheets are then linked together by cations and interlayer water molecules. The hydroxyls are seen as porosity by all neutron tools, but not NMR tools. The interlayer water, trapped between sheets of the clay lattice, is not producible.

[Figure: a pocket of crude oil surrounded by water in the pore space between rock grains.] Oil in the pore space of a water-wet rock. The lack of contact between oil and the rock grain surface allows the oil to take its bulk relaxation time, which for low-viscosity oils will generally be longer than the shortened water relaxation.

On the other hand, in large pores with smaller surface-to-volume ratios, it takes longer for the hydrogen to diffuse across the pores. This will decrease the number of encounters with the surface and lower the relaxation rate, leading to a longer T2 component in the NMR measurement. Free water, found in large pores, is not strongly bound to the grain surfaces by surface tension. Longer T2 time components reflect the volume of free fluid in the formation.

Another example of long T2 time fluid components seen by NMR is the case of oil trapped inside a strongly water-wet pore (above right). Here the oil molecules cannot diffuse past the oil-water interface to gain access to the grain surface. As a result, the hydrogen nuclei in the oil relax at their bulk oil rate, which is usually slow depending on the oil viscosity. This leads to a good separation of the oil and water signals in NMR T2 distributions.1

Neutron tools have traditionally been used for porosity measurements. The scattering and slowing down of fast neutrons in formations lead to the detection of either thermal or epithermal neutrons, depending on the tool design. Hydrogen has an extremely large scattering cross section, and because of its mass, it is particularly effective at slowing down neutrons. Thus, the response of neutron porosity tools is sensitive to the total hydrogen concentration of the formation, which leads to their porosity response (next page, top).

However, there are complications. Small differences in the other elemental neutron cross sections lead to changes in the porosity response for different minerals, called the lithology effect. There is also the effect of thermal absorbers, especially in shales, which cause large systematic increases in the porosity response of thermal neutron tools. Fortunately, epithermal neutron porosity tools, such as the APT Accelerator Porosity Tool, are immune to this effect. Finally, because the neutrons cannot discriminate between hydrogen in the fluids and hydrogen that is an integral part of the grain structure, neutron tools respond to the total hydrogen content of the formation fluids and rocks. Even after all the clay-bound water and surface water are removed, clay minerals contain hydroxyls [OH]- in their crystal structures (kaolinite and chlorite contain [OH]8 and illite and smectite contain [OH]4), making them read especially high to neutron porosity tools.2

Density tools use gamma rays to determine porosity. Gamma ray scattering provides an accurate measurement of the average formation bulk density, and if the formation grain and fluid densities are assumed correctly, the total fluid-filled porosity can be accurately determined (see "Gas-Corrected Porosity from Density-Porosity and CMR Measurements," page 54). Usually, one assumes the grain density for sandstone or limestone and water-filled pores. Errors occur if the wrong grain density is assumed (a lithology effect) or the wrong fluid density is assumed, which occurs with gas-filled pores.

Using Porosity Logs

Comparing one porosity measurement with another leads to new information about the makeup of the formation. Traditionally, neutron and density-porosity logs are combined, sometimes by simple averaging. In many cases, the lithology effects on the neutron porosity tend to cancel those on the density porosity, so that the average derived porosity is correct. If there are only two lithologies in a water- or oil-filled formation, then porosity and the fraction of each rock mineral can be determined using a crossplot technique (a small worked sketch appears below). In gas zones, neutron-derived porosity reads low, if not zero, whereas density-derived porosity reads slightly high. This leads to the classic gas crossover signature, a useful feature.

NMR T2 distributions provide for fluid discrimination. Since fluids confined to small pores near surfaces have short T2 relaxation times and free fluids in large pores have large T2 relaxation times, partitioning the T2 distributions allows discrimination between the different fluid components (next page, bottom). Adding the amplitudes of the observed fluid T2 components together gives a total NMR porosity, which usually agrees with the density porosity in water-filled formations.
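Before continuing to gas zones, here is the small worked sketch of the two-mineral crossplot promised above. With bulk density and an apparent neutron response expressed as volume-weighted mixes of fluid and two minerals, plus a closure condition, porosity and the mineral fractions follow from a 3-by-3 linear solve. The quartz and calcite endpoint values below are illustrative assumptions for water-filled pores, not chartbook numbers.

import numpy as np

RHO_FLUID, RHO_QTZ, RHO_CAL = 1.0, 2.65, 2.71   # densities, g/cm3 (assumed)
HI_FLUID, HI_QTZ, HI_CAL = 1.0, -0.02, 0.0      # apparent neutron responses (assumed)

def crossplot_solve(rho_bulk, phi_neutron):
    # Unknowns: [porosity, V_quartz, V_calcite]; the rows are the density
    # response, the neutron response and the closure condition.
    a = np.array([[RHO_FLUID, RHO_QTZ, RHO_CAL],
                  [HI_FLUID,  HI_QTZ,  HI_CAL],
                  [1.0,       1.0,     1.0]])
    b = np.array([rho_bulk, phi_neutron, 1.0])
    return np.linalg.solve(a, b)

phi, v_qtz, v_cal = crossplot_solve(rho_bulk=2.40, phi_neutron=0.17)
print("porosity %.3f  quartz %.3f  calcite %.3f" % (phi, v_qtz, v_cal))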

In gas zones, like the neutron porosity, NMR depends on the total hydrogen content, and therefore reads low. This leads to an NMR gas-crossover signature.

[Figure: side-by-side comparison of what neutron porosity and NMR porosity tools measure in sand, silt and clay minerals (kaolinite, chlorite, illite, smectite): free water and oil, capillary water, surface and clay-bound water, interlayer water and hydroxyls, with the corresponding T2 distributions.] What porosity tools measure. NMR porosity tools can discriminate between capillary-bound or clay-bound fluids, by their short T2 components, and free, isolated fluids, with longer T2 components (top right). By contrast, neutron porosity tools are sensitive to the total hydrogen content of the formation (left), and cannot distinguish between fluids of different mobility.

[Figure: a sandstone T2 distribution from 0.3 to 3000 msec, partitioned at 3 msec and 33 msec into clay-bound water, capillary-bound water and producible fluids, with total CMR porosity, 3-msec CMR porosity and CMR free-fluid porosity indicated.] NMR T2 time distributions provide a clear picture of the fluid components. In water-filled sandstone formations, the T2 time distribution reflects the pore size distribution of the formation. Shorter T2 components are from water that is close and bound to grain surfaces.

1. Kleinberg and Vinegar, reference 12, main text.
2. The protons, which are part of the hydroxyls found in the clays, are so tightly bound in the crystal structure that the NMR decay is much too fast to be seen by any borehole NMR logging tool.

power to tip the polarized spinning hydrogen nuclei and were not combinable with other logging tools.
Sparked by ideas developed at Los Alamos National Laboratory in New Mexico, USA, the application of NMR technology in the oil field took a giant leap forward in the late 1980s with a new class of NMR logging tools: the pulse-echo NMR logging tools. Now, polarizing fields are produced with high-strength permanent magnets built into the tools (see "Pulse-Echo NMR Measurements," page 44).5
Two tool styles are currently available for commercial well logging. These tools use different design strategies for their polarizing fields. To gain adequate signal strength, the NUMAR logging tool, the Magnetic Resonance Imager Log (MRIL), uses a combination of a bar magnet and longitudinal receiver coils to produce a 2-ft [60-cm] long, thin, cylinder-shaped sensitive region concentric with, and extending several inches away from, the borehole.6
The Schlumberger tool, the CMR Combinable Magnetic Resonance tool, uses a directional antenna sandwiched between a pair of bar magnets to focus the CMR measurement on a 6-in. [15-cm] zone inside the formation, the same rock volume scanned by other essential logging measurements (next page).7 Complementary logging measurements, such as density and photoelectric cross section from the Litho-Density tool, dielectric properties from the EPT Electromagnetic Propagation Tool, microresistivity from the MicroSFL tool and epithermal neutron porosity from the APT Accelerator Porosity Tool, can be used with the CMR tool to enhance the interpretation and evaluation of formation properties. Also, the vertical resolution of the CMR measurement makes it sensitive to rapid porosity variations, as seen in laminated shale and sand sequences.
5. Murphy DP: "NMR Logging and Core Analysis Simplified," World Oil 216, no. 4 (April 1995): 65-70.
6. Miller MN, Paltiel Z, Gillen ME, Granot J and Bouton JC: "Spin Echo Magnetic Resonance Logging: Porosity and Free Fluid Index Determination," paper SPE 20561, presented at the 65th SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, USA, September 23-26, 1990.
7. Morriss CE, Macinnis J, Freedman R, Smaardyk J, Straley C, Kenyon WE, Vinegar HJ and Tutunjian PN: "Field Test of an Experimental Pulsed Nuclear Magnetism Tool," Transactions of the SPWLA 34th Annual Logging Symposium, Calgary, Alberta, Canada, June 13-16, 1993, paper GGG.


[Figure: CMR tool schematic, showing the 14-ft tool with electronic cartridge and bowspring eccentralizer, and the skid-mounted sensor with wear plate, a 6-in. antenna between two permanent magnets, and the sensitive zone in the formation beyond the borehole wall.] CMR tool. The CMR tool (left) is 14 ft [4.3 m] long and is combinable with many other Schlumberger logging tools. The sensor is skid-mounted to cut through mudcake and have good contact with the formation. Contact is enhanced by an eccentralizing arm or by power calipers of other logging tools. Two strong internal permanent magnets provide a static polarizing magnetic field (right). The tool is designed to be sensitive to a volume of about 0.5 in. to 1.25 in. [1.3 cm to 3.2 cm] into the formation, stretching the length of the antenna, about 6 in. [15 cm], providing the tool with excellent vertical resolution. The area in front of the antenna does not contribute to the signal, which allows the tool to operate in holes with a limited amount of rugosity, similar to density tools. The antenna acts as both transmitter and receiver, transmitting the CPMG magnetic pulse sequence and receiving the pulse echoes from the formation.

NMR in the Borehole

Borehole NMR measurements can provide different types of formation porosity-related information. First, they tell how much fluid is in the formation. Second, they also supply details about formation pore size and structure that are usually not available from conventional porosity logging tools. This leads to a better description of fluid mobility: whether the fluid is bound by the formation rock or free to flow. Finally, in some cases, NMR can be used to determine the type of fluid: water, gas or oil.


The NMR measurement is a dynamic one, meaning that it depends on how it is made.
Changing the wait time affects the total
polarization. Changing the echo spacing
affects the ability to see diffusion effects in
the fluids. Transverse relaxation decay times,
T2, depend on grain surface structure, the
relaxivity of the surfaces and the ability of
the Brownian motion of the water to sample
the surfaces. In some cases, when the pore
fluids are isolated from surface contact, the
observed relaxation rate approaches the bulk
fluid relaxation rates.
The first pulsed-echo NMR logging tools, introduced in the early 1990s, were unable to detect the fast components of the resonance decay. The shortest T2 was limited to the 3- to 5-msec range, which allowed measurement of capillary-bound water and free fluids, together known as effective porosity.8 However, clay-bound water, being more tightly bound, is believed to decay at a much faster rate than was measurable with these tools. Within the last year, improvements in these tools enable a factor-of-ten faster decay rate measurement. Now measuring T2 decay components in the 0.1- to 0.5-msec range is possible. These improvements include electronic upgrades, more efficient data acquisition and new signal-processing techniques that take advantage of the early-time information.
For example, NUMAR added a multiplexed timing scheme to their standard tools to boost the signal-to-noise ratio for fast-decay modes. This was achieved by combining a standard pulse-echo train, consisting of 400 echoes with an echo spacing of 1.2 msec, and a rapid burst of short echo trainlets of 8 to 16 echoes with half the standard echo spacing.9 This pulse sequence is repeated 50 times to reduce the noise by a factor of seven (roughly the square root of 50, as expected for stacking). Now, this tool is sensitive to transverse decay components with T2 as short as 0.5 msec.
The Schlumberger CMR tool has also had hardware improvements and signal-processing upgrades.10 The signal-to-noise ratio per echo has been improved by 50% in the new resonance receiver. Also, the echo acquisition rate has been increased 40%, from 0.32-msec spacing to 0.2 msec, increasing the CMR tool's ability to see fast decay times (next page). In addition, the signal-processing software has been optimized for maximum sensitivity to the short T2 decays. As a result, a new pulsed-echo tool, called the CMR-200 tool, can measure formation T2 components as short as 0.3 msec in continuous logging modes and as short as 0.1 msec in stationary logging modes.
Total Porosity

NMR measurements now have the ability to see more of the fluids in the formation, including the sub-3-msec microporosity associated with silts and clays, and intraparticle porosity found in some carbonates. Therefore, the measurement provided by NMR tools is approaching the goal of a lithology-independent total porosity measurement for evaluating complex reservoirs. Total porosity using NMR T2 decay amplitudes depends on the hydrogen content of the formation, so in gas zones NMR porosity reads low, because the hydrogen density in gas is less than in water or oil and there is incomplete gas polarization.


[Figure: CMR log from a test well, with bound-fluid volume (BFV) curves for echo spacings TE of 0.32, 0.28 and 0.2 msec in the left track, total CMR porosity, 3-msec CMR porosity and CMR free fluid in the middle track, and T2 distributions from 0.3 to 3000 msec with the T2 cutoff in the right track, over zones A and B.] How increased echo rate improves CMR ability to see early-time decay from small pores. The CMR tool was run in a shallow Cretaceous Canadian test well at three different echo spacings: 0.32 msec, 0.28 msec and 0.20 msec. As echo spacings decrease, the total observed porosity (middle track) read by the tool increases in the shaly intervals, containing small pores, because the ability to see the sub-3-msec T2 components (right track) increases with echo rate. This is verified by the increased CMR-200 bound-fluid volume (BFV) curves (left track). In the two sand zones, A and B, the long T2 components seen in the time distributions correspond to increases in observed CMR free fluid (middle track).

The difference between total NMR porosity and density-porosity logs provides an indicator of gas. Other applications based on NMR porosity discrimination are permeability logs and irreducible water saturation. In the future, the improved T2 sensitivity of NMR porosity logs may permit accurate estimates of clay-bound-water volumes for petrophysical interpretation, such as the calculation of hydrocarbon saturation through Dual-Water or Waxman-Smits models.
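The permeability application mentioned above is commonly implemented with a relation of the Timur-Coates form sketched below. The coefficient and exponents are textbook defaults for sandstones, and in practice all of them are calibrated to core; this is not presented as the exact relation behind the article's examples.

def coates_permeability_md(phi_pu, ffi_pu, bfv_pu, c=10.0):
    # k in millidarcies from total porosity (p.u.), free-fluid porosity
    # (p.u.) and bound-fluid porosity (p.u.); c is a calibration constant.
    if bfv_pu <= 0:
        raise ValueError("bound-fluid volume must be positive")
    return ((phi_pu / c) ** 2 * (ffi_pu / bfv_pu)) ** 2

# Two zones with equal porosity but different free-fluid fractions:
print(coates_permeability_md(20.0, 15.0, 5.0))   # ~144 md, producible
print(coates_permeability_md(20.0, 5.0, 15.0))   # ~1.8 md, bound-fluid dominated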
A simple example from South America
illustrates the improved sensitivity to shale
resulting from the increased ability to measure fast-decay components (next page, top
left). Density-derived porosity was calculated
assuming a sandstone matrix density of
2.65 g/cm3, and thermal neutron porosity was
computed also assuming a sandstone matrix.
The CMR-200 T2 distributions shown in
track 3 were used to compute total porosity,
TCMR; a traditional CMR 3-msec effective
porosity curve, CMRP, shown in track 2; and
a 12-msec bound-fluid porosity BFV curve,
based on the fast-decaying portion of the T2
distribution, shown in track 1.
All the porosity logs in track 2 agree in
Zone B, indicating a moderately clean
water-filled sandstone reservoir. This shale-free zone makes relatively little contribution to the relaxation time distribution below the 3-msec detection limit of the early pulse-echo CMR tools. However, in the shale,
Zone A, the picture changes. Here, the bulk
of the porosity T2 response shifts to a much
shorter part of the T2 distribution that is easily seen in track 3. In the shale zone, the
new CMR-200 total porosity curve in track 2
is sensitive to the fast-decaying components
and agrees well with the density porosity.
The older CMR 3-msec porosity in track 2
misses the fast-decaying components of the
T2 distribution between 0.3 and 3 msec,
and therefore reads 10 p.u. lower in the
shale zone.
The large sub-3-msec porosity contribution suggests that the shales contain clay
minerals with a high bound-water content.
8. Miller et al, reference 6.
9. Prammer MG, Drack ED, Bouton JC, Gardner JS, Coates GR, Chandler RN and Miller MN: "Measurements of Clay-Bound Water and Total Porosity by Magnetic Resonance Logging," paper SPE 36522, presented at the 1996 SPE Annual Technical Conference and Exhibition, Denver, Colorado, USA, October 6-9, 1996.



[Figure: PLATFORM EXPRESS wellsite display with gamma ray, borehole caliper and 12-msec bound-fluid volume (BFV) in track 1; neutron porosity, density porosity, total CMR porosity (TCMR) and 3-msec CMR porosity (CMRP) in track 2; and T2 distributions from 0.3 to 3000 msec in track 3, over shale Zone A and sand Zone B.] Total porosity logging with the PLATFORM EXPRESS tool differentiates sands and shales. The porosity logs are shown in track 2 of the wellsite display. Both neutron and density porosity were derived assuming a sandstone matrix. Total CMR porosity (TCMR) correctly finds the tightly bound shale porosity seen in the short T2 distributions shown in track 3. The neutron porosity log reads too high in the shale interval, Zone A, due to neutron absorbers in the shale. The gamma ray and bound-fluid porosity, BFV (all porosity with T2 below 12 msec), in track 1 show that the CMR measurement provides an alternative method for identifying shale zones.

The thermal neutron log is reading too high in the shale zone because of the large thermal absorption cross section of the shale, probably caused by some trace absorption elements such as boron or gadolinium associated with the clays.
In track 1, there is a strong correlation between the gamma ray log and the bound-fluid porosity curve, BFV, obtained using everything below a 12-msec T2 cutoff. This suggests another interesting application for total porosity measurements: the porosity associated with short T2 components can provide a good shale indicator that is independent of natural radioactivity in the formation. This is significant because there are many important logging environments with clean sands that contain radioactive minerals. In these environments, gamma ray logs


[Figure: log display with the volume of gas, Vgas, in track 1; neutron, density, total CMR and gas-corrected porosity curves in track 2; and T2 distributions in track 3, over zones XX410 to XX440 including a gas interval.] Using total CMR porosity and density to find gas. In track 2, the deficit between total porosity (red curve) and density porosity (blue curve) in a shaly sand can be used to identify a gas zone. The traditional neutron-density crossover is suppressed by the shaliness, which opposes the gas effect in the thermal-neutron log (green curve). The T2 time distributions show large contributions from short relaxation times below 3 msec coming from the clay-bound water in the shales. The gas-corrected porosity (dashed black curve) is always less than the density porosity and greater than the total CMR porosity.

are not useful for differentiating sands and shales. At best, gamma ray logs are only qualitative shale indicators, and are usually used to estimate clay corrections used for computing effective porosity.
Finding Gas in Shaly Sands: A south Texas well illustrates the value of total CMR porosity logging in detecting gas in shaly sand formations. The interval consists of a shale overlying a shaly gas sand (above right). Looking for gas with the traditional thermal neutron-density crossover is unreliable or impossible in shaly formations, because thermal neutron absorbers in shales force the thermal neutron porosity tools to read too high, as can be seen in this example. This effect on the neutron log suppresses the gas signature, which means the neutron-porosity curve never consistently drops below the density-porosity curve when the logging tools pass a gas zone.

Fortunately, total porosity NMR logging works well in these environments, simplifying the interpretation. Starting from the bottom of the interval, in the lower sand, Zone C, the total CMR porosity, TCMR, agrees with the density porosity. However, in Zone B at the top of this shaly sand, the CMR-derived porosity drops, crossing below the density-derived porosity. This is the NMR-density logging crossover, a gas signature. The NMR porosity signal drops in the gas zone due to the reduced concentration of hydrogen in the gas and the long gas polarization time, leading to incomplete polarization of the gas. The logged density-derived porosity, which assumed water-filled pores, reads slightly high in the gas zone, accentuating the crossover effect. Since gas affects both CMR porosity and density porosity, the CMR-based gas signature works effectively in shaly sands.


[Figure: PLATFORM EXPRESS display over zones A, B (gas) and C, with hole size, gamma ray, 3-msec BFV and total BFV in track 1; CMR 3-msec porosity, total CMR porosity, density porosity and neutron porosity in track 2; and T2 distributions from 0.3 to 3000 msec with MAXIS and reconstruction amplitudes in the remaining tracks.] Detecting gas using total porosity logging with the PLATFORM EXPRESS tool. A dramatic improvement in agreement between the CMR-200 total porosity (solid black), compared to the 3-msec CMR porosity (dotted black), and density porosity (red), shown in track 2, is obtained by including the fast-decaying shale-bound porosity components from the new CMR-200 T2 distribution shown in track 4. This enhances the ability to use the CMR total porosity and density-porosity crossover as a flag to detect gas (pink-shaded area in track 2). Improvements in the signal processing are obvious when the CMR-200 total bound-fluid log (solid black) is compared to the old 3-msec CMR bound-fluid log (dotted black) shown in track 1.

Based on the petrophysical responses for CMR total porosity and density porosity, a new gas-corrected porosity, φgas-corr, shown in track 2, and the volume of gas, Vgas, shown in track 1, are derived (see "Gas-Corrected Porosity from Density-Porosity and CMR Measurements," page 54).11 The new gas-corrected porosity, computed from the TCMR and density porosity shown in track 2, gives a more accurate estimate of true

porosities in the gas zones. NMR works here, where the traditional neutron tool doesn't, because NMR porosity responds only to changes in hydrogen concentration, and not to neutron absorption in the shales. In the shale, Zone A, the total CMR and gas-corrected porosities again agree with the density-porosity curve, as expected.
Finally, in the lowest interval, Zone C, below the gas sand, there is a transition into poorer-quality sand with lower permeability, as evidenced by the short relaxation times seen in the T2 distributions. This zone shows little indication of gas because the total CMR porosity, gas-corrected porosity and density-porosity logs are in agreement.
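The derivation of the gas-corrected porosity is given in the sidebar on page 54, which is not reproduced here. As a rough illustration of why the corrected curve always falls between its two inputs, the sketch below uses a simple weighted density-NMR combination; the 0.6/0.4 weights are a commonly quoted approximation for typical gas properties and are an assumption, not the article's formula.

def gas_corrected_porosity(phi_density, tcmr, w=0.6):
    # Density porosity reads high in gas, total NMR porosity reads low;
    # a weighted combination lands between them.
    return w * phi_density + (1.0 - w) * tcmr

def gas_flag(phi_density, tcmr, threshold=0.02):
    # Flag gas where total NMR porosity falls well below density porosity.
    return (phi_density - tcmr) > threshold

phi_d, tcmr = 0.30, 0.18   # a gas sand: large deficit between the two logs
print(gas_flag(phi_d, tcmr), round(gas_corrected_porosity(phi_d, tcmr), 3))
# True 0.252, between the TCMR and density-porosity readings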
Another striking example, from a British Gas well in Trinidad, shows how the CMR-derived total porosity was used with density-derived porosity to detect gas in shaly sands (left). The interval contains shaly water zones at the bottom, Zone C, followed by a thin, 6-ft [2-m] clean sand, Zone B, topped by a section of shale, Zone A. There is a gas-water contact at XX184 ft in the lower part of the clean sand in Zone B. The CMR total porosity curve, shown in track 2, overlies the density-porosity curve throughout the water-filled shaly sand intervals. In the clean sand, which contains gas, there is a large separation between the CMR total porosity and the density-porosity curves. Again, the reduction in the CMR porosity response is due to the reduced concentration of hydrogen in the gas. The large crossover of these two logs provides a clear flag for finding gas in the reservoir.
The NMR logs shown in this example were obtained with the early pulse-echo CMR tool. Comparing the early tool's 3-msec effective porosity log with the new total porosity log demonstrates the improvement provided by the new CMR total porosity algorithms. The total porosity and effective porosity curves, shown in track 2, are similar in the clean sand, but the 3-msec porosity log misses the fast-decaying porosity in the shale zones. Similarly, the bound-fluid porosity log based on the early tool's 3-msec limited T2 distributions, shown in track 3, is much noisier and misses most of the bound-fluid porosity in the shale zones. The new CMR T2 distributions in track 4 show large contributions from the fast-decaying shale with bound-fluid components between 0.3 and 3 msec. Like the previous example, the new total porosity-derived bound-fluid log now correlates well with the gamma ray, and can be used as an improved shale indicator. As commercial CMR tools are upgraded to CMR-200 hardware, logging data and interpretation results, like those in this example, will improve.
(continued on page 46)
10. Freedman R, Boyd A, Gubelin G, Morriss CE and Flaum C: "Measurement of Total NMR Porosity Adds New Value to NMR Logging," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper OO.
11. Bob Freedman, personal communication, 1997.


Pulse-Echo NMR Measurements

A feature common to second-generation NMR logging tools is the use of an advanced pulse-echo spin-flipping scheme designed to enhance the measurement. This scheme, first developed nearly fifty years ago for laboratory NMR measurements, works in the following way.1

[Figure: a spinning proton precessing about a static magnetic field (left) and a spinning top precessing in a gravitational field (right).] Precessing protons. Hydrogen nuclei (protons) behave like spinning bar magnets. Once disturbed from equilibrium, they precess about the static magnetic field (left) in the same way that a child's spinning top precesses in the Earth's gravitational field (right).

The Source: All hydrogen nuclei in water [H2O], gas such as methane [CH4], and oil [CnHm] are single, spinning, electrically charged protons. These spinning protons create magnetic fields, much like currents in tiny electromagnets.

Proton Alignment: When a strong external magnetic field, from the large permanent magnet in a logging tool, passes through the formation with fluids containing these protons, the protons align along the polarizing field, much like tiny bar magnets or magnetic compass needles. This process, called polarization, increases exponentially in time with a time constant, T1.2

Spin Tipping: A magnetic pulse from a radio-frequency antenna rotates, or tips, the aligned protons into a plane perpendicular to the polarization field. The protons, now aligned with their spin axis lying in a plane transverse to the polarization field, are similar to a spinning top tipped in a gravitational field, and will start to precess around the direction of the field. The now-tipped spinning protons in the fluid will precess around the direction of the polarization field produced by the permanent magnet in the logging tool (above).

The Effects of Precession: The precession frequency, or the resonance frequency, is called the Larmor frequency and is proportional to the strength of the polarization field. The precessing protons, still acting like small magnets, sweep out oscillating magnetic fields just as many radio antennas transmit electromagnetic fields. The logging tools have receivers connected to the same antennas used to induce the spin-flipping pulses. The antennas and receivers are tuned to the Larmor frequency, about 2 MHz for the CMR tool, and receive the tiny radio-frequency signal, a few microvolts, from the precessing protons in the formation.
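For reference, the two relations just described can be written compactly; this is standard NMR physics rather than an equation quoted from the sidebar:

\[ M(t) = M_0\left(1 - e^{-t/T_1}\right), \qquad f_L = \frac{\gamma}{2\pi}\,B_0 \]

where M_0 is the equilibrium magnetization and \gamma/2\pi \approx 42.58 MHz/T for protons. The 2-MHz operating frequency quoted for the CMR tool therefore corresponds to a static field of roughly B_0 \approx 2/42.58 \approx 0.047 T, about 470 gauss, in the sensitive zone.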

A Faint Signal from the Formation: In a perfect world, the spinning protons would continue to precess around the direction of the external magnetic field until they encountered an interaction that would change their spin orientation out of phase with others in the transverse plane, a transverse relaxation process. The time constant for the transverse relaxation process is called T2, the transverse decay time. Measuring the decay of the precessing transverse signal is the heart of the NMR pulse-echo measurement.

Unfortunately, it is not a perfect world. The polarization field is not exactly uniform, and small variations in this polarization field will cause corresponding variations in the Larmor precession frequency. This means that some protons will precess at different rates than others. In terms of their precessional motion, they become phase incoherent and will get out of step and point in different directions as they precess in the transverse plane (next page). As the protons collectively get out of step, their precessing fields add together incoherently, and the resonance signal decays at an apparent rate much faster than the actual transverse relaxation rate, due to the dephasing process described above.3

Spin-Flipping Produces Pulse Echoes: A clever scheme is used to enhance the signal and to measure the true transverse relaxation rate by reversing the dephasing of the precessing protons to produce

Echo pulse amplitude

The T2 Resonance DecayThe pulse-echo technique used in todays logging tools is called the

6a

CPMG sequence, named after Carr, Purcell, Meiboom and Gill who refined the pulse-echo scheme.4

6b

It compensates for the fast decay caused by


2

reversible dephasing effects. However, each subsequent echo will have an amplitude
that is slightly smaller than the previous one

Pulsed
magnetic field

Free induction
decay

Time, sec

Spin echoes

CPMG Pulse Sequence

Irreversible Relaxation Decay RatesMeasuring

180 pulses

90 pulse
1
0.16

3
0.32

because of remaining irreversible transverse relaxation processes.

600

Start next
CPMG
sequence
Time, msec

0.32

echo amplitudes determines their transverse magnetization decay rate. The time constant T2 characterizes the transverse magnetization signal decay. The
decay comes from three sources;
intrinsic the intrinsic bulk relaxation
rate in the fluids,

2. Spins in
highest static
field precess
fastest

1. 90 pulse
starts spin
precession

4. 180
pulse
reverses

3. Spins
fan out

surface the surface relaxation rate,


a formation environmental effect
diffusion the diffusion-in-gradient
effect, which is a mix of environmental
and tool effects.
Bulk-fluid relaxation is caused primarily by the
natural spin-spin magnetic interactions between

5. Spins in
highest static
field precess
fastest

6a. Spins
return at the
same rate they
fanned out

or

6b. Some
spins dephased:
echo amplitude
reduced

neighboring protons. The relative motions of two


spins create a fluctuating magnetic field at one spin
due to the motion of the other. These fluctuating
magnetic fields cause relaxation. The interaction is

Pulse-echo sequence and refocusing. Each NMR measurement comprises a sequence of transverse magnetic
pulses transmitted by an antennacalled a CPMG pulse-echo sequence (middle). Each CPMG sequence starts
with a pulse that tips the protons 90 and is followed by several hundred pulses that refocus the protons by flipping
them 180. This creates a refocusing of the dephased spins into an echo. The reversible fast decay of each echo
the free induction decayis caused by variations in the static magnetic field (top). The irreversible decay of the
echoesas each echo peak decays relative to the previous oneis caused by molecular interactions and has a
characteristic time constant of T2transverse relaxation time. The circled numbers
correspond to steps numbered in the race analogy.
Imagine runners lined up at the start of a race (bottom). They are started by the 90 pulse (1). After several laps,
the runners are spread around the track (2, 3). Then the starter fires a second pulse of 180 (4, 5) and the runners
turn around and head in the other direction. The fastest runners have the farthest distance to travel and all of them
will arrive at the same time if they return at the same rate (6a). With any variation in speed, the runners arrive back
at slightly different times (6b). Like the example of runners, the process of
spin reversals is repeated hundreds of times during an NMR measurement. Each time the echo amplitude is less
and the decay rate gives T2 relaxation time.

what is called a spin echo. This is done applying a

fastest one last. Soon the slowest ones catch up with

180 spin flipping magnetic pulse a short timeat

the fastest, resulting in all spins precessing in phase

half the echo spacingafter the spins have been

again and producing a strong coherent magnetization

tipped into the transverse plane and have started to

signal (called an echo) in the receiver antenna. This

dephase. By flipping the spins 180, the protons con-

process, known as the pulse-echo technique, is

tinue to precess, in the same transverse plane as

repeated many times; typically 600 to 3000 echoes

before, but now the slowest one is in first and the

are received in the CMR tool.

Summer 1997

most effective when the fluctuation occurs at the Larmor frequency, 2 MHz for the CMR toola very slow
motion on the molecular time scale.
Molecular motions in water and light oils are
much more rapid, so the relaxation is very inefficient
with long decay times. As the liquids become more
viscous, the molecular motions are slower, and
therefore are closer to the Larmor frequency. Thus
viscous oils relax relatively efficiently with short T1
1. Hahn EL: "Spin Echoes," Physical Review 80, no. 4 (1950): 580-594.
2. The time constant for this process, T1, is frequently known as the spin-lattice decay time. The name comes from solid-state NMR, where the crystal lattice gives up energy to the spin-aligned system.
3. The observed fast decay, due to the combined components of irreversible transverse relaxation decay interactions and the reversible dephasing effect, is frequently called the free induction decay.
4. Carr HY and Purcell EM: "Effects of Diffusion on Free Precession in Nuclear Magnetic Resonance Experiments," Physical Review 94, no. 3 (1954): 630-638.
Meiboom S and Gill D: "Modified Spin-Echo Method for Measuring Nuclear Relaxation Times," The Review of Scientific Instruments 29, no. 8 (1958): 688-691.


Grain surface relaxation. Precessing protons diffuse about the pore space, colliding with other protons and with grain surfaces (left). Every time a proton collides with a grain surface there is a possibility of a relaxation interaction occurring. Grain surface relaxation is the most important process affecting relaxation times. Experiments show that when the probability of colliding with a grain surface is high, in small pores (center), relaxation is rapid, and when the probability of colliding with a grain surface is low, in large pores (right), relaxation is slower.

It should be noted that for liquids with viscosity less than 1 cp, T2 does not change much, and even decreases for low viscosity. This is due to the diffusion-in-gradient mechanism, which is strongest for liquids with the largest diffusion coefficient, that is, the lowest viscosity. This effect is also enhanced by long echo spacing. The diffusion-in-gradient mechanism does not affect T1.5

Relaxation time versus viscosity. Relaxation time (T1 or T2, sec) is plotted against viscosity (cp). The bulk relaxation of crude oil can be estimated from its viscosity at reservoir conditions. T2 values are shown for echo spacings, TE, of 0.2, 0.32, 1 and 2 msec, computed for the CMR tool with a 20-gauss/cm magnetic field gradient. Diffusion-in-gradient effects, which depend on echo spacing, dominate T2 rates for low-viscosity liquids.

Fluids in contact with grain surfaces relax at a much higher rate than their bulk rate. The surface relaxation rate depends on the ability of the protons in the fluids to make multiple interactions with the surface. For each encounter with the grain surface, there is a high probability that the spinning proton in the fluid will be relaxed through atomic-level electromagnetic field interactions. For the surface process to dominate the overall relaxation decay, the protons in the fluid must make many random diffusion (Brownian motion) trips across the pores in the formation (above). They collide with the grain surface many times until a relaxation event occurs.

Finally, there is relaxation from diffusion in the polarization magnetic field gradient. Because protons move around in the fluid, the compensation by the CPMG pulse-echo sequence is never complete. Some protons will drift into a different field strength during their motion between spin-flipping pulses, and as a result they will not receive the correct phase adjustment for their previous polarization field environment. This leads to a further, though not significantly large, increase in the T2 relaxation rates for water and oil. Gas, because of its high diffusion mobility, has a large diffusion-in-gradient effect. This is used to differentiate gas from oil.6

The pulse-echo measurements are analyzed in terms of multiple exponentially decaying porosity components. The amplitude of each component is a measure of its volumetric contribution to porosity.
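Conceptually, this analysis inverts the measured echo train for the amplitudes of a fixed set of exponential components. The following sketch, written in Python with made-up acquisition parameters and a synthetic two-component echo train, recovers a T2 distribution by non-negative least squares on a logarithmic grid of relaxation times; commercial processing adds regularization and depth stacking, so this is a minimal illustration of the idea, not the CMR algorithm.

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative acquisition: 0.2-msec echo spacing, 1800 echoes.
TE = 0.2e-3
t = TE * np.arange(1, 1801)

# Synthetic decay: 10 p.u. of bound fluid (T2 = 3 msec) plus
# 20 p.u. of free fluid (T2 = 200 msec), with measurement noise.
echoes = 10 * np.exp(-t / 3e-3) + 20 * np.exp(-t / 200e-3)
echoes += np.random.normal(0.0, 0.5, t.size)

# T2 grid spanning 0.3 msec to 3 sec, the range quoted for total
# CMR porosity processing elsewhere in this article.
T2_grid = np.logspace(np.log10(0.3e-3), np.log10(3.0), 40)
kernel = np.exp(-t[:, None] / T2_grid[None, :])

# Non-negative amplitudes are the porosity components, in p.u.
amplitudes, _ = nnls(kernel, echoes)
print("total porosity:", amplitudes.sum())
print("bound fluid (< 33 msec):", amplitudes[T2_grid < 33e-3].sum())
```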


5. Kleinberg RL and Vinegar HJ: "NMR Properties of Reservoir Fluids," The Log Analyst 37, no. 6 (November-December 1996): 20-32.
6. Akkurt R, Vinegar HJ, Tutunjian PN and Guillory AJ: "NMR Logging of Natural Gas Reservoirs," The Log Analyst 37, no. 6 (November-December 1996): 33-42.

Total Porosity for Better Permeability Answers

In the North Sea, micaceous sandstones challenge density-derived porosity interpretation and permeability analysis, because grain densities are not well known. Here, CMR-derived total porosity provides a much better match to core porosity than conventional porosity logging tools (next page, top). In addition, permeability data derived from CMR measurements are of considerable value; other ways of measuring this critically important reservoir parameter, such as coring and testing, involve high cost or high uncertainty.
Experience in these environments shows that wellsite computations of CMR porosity and permeability using the default parameters agree well with core data in at least 75% of wells. Typically, the default parameters assume a fluid hydrogen index of unity (for water), and the Timur-Coates equation with a 33-msec T2 cutoff is used for computing the bound-fluid log.12 In most of these wells, the CMR tool is now being used to replace some of the coring, especially in frontier offshore drilling operations where coring can cost up to $6000 per meter.
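The Timur-Coates formula itself is not spelled out here, so the following sketch assumes one commonly published form of the Coates equation, k = (phi/C)^4 * (FFV/BFV)^2, with total porosity in porosity units, C near 10, and k in millidarcies; the constant and the example values are illustrative assumptions, not the wellsite defaults referred to above.

```python
# Hedged sketch of a Timur-Coates permeability estimate from NMR outputs.
# Assumed form: k = (phi / C)**4 * (FFV / BFV)**2, with total porosity phi
# in porosity units (p.u.), C ~ 10, and k in millidarcies. The 33-msec T2
# cutoff splitting bound from free fluid is the sandstone default quoted
# in the text; the constant C and the example values are assumptions.

def timur_coates_permeability(total_phi_pu, bfv_pu, c=10.0):
    """Permeability in md from total porosity and bound-fluid volume (p.u.)."""
    if bfv_pu <= 0 or total_phi_pu <= 0:
        return float("nan")                     # undefined without bound fluid
    ffv_pu = max(total_phi_pu - bfv_pu, 0.0)    # free-fluid volume
    return (total_phi_pu / c) ** 4 * (ffv_pu / bfv_pu) ** 2

# Example: 25 p.u. total porosity with 8 p.u. bound fluid -> ~180 md.
print(timur_coates_permeability(25.0, 8.0))
```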
An offshore Gulf of Mexico gas well,
drilled with oil-base mud on the flank of a
large salt dome, provided an opportunity to
evaluate fluid contacts in an established oil
and gas field and determine hydrocarbon
productivity in low-resistivity zones. Previous deep wells on this flank encountered
drilling difficulties leading to poor-to-fair
boreholes and degraded openhole log quality. This resulted in ambiguous petrophysical analysis and reservoir characterization.
The combination of CMR measurements
and other logs provides a straightforward
description of the petrophysics in this well
(next page, bottom). These logs show many
high-resistivity zones in track 2 and density-neutron crossovers in track 3, which signal
these intervals as potentially gas-producing.
Total CMR porosity is low in the gas intervals. There are some low-resistivity intervals
that could produce free water. Of special
interest are the low-resistivity zones at the
base of the gas sand in Zone C, which may
indicate a water leg, and the low-resistivity
sand in Zone D. The CMR bound-fluid
porosity increases in these zones.
12. The hydrogen index is the volume fraction of fresh water that would contain the same amount of hydrogen. The Timur-Coates equation is a popular formula for computing permeability from NMR measurements; its implementation uses the ratio of the free-fluid to bound-fluid volumes. It was first introduced in Coates G and Denoo S: "The Producibility Answer Product," The Technical Review 29, no. 2 (1981): 54-63.


Permeability logging using CMR total porosity in North Sea micaceous sandstones. A good match (track 2) between core- and CMR-derived permeability using the default Timur-Coates equation is common in many North Sea reservoirs. This example is from an oil zone drilled with oil-base mud.

Gas production in the Gulf of Mexico. Bimodal CMR T2 distributions in track 4 show effects of oil-base mud invasion, with long T2 and large bound-fluid components below the 33-msec T2 cutoff. Neutron-density crossover clearly shows gas zones. The low CMR signal is due to the low hydrogen content and long polarization time of the gas. The bound-fluid measurement was used with the resistivity data to determine movable water in the lower resistivity zones.


ELAN interpretation analysis. The CMR-resistivity interpretation suggests that there is no movable water in the entire logged interval. The top three sand zones, A, B and C, all with high permeability determined by the CMR tool, track 2, produced 20 MMcf/D of gas with 780 B/D condensate and no water. The reserves indicated in the lower sand Zone E will be completed at a later time.

An ELAN Elemental Log Analysis interpretation, which combines resistivity and CMR data, shows that there is little free water in this entire interval (left). Most of the water contained in these zones appears irreducible, because the CMR bound-fluid volume matches the total water volume derived from the resistivity logs. The upper three sands were completed, and the CMR total porosity, bound-fluid and permeability logs allowed the operator to confidently perforate Zones A, B and C.13 The initial production from the upper three sand zones was 20 MMcf/D of gas, with 780 B/D [124 m3/d] of condensate and less than 1% water, confirming the CMR interpretation. The lower sands, Zones D and E, have not been completed because of downdip oil production. However, the interpretation results from these zones have been included in the overall well reserves analysis.
Another Gulf of Mexico example, from an infill development well in a faulted anticline reservoir, helped the operator determine whether a zone believed to be safely updip from the waterdrive in a mature, low-resistivity oil- and gas-producing zone would produce salt water or hydrocarbons. The CMR porosities and T2 distributions have been added to the standard density-neutron PLATFORM EXPRESS wellsite display (next page).
Several sandstone intervals, Zones A, B and C, are easily identified by their longer T2 distributions, which may correspond to hydrocarbons isolated in water-wet rocks or large water-filled pores. Zone C is more conductive, indicating water, but the CMR total porosity reads less than the density porosity, indicating gas. There are high-resistivity sands in Zones A and B that are difficult to interpret from raw logs alone. The picture is confusing.
A petrophysical analysis combining CMR and PLATFORM EXPRESS data over this interval reveals the nature of the reservoir complexities: lithologic changes and the presence of a waterdrive flood front (page 50, top). This interpretation, including the CMR-derived permeability (verified by subsequent core analysis) along with irreducible water analysis, shows that the upper resistive sand, Zone A, is productive and contains oil with little free water. The middle, less resistive sand, Zone B, contains less oil with a significant amount of potentially producible water. The low-resistivity sand, Zone C, appears to contain some oil, but with a large amount of free water most likely coming from the waterflood.
13. Whenever oil-base mud is used in water-wet rocks, it is easy to identify bound versus free fluid. Bound-fluid water has a short T2, and oil-base mud flushing into the free-fluid pore space has a long T2. The T2 cutoff is obvious, making free fluid and bound fluid easy to distinguish for Timur-Coates permeability calculations.


The operator, hoping that the logs might have been affected by deep invasion, completed and tested the lower sand, Zone C. However, it produced only salt water, consistent with the petrophysical interpretation. After a plug was set above this zone, the upper sands were completed and now produce 100 BOPD [16 m3/d] with only a 10% water cut, which is also in agreement with the expectations from the ELAN interpretation. It is clear that combining the porosity discrimination and permeability information from CMR measurements with other logs, such as resistivity and nuclear porosity, helps explain what is happening to the waterdrive and hydrocarbons in pay zones in this common, but complex, reservoir.


Logging for Bound Fluid

Bound-fluid logging, a special application of NMR porosity logging, is based on the ability of NMR to distinguish bound-fluid porosity from free- or mobile-fluid porosity. Bound-fluid porosity is difficult to measure by conventional logging methods. A full NMR measurement requires a long wait time to polarize all components of the formation, and a long acquisition time to measure the longest relaxation times. However, experience has shown that the T2 relaxation time of the bound fluid is usually less than 33 msec in sandstone formations and less than 100 msec in carbonate formations. In fast bound-fluid NMR logging, it is possible to use short wait times by accepting less accuracy in measuring the longer T2 components. In addition, a short echo spacing and an appropriate number of echoes reduce the acquisition time and ensure that the measurement volume does not change significantly because of the faster tool movement. This logging mode can acquire NMR data at speeds up to 3600 ft/hr [1100 m/hr] because of the short relaxation times of the bound fluid.
The bound-fluid volume can be used in conjunction with other high-speed logging measurements to calculate two key NMR answers: permeability and irreducible water saturation, Swirr. Typically, the density log is used in shaly sands, and the density-neutron crossplot porosity log is used in gas sands, carbonates and complex lithologies. The epithermal neutron porosity from the APT tool is especially accurate because it is immune to the thermal neutron absorber effects frequently found in shales.14
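To make the irreducible-water answer concrete, the sketch below derives Swirr as the bound-fluid fraction of an externally supplied total porosity, which is the standard definition implied here; the function name and numbers are illustrative.

```python
# Hedged sketch: irreducible water saturation from a bound-fluid log and
# a total porosity supplied by the density or density-neutron measurement.
# Inputs in porosity units (p.u.); all values are illustrative.

def irreducible_water_saturation(total_phi_pu, bfv_pu):
    """Swirr is the bound-fluid volume expressed as a fraction of porosity."""
    if total_phi_pu <= 0:
        return float("nan")
    return min(bfv_pu / total_phi_pu, 1.0)

# Example: 28 p.u. total porosity with 7 p.u. bound fluid gives Swirr = 0.25.
print(irreducible_water_saturation(28.0, 7.0))
```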

Wellsite PLATFORM EXPRESS display. This example shows long T2 components in the low-resistivity pay sand, Zone C. This appears to the CMR tool to contain mostly free water in large pores, but could contain isolated hydrocarbons with long T2. The upper two zones have high resistivity and potentially contain hydrocarbons. The total CMR porosity matches the density porosity except in the lower sections of Zones A and C, due to incomplete polarization (with a 1.3-sec wait time) of the hydrocarbons.


Solving the mystery with ELAN interpretation. High permeability derived from the CMR tool, track 2, with water saturation, track 3, and irreducible water analysis, track 4, shows that the low-resistivity pay sand in Zone C will produce only water, probably from the flood drive. The upper sand, Zone A, contains oil with only a little free water. Zone C tested 100% salt water, whereas the upper zone is producing 100 BOPD with a low water cut. Note the excellent agreement between the CMR permeability and core-derived permeability (black circles) in track 2.


Bound-fluid logging with water-free production in a long oil-water transition zone. CMR readings indicate water-free production above XX690 ft, where the water volume, Sw, computed from resistivity logs matches the bound-fluid volume (BFV) from the CMR tool. Even though bound-fluid logging speeds are fast, long T2 components are seen in the CMR T2 distributions.


Bound-fluid logging can also be used to detect heavy oil, since many high-viscosity oils have T2 components below the 33-msec or 100-msec cutoffs and above the 0.3-msec detection limit of the CMR-200 tool. These oils will be properly measured in a bound-fluid log.
Our first example of bound-fluid logging comes from the North Sea, where long transition zones and zones of low-resistivity pay often lead to uncertainty about whether the mobile fluid is oil, water or both (previous page, bottom). One method of predicting mobile water is to compare the bound-fluid volume, BFV, seen by the CMR tool with Sw, the volume of water computed from the resistivity measurement. If the total water from the Sw calculation exceeds the BFV measurement, then the excess water will be free fluid and therefore producible.15
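This screen is straightforward to express numerically. A minimal sketch, assuming the resistivity-derived water is expressed as a volume in porosity units (saturation multiplied by total porosity); all values are illustrative.

```python
# Hedged sketch of the BFV-versus-Sw screen described above. The
# resistivity-derived water is expressed here as a volume in p.u.
# (saturation times total porosity); the numbers are illustrative.

def free_water_pu(water_volume_pu, bfv_pu):
    """Water in excess of the bound-fluid volume is potentially producible."""
    return max(water_volume_pu - bfv_pu, 0.0)

# Example: Sw = 0.55 of a 30-p.u. rock gives 16.5 p.u. of water;
# with BFV = 16.5 p.u. the zone should produce water-free.
print(free_water_pu(0.55 * 30.0, 16.5))   # 0.0 -> no free water expected
print(free_water_pu(0.80 * 30.0, 16.5))   # 7.5 p.u. of potentially free water
```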
In this example, the CMR bound-fluid tool was run with the standard triple combo (neutron and density for total porosity, and an AIT Array Induction Imager Tool for saturation) at 1500 ft/hr [460 m/hr]. The fast-logging CMR tool was run with a short wait time of 0.4 sec and 600 echoes. This well was drilled with an oil-base mud, which created a large signal at long T2 times due to invasion of oil filtrate. To fully capture these long T2 components for a total CMR porosity measurement, wait times of 6 sec would have been required, at a correspondingly slower logging speed of 145 ft/hr [44 m/hr].
An interpretation based on density, neutron and resistivity logs yields the lithology and fluid content. The T2 distributions show significant oil-filtrate signal at late times despite the short wait time. The lithology analysis shows little or no clay, consistent with the lack of T2 data below 3 msec. In this well, a 100-msec cutoff was used to differentiate the bound-fluid volume from the free fluids; this is verified by excellent agreement with core permeability using the default Timur-Coates equation. Ideally, the T2 cutoff should be confirmed independently, for example by careful sampling with the MDT Modular Formation Dynamics Tester tool around the oil-water transition depths.
In another example, a real-time wellsite quicklook display was developed for combining CMR bound-fluid logging with the PLATFORM EXPRESS acquisition at 1800 ft/hr [550 m/hr] in the Gulf of Mexico. In offshore wells, with hourly operating expenses that far exceed logging costs, fast logging and immediate answers are critical for timely decision-making at the wellsite.



PLATFORM EXPRESS-CMR wellsite quicklook. The field interpretation results from high-speed CMR logging enable the operator to quickly identify that hydrocarbon pay Zones A and B have high permeability, track 2, and high potential for water-free production, because all the water in the interval, as shown in track 4, is either clay-bound or irreducible. The wellsite interpretation results also compared well with core analysis obtained later.

The field quicklook incorporates the CMR measurements into a Dual-Water saturation computation. The results are total porosity from the neutron-density and water saturation from the AIT induction tools (above). The wellsite permeability computation, track 2, uses the CMR bound-fluid results in a Timur-Coates analysis with two estimates of total porosity: one from the traditional neutron-density crossplot porosity, and the other from a density-derived porosity.16
The first porosity estimate is more accurate in the shaly sections, but requires free- and bound-water resistivity and wet-clay porosity parameters and is limited by the vertical resolution of the neutron measurement. The second porosity estimate has the advantages of not requiring any parameter picks for the shaly intervals and of having the higher vertical resolution inherent in the density measurement. The field-derived results show both sand Zones, A and B, with good production potential and low water cut. These logging results agree well with the sidewall core permeabilities and porosities measured later, shown as circles in tracks 2 and 4.
Repeatability of Fast BFV Logging

This example shows that accurate and repeatable bound-fluid volumes and permeabilities can be measured at high logging speeds (above). Three runs were made through a series of sand-shale zones, where fresh water makes producibility evaluation difficult and high-precision bound-fluid determination is important. The three CMR passes were made with a 0.2-sec wait time and 200 echoes at 3400 ft/hr [1040 m/hr]; a 0.3-sec wait time and 600 echoes at 1800 ft/hr [549 m/hr]; and finally a 2.6-sec wait time and 1200 echoes at 300 ft/hr [92 m/hr]. The data were processed with total CMR porosity to give T2 distributions from 0.3 msec to 3 sec, shown in tracks 3, 4 and 5 for each run.
The three bound-fluid BFV logs, computed with the default 33-msec T2 cutoff and five-level stacking, are shown in track 1. They agree well with each other, having a root-mean-square error on the bound-fluid volumes of about 1.2 p.u., which is comparable to the typical statistical error found in other non-NMR porosity logs. The BFV curves correlate well with the gamma ray except in Zones A and B, with the shortest T2 values, where the relaxation may be too rapid to be seen by the CMR tool.
Permeability estimates, in track 6, were based on the standard Timur-Coates equation. CMR bound-fluid logs and a robust total porosity estimator, based on the minimum of the density and neutron-density porosities, were used to compute the free-fluid and bound-fluid volumes used in this permeability equation.17 At these logging speeds, all three permeability estimates overlie one another, typically within a factor of 2.
14. Scott HD, Thornton JL, Olesen J-R, Hertzog RC, McKeon DC, DasGupta T and Albertin IJ: "Response of a Multidetector Pulsed Neutron Porosity Tool," Transactions of the SPWLA 35th Annual Logging Symposium, Tulsa, Oklahoma, USA, June 19-22, 1994, paper J.
15. The BFV log measures the capillary-bound and nonmobile water.
16. LaVigne J, Herron M and Hertzog R: "Density-Neutron Interpretation in Shaly Sands," Transactions of the SPWLA 35th Annual Logging Symposium, Tulsa, Oklahoma, USA, June 19-22, 1994, paper EEE.



Repeatability of bound-fluid logging. Logs and T2 distributions from three runs at 300, 1800 and 3400 ft/hr in the same well show how well the bound-fluid volumes, track 1, agree even at fast logging speeds. The permeability results, track 6, from CMR bound-fluid logging overlie even at the highest logging speeds. The T2 distributions are similar, though the peaks in the sands change somewhat because there are not enough echoes in the fast-logging measurement to completely characterize the slower relaxations. The longest T2 components disappear in the fast-logging mode.
Integrated Answers



Bound-fluid logging is one example of how CMR measurements combine with other logging measurements to provide a simpler, more accurate picture of the formation in a faster, more efficient manner.
can be combined with the MDT tool to give
better producibility answers. The in-situ,
dynamic MDT measurements complement
the CMR continuous permeability log, and
help verify the presence of producible
hydrocarbons. Wellsite efficiency is substantially increased when MDT sampling depths
are guided by CMR results.
Reservoirs often exhibit large vertical heterogeneity due to the variability of sedimentary deposition, such as in laminated
formations. In vertical wells, formation
properties can change over distances small
compared to the intrinsic vertical resolution
of logging tools. In horizontal wells, the
drainhole can be located near a bed
boundary with different formation properties above and below the logging tools. It is
important that all logging sensors face the
same formation volume.
Two logging measurements that investigate
the same rock and fluid volume in a formation are said to be coherent. Conventional
log interpretation methods can be limited by
a lack of volumetric coherence between
logs. For example, when subtracting the
bound-fluid porosity response of an NMR
tool from the total porosity response of a
nuclear tool to compute a free-fluid index,
care must be taken to ensure that the
nuclear tool does not sample a different volume than the NMR tool.18
17. Singer JM, Johnson L, Kleinberg RL and Flaum C: "Fast NMR Logging for Bound-Fluid and Permeability," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper PP.
An interesting example of combining CMR logging with other coherent measurements is the method of determining formation silt volume developed with Agip.19 Silt is an important textural component in clastic rocks because it relates to the dynamic conditions of transportation and deposition of sediments. The quality of a reservoir is determined by the amount of silt present in the rock formation. Fine silt drastically decreases permeability and the ability of a reservoir to produce hydrocarbons. The silt can be of any lithology and, since it is related only to grain size, its volume and deposition properties can be determined with the help of CMR logging in combination with EPT measurements.
Dielectric propagation time and attenuation increase with silt volume. These two logs, coupled with the APT epithermal neutron porosity, provide a porosity interpretation unaffected by the large thermal absorbers typically associated with silts and shales in formations, and are therefore appropriate for determining silt volume in complex lithologies.
Accuracy in determining silt volume, as well as a host of other parameters (permeability, fluid volumes and mobility), depends upon a high coherence between logging measurements.
The CMR measurement's ability to characterize grain size, using the T2 distributions, coupled with other complementary logs, was successfully used to understand a complex reservoir sequence of thinly laminated sands, silts and shales in the presence of water and gas (right). The well was drilled with fresh water-base mud. The EPT dielectric attenuation and propagation time curves, showing high bed resolution in track 1, cross each other, clearly identifying the sequence of silty sands and shales.
18. An example of an incoherent result would be the case in which a nuclear tool is sampling a few inches of formation, and an NMR tool is sampling several feet of formation, or vice versa. Then the net free fluid could be distorted and incorrectly computed by large porosity changes within the sample volume caused by shale laminations in the formation. If the nuclear and NMR tools are sampling the same volume of rock, then the combined results will always be correct because the different measurements will be volume matched.
19. Gossenberg P, Galli G, Andreani M and Klopf W: "A New Petrophysical Interpretation Model for Clastic Rocks Based on NMR, Epithermal Neutron and Electromagnetic Logs," Transactions of the SPWLA 37th Annual Logging Symposium, New Orleans, Louisiana, USA, June 16-19, 1996, paper M.
Gossenberg P, Casu PA, Andreani M and Klopf W: "A Complete Fluid Analysis Method Using Nuclear Magnetic Resonance in Wells Drilled with Oil Based Mud," Transactions of the Offshore Mediterranean Conference, Milan, Italy, March 19-21, 1997, paper 993.


Avoiding water production in thinly layered gas sands with CMR data combined with other coherent logging measurements. There are nearly 20 gas pay sands showing in this interval, all with similar log profile characteristics: separation of the EPT dielectric propagation time, TPL, and attenuation, EATT, logs in track 1, and the free-fluid volumes from CMR and epithermal neutron porosity in track 4. The interpretation results identify three zones (A, C and the lower part of Zone F) that contain free water in the reservoirs. The CMR tool responds well to thin beds (Zones B, D and G).


Gas-Corrected Porosity from Density-Porosity and CMR Measurements

In zones containing unflushed gas near the borehole, the total CMR porosity log, TCMR, underestimates total porosity because of two effects: the low hydrogen concentration in the gas, and insufficient polarization of the gas due to its long T1 relaxation time. On the other hand, in the presence of gas, the density-derived porosity log, DPHI, which is usually based on the fluid being water, overestimates the total porosity because the low density of the gas reduces the measured formation bulk density. Thus, gas zones with unflushed gas near the wellbore can be identified by the deficit, or separation, between the DPHI and TCMR logs: the NMR gas signature. The method of identifying gas from the DPHI/TCMR deficit does not require that the gas phase be polarized.1
The advantages of this method of detecting gas include:
• faster logging in many environments, since the gas does not have to be polarized2
• more robust gas evaluation, since the deficit in porosity is often much greater than a direct gas signal
• total porosity corrected for gas effects.
As an aid to interpreting gas-sand formations, the gas-corrected porosity, $\phi_{gas\text{-}corr}$, and gas bulk-volume, $V_{gas}$, equations are derived from a petrophysical model for the formation bulk density and total CMR porosity responses.
The mixing law for the density log response is:

$$\rho_b = \rho_{ma}(1-\phi) + \rho_f\,\phi(1-S_g) + \rho_g\,\phi S_g$$

and for the total CMR porosity response:

$$TCMR = \phi S_g (HI)_g P_g + \phi(1-S_g)(HI)_f.$$

In these equations, $\rho_b$ is the log-measured bulk formation density, $\rho_{ma}$ is the matrix density of the formation, $\rho_f$ is the density of the liquid phase in the flushed zone at reservoir conditions, $\rho_g$ is the density of gas at reservoir conditions, and $\phi$ is the total formation porosity. $(HI)_g$ is the hydrogen index (the volume fraction of fresh water that would contain the same amount of hydrogen) of gas at reservoir conditions, and $(HI)_f$ is the hydrogen index of fluid in the flushed zone at reservoir conditions. $S_g$ is the flushed-zone gas saturation. The polarization factor

$$P_g = 1 - \exp(-W/T_{1,g})$$

accounts for the polarization of the gas, where $W$ is the wait time of the CMR tool pulse sequence, and $T_{1,g}$ is the longitudinal relaxation time of the gas at reservoir conditions.
To simplify the algebra, a new parameter,

$$\lambda = \frac{\rho_f - \rho_g}{\rho_{ma} - \rho_f},$$

is introduced, and the formation bulk density is eliminated by introducing the density-derived porosity,

$$DPHI = \frac{\rho_{ma} - \rho_b}{\rho_{ma} - \rho_f}.$$

Solving these equations for the gas-corrected porosity, one gets:

$$\phi_{gas\text{-}corr} = \frac{DPHI\left(1 - \dfrac{(HI)_g P_g}{(HI)_f}\right) + \lambda\,\dfrac{TCMR}{(HI)_f}}{1 + \lambda - \dfrac{(HI)_g P_g}{(HI)_f}},$$

and for the volume of gas, one gets:

$$V_{gas} = \frac{DPHI - \dfrac{TCMR}{(HI)_f}}{1 + \lambda - \dfrac{(HI)_g P_g}{(HI)_f}}.$$

The gas saturation can be obtained from these equations by simply dividing the latter by the former. Note that the gas-corrected porosity, $\phi_{gas\text{-}corr}$, is the NMR analog of the neutron/density log crossplot porosity, and is always less than DPHI and greater than $TCMR/(HI)_f$.
Bob Freedman
1. To be able to attribute this deficit to gas, the wait time for the logging sequence must be sufficiently long to polarize all the liquids, including formation water and mud filtrate.
2. Oil-base muds are the exception and require long wait times due to the long T1 relaxation time of oil filtrate.

Permeability curves were computed from the CMR bound-fluid log using both the Timur-Coates equation and the Kenyon equation with the logarithmic mean of T2.20 Both give reasonable agreement with permeability results derived from core and the RFT Repeat Formation Tester tool.
The dry sand, silt and clay volume interpretation model, shown in track 3, includes clay-bound water, capillary-bound water (or irreducible water), and movable water (next page, top). By subtracting the irreducible water, measured directly by the CMR bound-fluid log, from the total volume of water, Sw, computed from Rt after clay corrections, all free-water zones within the reservoirs are clearly determined. For example, Zone F cannot be perforated without risk of large water production. Apart from Zones A and F, most of the other reservoirs show dry gas. The epithermal neutron and density porosity logs are shown along with NMR logs in track 4. There are intervals with clear examples of gas crossover between the density and neutron porosity curves. Also, the separation between the APT neutron porosity and CMR porosity provides a good estimate of clay-water volume.
The logs show excellent correlation between the EPT and CMR curves. The sequence of fining-upward grain size in the sands through Zone F is displayed on the logs as an increasing silt index. This is confirmed by increasing attenuation and propagation times on the EPT logs, because of increasing conductivity in the silt, and by an increase in bound-water porosity in the CMR logs, implying a decrease of movable fluid. The profile on other pay reservoirs shows similar characteristics for both tools, especially in the EPT curve separation and the free-fluid volumes from the CMR measurements.
The CMR measurements complement other coherent logging measurements, such as dielectric, Litho-Density, microresistivity, neutron capture cross sections and epithermal neutron porosity, in providing a critical evaluation and complete interpretation in these complex formation environments. The Litho-Density tool is needed for lithology, to determine the T2 cutoff for CMR bound-fluid logging, and the salinity-independent CMR water volume is used to correct the dielectric and resistivity logs used for fluid saturation.
20. Kenyon WE, Day PI, Straley C and Willemsen JF: "A Three-Part Study of NMR Longitudinal Relaxation Properties of Water-Saturated Sandstones," SPE Formation Evaluation 3 (1988): 622-636.
21. Freedman et al, 1997, reference 10.
22. Bass C and Lappin-Scott H: "The Bad Guys and the Good Guys in Petroleum Microbiology," Oilfield Review 9, no. 1 (Spring 1997): 17-25.

Silt volume interpretation using EPT, APT and CMR logs. The data in the crossplots come from a mixture of sands, mica and feldspar. EPT attenuation, EATT, and propagation time, TPL, are crossplotted (left) along with the difference between APT epithermal neutron porosity and CMR 3-msec effective porosity, shown as a Z-axis discriminator. There are two clusters of points. One is for the silty sand where gas is present (lower cluster); there is a departure from the sandstone line when silt increases. The second cluster is in several nonpay reservoirs where clay minerals, associated with fine silt, form the shales. The APT-CMR effective porosity difference increases in proportion to the clay content due to the clay-bound water. Since the gas content is properly controlled on the EPT crossplot, the dielectric data are used to apply accurate gas corrections to the CMR porosity logs. On the CMR stand-alone crossplot (right), the silt index increases with the bound-fluid values. The scatter in the CMR data is a gas effect induced by low polarization and variation in hydrogen index. The APT-CMR difference porosity is used again for Z-axis discrimination, and the two clusters are identified. Fine silt in the shales follows the 45° clay trend line downward to the clay endpoint at the origin. Except for the gas scatter on CMR porosity, the crossplot confirms the high coherence existing between the three logs.

Tar zones detected by the CMR tool. In water, gas or oil, the CMR tool has a clear tar signature, as seen in Zone C: a suppression of the long T2 components (track 5) and a reduced total porosity (track 4). In this well the CMR tool is able to confirm, by the presence of large T2 contributions from oil and no reduction in total CMR porosity, that the lower oil zone (Zone E) is not tar, but mobile oil, which could be trapped by a local stratigraphic closure.

Identification of Oil Viscosity and Tar Deposits

A trend to produce heavy oil from shallow reservoirs is creating the need to detect high-viscosity oil and tar deposits. Frequently, due to the shallow depths and the complex filling history of the structures, reservoirs were charged, then later breached and recharged. This results in significant changes in the properties of the hydrocarbons throughout the reservoir sections. One way to distinguish and identify these oils is to measure their viscosity in situ using bulk relaxation times from the CMR data. The bulk relaxation rates of different oils can be measured with reasonable accuracy, between 2 and 10,000 cp, using the CMR tool.21
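Numerically, such a viscosity estimate is a one-line calibration. Published correlations relate bulk relaxation time and viscosity roughly inversely; the sketch below assumes a generic form, T2,bulk = a/viscosity, with the constant a treated as a field calibration (the value used is purely illustrative and is not taken from this article).

```python
# Hedged sketch: in-situ viscosity from the bulk-relaxation part of the
# T2 distribution, assuming a generic inverse calibration
# T2_bulk = a / viscosity. The constant a is a field calibration; the
# value of 1 sec*cp below is an assumption for illustration only.

def viscosity_from_bulk_t2(t2_bulk_sec, a_sec_cp=1.0):
    """Estimate viscosity in cp from the oil's bulk T2 in seconds."""
    return a_sec_cp / t2_bulk_sec

# Example: a 10-msec bulk decay maps to ~100 cp under this calibration.
print(viscosity_from_bulk_t2(0.010))
```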
Field examples show that CMR log-derived viscosities match drillstem test-derived values. It is important to determine which part of the T2 signal distribution is from the oil bulk relaxation decay. In clean, coarse sands containing high-viscosity oil, this is a straightforward process, because the oil, having a fast decay, is isolated from the water-wet rock, which has a slower decay, and surface relaxation has little effect on the fast bulk oil decay. For low-viscosity oils in fine-grained rocks, the bulk oil relaxation rates are much slower and can easily be separated from the water-saturated rock signal.
In some settings, over geological time, oils have come into contact with water-borne bacteria that produce impermeable seals, tar deposits, at oil-water contacts.22 These reservoirs may receive a further oil charge, which encases a tarry zone inside the hydrocarbon zone. In such a position, the tar zone could have a dramatic effect on the management of the field during production.
The tar, surrounded by other hydrocarbons, appears similar to the surrounding fluids when logged with non-NMR techniques. Even when cored intervals are available, the tar may be overlooked if the core samples are cleaned, as they traditionally are, before analysis. However, the CMR signature in a tar zone is quite clear and distinct from that of more mobile hydrocarbons or water. It has been suggested that long hydrocarbon chains within the tar cause it to behave almost like a solid, dramatically shortening the T2 response. Tar is detected by the reduction in NMR porosity.
An example from the North Sea shows the clear tar detection signature: missing porosity and short T2 decays (left). The well was drilled with an oil-base mud through several oil and water zones identified in track 2.


When the tar is in a conductive water zone, as it is in this example, it can be easily identified by resistivity measurements. However, tar is frequently located in hydrocarbon zones, where there is no resistivity contrast between tar and nontar hydrocarbon intervals. In such cases, NMR is the only routine logging measurement that can identify the tar.
Like tar, heavy oil has a short T2 relaxation decay time, so detecting heavy oil is similar to detecting tar, except that the heavy-oil relaxation times are sufficiently long to be captured by the total porosity response of the CMR-200 tool. Essentially, the heavy-oil T2 contribution shifts the signal below the 3-msec T2 cutoff, but not below the 0.3-msec detection limit of the CMR-200 tool (right).
This example is from a heavy-oil reservoir in California. The low-gravity (12 to 16° API) oil is produced by a steam-soak injection method. The logs show that all porosity logs (total porosity, 3-msec effective porosity, density and neutron porosity) agree in the lower wet Zone B. The CMR porosities agree in the wet zone because the formation does not have T2 values below 3 msec associated with clay-bound porosity or microporosity. In the upper oil-bearing Zone A, total CMR porosity reads about 5 p.u. higher than the 3-msec porosity over most of this interval, because the heavy oil has considerable fast-relaxing components below the 3-msec T2 sensitivity limit of the early 3-msec porosity measurement. The different responses of the two CMR porosity measurements to the heavy oil provide a clear delineation of the oil zone and show the oil-water contact between Zones A and B.
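Because the heavy-oil signal lies between the 0.3-msec detection limit and the 3-msec cutoff, the separation between the two CMR porosity curves acts as a simple heavy-oil indicator. A minimal sketch, with illustrative curve values and threshold:

```python
# Hedged sketch: delineate heavy oil from the two CMR porosity curves.
# The TCMR minus 3-msec porosity separation approximates the volume of
# fast-relaxing (heavy-oil) signal. Arrays and threshold are illustrative.

def heavy_oil_flags(tcmr_pu, cmrp3_pu, min_separation_pu=2.0):
    """Return (separation, flag) per depth level, porosities in p.u."""
    results = []
    for total, effective in zip(tcmr_pu, cmrp3_pu):
        separation = total - effective      # signal relaxing faster than 3 msec
        results.append((separation, separation >= min_separation_pu))
    return results

# Example: the oil zone shows ~5 p.u. of separation, the wet zone almost none.
print(heavy_oil_flags([33.0, 31.5, 30.2], [28.0, 30.9, 30.0]))
```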
Logging Saves Dollars

Combining geochemical logs with CMR-derived permeability and free-fluid data, a major oil company was able to select only productive zones for stimulation, saving $150,000 in a South Texas well.23 The objective was to complete, including fracture stimulation, three high-resistivity potential pay zones at the bottom of the well. Total porosity from CMR is approximately 10 to 15 p.u. in all zones, where free fluid varies from 1 to 10 p.u. However, spectral data from the ECS Elemental Capture Spectrometer sonde indicate significant differences in clay and carbonate cement volumes between zones (next page).24


How to see a heavy-oil water contact with total CMR porosity. The oil-bearing Zone A and the oil-water contact between Zones A and B are clearly delineated by the separation between the total CMR porosity log, TCMR, and the 3-msec porosity log, CMRP, shown in track 3. The oil-water contact is confirmed by the resistivity logs in track 2. The large heavy-oil contribution to the total porosity can be seen as short relaxation decay components above the oil-water contact in the T2 distribution in track 4.

The two Zones, A and C, exhibit lower clay and higher carbonate (calcite cement) content than Zone B. Resistivity measurements and CMR-derived bound fluid and permeability correlate with the lithology changes. Increased clay volumes in Zone B represent shales that are clay-rich, suggesting a lower depositional rate.
These important lithological differences have a significant influence on the electrical, mechanical and fluid-flow properties of the reservoir intervals. Bound and free fluids, measured by the CMR tool, vary widely for any given porosity, and have a large impact on the permeability.
Based on the logging results, Zone C was perforated and fracture stimulated, and is currently producing gas at a rate of 6 MMcf/D with a very low water cut of 10 BWPD. The pessimistic estimates of free-fluid volumes and permeability from the CMR tool, confirmed by the increased clay content shown by the geochemical logs, convinced the client to cancel the expensive fracture stimulation and abandon the large middle zone, Zone B. The upper zone, Zone A, is currently being perforated, and a fracture stimulation is planned based on the CMR and ELAN interpretation results, with an expected gas production rate in excess of 10 MMcf/D.
23. Horkowitz JP and Cannon DE: "Complex Reservoir Evaluation in Open and Cased Wells," Transactions of the SPWLA 38th Annual Logging Symposium, Houston, Texas, USA, June 15-18, 1997, paper W.
24. Herron SL and Herron MM: "Quantitative Lithology: An Application for Open- and Cased-Hole Spectroscopy," Transactions of the SPWLA 37th Annual Logging Symposium, New Orleans, Louisiana, USA, June 16-19, 1996, paper E.


CMR combined with geochemical logs from the ECS tool. Decreased free fluid (track 1) and permeabilities from CMR logs (track 2) and hydrocarbons from an ELAN interpretation (track 4) show the middle interval (Zone B) was not worth the expensive fracture job planned by the operator. The low permeability is due to clay and carbonates seen in this interval (track 4). Two other Zones (A and C) in the same well were identified with lower clay, as seen by the geochemical logs, and CMR permeabilities are ten times better in these intervals. Zone C was completed and is producing gas at a rate of 6 MMcf/D, and Zone A is expected to produce in excess of 10 MMcf/D.



The Outlook for the Future

This is an exciting time for petrophysicists, reservoir engineers and geologists who use well logging measurements. Recent improvements in NMR logging tool technology have dramatically increased the range of relaxation rates that can be measured in the borehole. Today, we can finally use these tools to differentiate bound and free fluids in the formation, determine permeability, and log faster for bound fluids. We have seen many examples of how to use these measurements to improve our understanding of complex shaly sand formations. The CMR and MRIL tools are commercial and their use is already routine in many parts of the world. However, there is much more to learn.

More work needs to be done to understand how to use T2 distributions effectively in carbonate formations, with their extremely wide variations in porosity scale and structure. Our interpretation of NMR responses in shaly sands reveals bound and free fluids. In the future, we must extend the interpretation so that the measured relaxation decay times and the T2 distributions in carbonates also reflect the mobility and dynamics of reservoir fluids in these far more complex pore structures.

RCH

