2D Offshore Seismic Data Processing 2013

CHAPTER 1
INTRODUCTION TO MARINE SEISMICS

In seismic surveying, sound waves are mechanically generated and sent into the earth (figure 1). Some of this energy is reflected back to recording sensors. Sensors are measuring devices that accurately record the strength of this energy and the time the sound waves have taken to travel through the various layers of the earth's crust and back to the location of the sensors. These recordings are then transformed, using specialised seismic data processing, into a visual image of the subsurface of the earth in the seismic survey area. Just as doctors use X-rays to "see" into the human body indirectly, geoscientists use seismic surveying to obtain a picture of the structure and nature of the rock layers indirectly.

The seismic method plays an important role in the search for hydrocarbons. Seismic exploration
consists of three stages:

• Data acquisition
• Data Processing
• Data Interpretation

The seismic method has three principal applications:

• Delineation of near-surface geology for engineering purposes and mineral exploration.


• Hydrocarbon exploration.
• Investigation of earth’s crustal structure.

Seismic surveys are usually done by either refraction or reflection methods. The reflection seismic
method has been used to delineate near-surface geology for the purpose of hydrocarbon and
mineral exploration and engineering studies.

The seismic method utilizes the propagation of waves through the interior of earth in order to
map subsurface interfaces of geological interest. The general scheme of seismic prospecting is to
generate seismic waves using a source near the earth’s surface. The operative physical properties
in seismic prospecting are density and elastic moduli, which determine the propagation velocity of
the seismic waves.

The seismic reflection method is based on the study of arrival times of waves which travel from
surface down to different layers and are then reflected back to surface. These waves are recorded
at the surface by the help of sensors called geophones or hydrophones. From the time of arrival of
the reflected wave, its pulse shape, reflection pattern and velocity in the earth medium, we can
get information about the structure, stratigraphy and the depth of the target horizon.

Types of marine survey


All seismic surveys involve a source and some configuration of receivers or sensors. Surveys may be
differentiated on the basis of:

• The geometry of the receiver system;


• The density of measurements made over a given area;
• The type of sensor used


Figure 1.1 illustrates the different receiver geometries in marine surveying, while figure 1.2 provides a list of the different types of surveys. Towed streamer operation represents the most significant commercial activity, followed by ocean bottom seismic (including arrays placed on the seafloor and arrays buried a metre or so below the seafloor). Shallow water/transition zone seismic is a complex seismic operation as it is undertaken in shallow water areas such as tidal zones, river estuaries, marshes and swamplands. Vertical seismic profiling is an additional category of seismic survey in which the receivers are placed in one or more well holes and a source is hung off the well platform or deployed using a source vessel.

Seismic surveys may also be differentiated by the density of the measurements made over a given area; 3D surveys have a much higher density of measurements than 2D surveys. There are also surveys that are acquired repeatedly over the same area, with the interval between surveys on the order of months or years. These are known as 4D or time-lapse surveys; because multiple data points are recorded over the same locations through time, the data density over the area is correspondingly higher. In general, 4D data density per unit area is higher than 3D, which in turn is higher than 2D.

Finally, surveys can be differentiated by the type of sensor that is used. In most marine work, the sensor is a hydrophone that detects the pressure fluctuations in the water caused by the reflected sound waves. The cable containing the hydrophones, called a streamer, is towed or 'streamed' behind a moving vessel. These streamers are typically 3 to 8 kilometres long, although they can be up to 12 kilometres long depending on the depth of the geophysical target being investigated. In ocean bottom surveys, the receiver system will typically have a hydrophone and a 3-component geophone at each receiver station, and the data are processed either as 2-component or 4-component data.


CHAPTER 2
PHENOMENA & TERMINOLOGIES IN MARINE SEISMIC
Marine acquisition of seismic reflection data is generally accomplished using large ships with multiple airgun arrays as sources. Air guns are deployed behind the seismic vessel and generate a seismic signal by forcing highly pressurized air into the water. Receivers are towed behind the ship in long streamers that are several kilometres in length. Marine receivers are composed of piezoelectric hydrophones, which respond to changes in water pressure. Because of sensitivity and noise issues, the responses from a group of 5 to 50 hydrophones are summed to produce a single seismogram, and the group is considered a single receiver. There are hundreds of such groups in one streamer. The set of seismograms produced by all receivers for one shot is called a shot record or common shot gather.

What are Offshore Surveys?


Seismic information is acquired by field crews. They may be almost anywhere on earth: on cultivated land, in a desert, at sea, in a forest, on a mountain, in a city. The climate may be tropical, temperate, or arctic. About the only limitation is geological: there should be a sedimentary basin, or a good chance of finding one.

Depending upon the area of the survey, there are two types of surveys:

• Survey on Land – Onshore Surveys


• Survey in Oceans- Offshore Surveys

Shallow Offshore ---- < 200m water Depth

Deep Offshore --- 200m-700m

Ultra Deep Offshore --- > 700m

'Offshore' and 'marine' are interchangeable terms.

Seismic Vessel
For seismic surveys, a specially equipped vessel is used, operating as a complete, self-contained geophysical laboratory, carrying all equipment and supplies necessary for round-the-clock operations.


Fig 2.1 seismic vessel

Instrument room
The vessel is also equipped with all necessary communication equipment forefficient operations anywhere
in the world. The instrument room is the heartof the seismic vessel. Here is where all the instruments for
recording the seismicdata, control of the seismic source and the hydrophone streamer and the
mainnavigation equipment are located. Links between these units are essential foroperationof all seismic
surveys, and having all personnel in one locationsimplifies the communication betweenthe operators of the
various instruments.Today’s seismic vessel is also equipped with a largecomputer centre forprocessing of
the recorded data.

Back deck
The back deck is another important part of the seismic vessel. A large openspace is required for handling of
the airguns and the hydrophone streamers. Thishandling included deployment and recovering the
equipment, as well asnecessary maintenance and repair. The back deck arrangement will vary fromvessel
to vessel, but in principle they are all very similar. The airgun arrays are configured as long strings, and
when not in the water they are hanging formbeams at the back deck roof. The streamers are stored on
large reels, with additional reels available for spare streamer parts or even whole streamers. In addition to
the storage facilities, the back deck contains special equipment for deployment and recovery of both airgun
arrays and streamers. The safety aspect is always given careful considerations, in order to have safe
operations even in adverse weather conditions.

Compressor room
The large compressors needed to supply the airgun arrays are often located close the engine rooms,for
ease of operation by the onboard mechanics. Onboard machinery is of critical importance to a seismic
vessel. Not only is large power needed in order to tow all the in-water equipment; but also noise generated
by the engines and propellers also need to be as low as possible. With purpose built seismic vessels great
care is taken into the overall design of vessel hull and machinery to ensure quiet operations.


Generation of seismic energy under water

Marine sources
There are several types of marine seismic sources, as given in figure 3 below, but the present-day marine sources discussed here will be confined to air guns and water guns; the rest are no longer popular, though some are still used to fill special needs and some are presently under development.

Desirable characteristics for the sources used in all seismic surveys are that the source should provide a maximum output S/N ratio, high output energy, good resolution and minimum disturbance to the environment. There are many reasons for the great decrease in the use of explosives. Explosives were seldom fired from the ship that contained the recording equipment and towed the receiving cable, because any misjudgement of the location of the charge under the water surface could result in the loss of a cable costing more than $100,000, or even the destruction of the recording ship. Another reason for abandoning explosives is the danger of destroying fish with dynamite. A source should also have low capital and maintenance cost and convenience of resupply, and should be reliable. Preferentially we use the air gun as the marine seismic source for deeper investigations because with it we get low-frequency waves. In offshore seismics the source is used to introduce a sudden positive (or sometimes negative) pressure impulse into the water. This impulse involves a compression of the water particles, creating a shock wave that spreads out spherically into the water and then into the earth. A delayed effect of the shock wave is an oscillatory flow of water in the area around the explosion, which gives rise to subsequent pressure pulses designated as bubble oscillations.

Air gun
It is a simple mechanical device that stores compressed air and releases it through small ports when a firing command is given. The ports are opened by either an external movable piece called a sleeve or an internal movable piece called a shuttle. When an airgun fires, the energy contained in the escaping compressed air is converted to sound, thereby generating a seismic signal that travels into the earth. Air reservoirs range from 30 to 800 cubic inches.

The air gun was also used for research on sound transmission in the ocean. By the 1980s it was manufactured in a number of models with capacities ranging from 1 to 2,000 in³ or more of air, typically operating at a pressure of about 2,000 lb/in².


Air gun array


Air gun arrays are used to suppress the bubble effect. An array consists of 2-6 sub-arrays, each being a linear alignment of 4-8 individual guns. The collection of air guns is fired simultaneously (usually 10-20 or more within 1 ms).

Fig 2.4 array

Formation and properties of the gas bubble in water shooting

Bubble effect
In offshore work, the most distinctively water-type problem is the formation of bubbles. An explosion is just a very sudden expansion of something: usually a solid or liquid becomes gas, or a gas is released from confinement. In either of these cases there is, suddenly, a rapidly expanding gas.

When the explosion takes place under water, the bubble pushes water ahead of it as it expands. The water immediately ahead of the gas moves at a high speed. After some short period, the expansion is no longer powerful enough to move the water at that speed, but fast-moving water cannot stop immediately. Its momentum causes it to continue to move outward from the site of the explosion. When the momentum is used up, the bubble of gas is overextended, so the weight and pressure of the water cause it to contract. The leading water moves inward so rapidly that its momentum causes it to go so far inward that it recompresses the gas. The compressed bubble then expands again, and so on, in diminishing expansions and contractions.


The second expansion is sudden, like the original explosion although not as strong. To the seismic instruments and on the recording it appears as though a second shot, not quite as strong, was fired. But of course it is not a convenient time for another shot: reflections from the first one are still being recorded. So the two sets of reflections, from the first and second expansions, are all mixed together, and the third and later expansions and their reflections are also mixed in. In the various shooting methods used, there are several techniques to reduce the effect of the later expansions relative to the first one, both in the special design of the source and in processing.

From figure 2.2 we can see that as the shock front progresses outward, its pressure and particle velocity continue to decrease. Figure 2.2 shows both quantities as functions of distance from the point of detonation at a time of 630 µs. At this time the gas bubble is 2 ft in diameter, and the shock front is a spherical surface having a radius of 6 ft. The water pressure at the outer edge of the front is now 16,000 lb/in². Just inside the shock front the pressure decreases in the direction of the source, reaching a minimum at a distance of 2 ft from the detonation point.

The particle velocity in water consists of two components:

• The outward-directed compressive flow of water required to fill the rarefaction left behind the shock front, which transports water under compression away from the source, and
• The afterflow, which supplies water to accommodate the tangential expansion that occurs as the shock front travels. The afterflow represents a production of kinetic energy which is converted into a pressure wave when the outward flow of water is reversed.

Because of momentum, the gas bubble continues to expand until 200 ms after the shot, at which time its radius is about 10 ft. The pressure inside the bubble is now only 2 lb/in², which is 35 lb/in² below the ambient hydrostatic pressure. Here the expansion stops and contraction begins. The rapid shrinking of the bubble causes an increasing inward velocity of the water and a rapidly increasing pressure in the contracting bubble. At 400 ms the bubble has collapsed to its smallest diameter and highest pressure, and expansion starts again.

Figure 2.4 demonstrates this cycle of bubble oscillation. The depth of the bubble stays almost constant while its diameter is large, because the resistance of the water above it inhibits upward motion. When the bubble diameter is smallest, the water resistance is least and the bubble rises at its greatest speed.

The shape of the bubble is not perfectly spherical because, in the case of explosives, the charge itself is not spherical and is not detonated from its centre, while in the case of air guns the air is discharged in preferential directions through the ports. Furthermore, the pressure of the water body is less at the upper parts of the bubble than it is at the bottom.

Receiving Devices/Sensors
The sensors used on the seafloor typically comprise hydrophones and 3-component geophones. A hydrophone measures only pressure, and most do not measure the direction from which a pressure pulse arrives. Geophones measure ground motion and the direction from which P-waves and S-waves arrive. Using three geophone components aligned in orthogonal directions (the vertical direction, one horizontal direction, and the other horizontal direction at right angles to the first), measurements can be made in each of the three dimensions of space.

• HYDROPHONES
• SEISMIC STREAMER
• ASSOCIATED DEVICES


Hydrophone
A hydrophone is an electro-acoustic transducer. It converts a pressure pulse into a voltage by means of a piezoelectric crystal. Acceleration due to tow is cancelled out and only the orthogonal component of compression produces a signal, which helps to provide a better S/N ratio. Hydrophones are summed in series to improve their voltage output. Hydrophones are pressure sensors; the pressure is proportional to the velocity of the water particles, and the generated voltage is proportional to the instantaneous water pressure associated with the seismic signal.

Fig 2.7 Hydrophone

Streamer
Streamers are basically transparent plastic tubes, 2.5 to 3 inches in diameter and filled with oil (kerosene). The hydrophones, wires and transformers are kept inside the tube. There are steel wires inside the tube to take the mechanical strain on the tube. A streamer may be 3,000 m to 12,000 m in length, with 96 to 480 channels. A streamer is made up of a lead-in section, stretch sections, live sections, a tail buoy, birds, etc.

The streamer cable is formed by connecting together subunits called sections. A section is typically 12.5 m to 100 m long with end connector couplings. In each section 15 to 100 hydrophones are connected to form 2 to 8 receiver groups. Steel wire stress members join plastic bulkheads together, and the hydrophones are suspended between the bulkheads in light kerosene oil. Each streamer section is housed in a clear PVC skin. The total streamer length can be as long as 12 km.


Fig 2.8 streamer

For handling the streamer, the seismic vessel has a winch for hauling and lifting, consisting of a rope/streamer winding around a horizontal rotating drum turned by a crank or by a motor.

Fig 2.9 Winch

Associated devices with streamer


• DIGITIZING MODULES
• LEAD-IN CABLE
• STRETCH SECTION
• TAIL BUOY
• DEPTH CONTROLLER BIRD
• RECOVERY DEVICE
• STREAMER NOISE

Lead-in cable

The lead-in connects the streamer to the cable reel. The lead-in section has a solid core. A flexible metal sheath provides protection against damage from vibration and rubbing where the lead-in contacts the ship stern, back deck and cable reel. The lead-in is also capable of withstanding high pressure.


Digitizing Modules

These in-sea electronic digitizing modules are fixed between two or more streamer sections. They are used for analog-to-digital conversion, and extensive use of digital signal processing (DSP) technology allows digital filtering and signal processing functions to be carried out in the module. 24-bit digitization is in use. Each module can take 8-12 channels.

Fig 2.10 Digitizing module

Stretch section

One or more stretch sections (50 to 150 m) are installed behind the lead-in cable. These help to attenuate cable noise caused by the vibration and jerk from the ship. The stretch section has a vinyl jacket and contains kerosene. It uses ropes instead of wire stress members.

Fig 2.11 stretch section


Tail Buoy

The principal purpose of the tail buoy is to provide a reference on the cable position. A flasher is attached to it as an indication of the tail end of the streamer. It is used to ensure that the cable has straightened out, and it may also help indicate cable drift (feathering). Nowadays a GPS receiver is fixed on the tail buoy to ascertain the position of the tail end of the cable. The tail buoy also serves as a marker for vessels passing astern, and aids in retrieving the cable if it is cut.

Fig 2.12 tail buoy

Neutral Buoyancy

Kerosene is lighter than sea water. A streamer filled with enough kerosene to counter its weight can be placed at a particular depth and will remain there. Ideally the cable should be neutrally buoyant, such that a mechanism may guide the streamer to the required depth.

Depth Controller

The streamer is towed at a particular depth, say 7-8 m. The depth is controlled by attaching winged devices called 'birds', controlled by spring tension pre-adjusted to make them operate at a particular depth. At that depth the water force on the wings is balanced by the spring tension. If the cable drops below or rises above the target depth, the angle of the wings changes accordingly to bring the streamer back to the target depth. The birds contain an active control element; the wings are attached to a water-piston and tension-spring mechanism. The desired depth can also be controlled individually by a command transmitter in the ship's instrument room. The birds are powered by rechargeable/alkaline/lithium batteries.


Fig 2.13 Bird

Recovery Device (SRD)

Operation of the device is simple and effective. Recovery devices are installed at regular intervals along the streamer (300 metres maximum recommended). If a streamer is severed or becomes detached from the tow vessel and sinks, the recovery device automatically activates at 30-35 metres of seawater depth and releases compressed CO2 into a flotation bag. After the SRD bag inflates, the buoy floats the streamer to the surface for recovery.

2 D marine surveys
In 2D surveys one hydrophone streamer is towed behind the survey vessel, together with a single source. The reflections from the subsurface are assumed to lie directly below the streamer/source. This produces a vertical slice, or 2D image, of the geology below the source. Typical 2D surveys are designed to cover wide areas and provide a broad understanding of the subsurface geology. Survey duration varies from several days to months. The processing of the data is less sophisticated than that employed for 3D surveys. 2D data can often be distorted by diffractions and events produced from off-line geological structures, making accurate interpretation difficult.


Fig 2.14 figure showing 2D acquisition

3D marine surveys
3D reflection seismology differs from 2D profiling in that data are gathered over a surface and not along a line. The data are processed into a cube, subdivided into bins formed by inlines and crosslines. Modern 3D surveys are carried out using dual sources and multiple streamers (initially 2 to 4). For 3D surveys, the CMP becomes two-dimensional and is termed a bin. Pre-plots are prepared in view of the number of streamers, the sources and the bin size. Sail lines and receiver lines are defined, and the data are acquired by running the source along the pre-plots (sail lines).

Fig 2.15


Objective:

3D surveys are typically used only after the presence of hydrocarbons has been established, and are rarely used in newer areas. Since all geological structures of interest are 3D in nature, the best approach to a true image of the subsurface is to survey in 3D: for better mapping of reservoirs, to know the exact location for drilling, and for better resolution.

Bin :

A bin is a small rectangular or square area of the target. All midpoints falling in this area are assumed to belong to that bin, and all these traces are CMP stacked; their number determines the fold of that bin.

Bin Size

x = V / (4 · f · sin b)

where V = velocity at the target horizon, f = maximum frequency, and b = dip in degrees.

fig 2.16 bin
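As a quick, hypothetical illustration of how this bin-size formula is used (the velocity, maximum frequency and dip below are example values only, not taken from any survey in this report), a minimal sketch in Python is:

    import math

    def max_bin_size(v, f_max, dip_deg):
        # Largest bin dimension that avoids spatial aliasing:
        # x = V / (4 * f * sin(b))
        return v / (4.0 * f_max * math.sin(math.radians(dip_deg)))

    # Hypothetical example values: 2500 m/s at the target horizon, 60 Hz
    # maximum frequency, 30 degrees of dip -> a bin size of about 20.8 m.
    print(round(max_bin_size(2500.0, 60.0, 30.0), 1))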

Ocean bottom cable acquisition


Receivers are placed on the sea floor. Hydrophones are used to measure pressure in the water (P-waves); geophones are used to measure vertical particle motion (P-waves) and horizontal particle motion (S-waves). Acquisition may be performed in 2D or 3D, along with multi-component recording.

There are three typical types of seabed recording system used in marine seismics: Ocean Bottom Seismometers (OBS), 2-component, and 4-component. 2-component and 4-component data are generally recorded using cables laid on the seabed, although there are systems which use Remotely Operated Vehicles (ROVs) to deploy sensor nodes (which may or may not be connected by cables) on the seafloor. A geophone measures the velocity of the particle displacement; an accelerometer, as its name implies, detects the acceleration of the particle displacement; and a hydrophone detects changes in pressure.


Fig 2.17 Ocean bottom cable survey

Feathering angle
If the feather and the shooting direction were the same for each line, the bins would still contain regularly distributed CMPs. But since currents and swell change continuously, the amount of feather varies not only from line to line but also between streamers. Therefore, the shape of the streamers is continuously translated into subsurface coverage for each shot and displayed in real time, so that the steering of the vessel can be altered to maintain the highest possible coverage. The minimum amount of coverage is defined in the survey specifications as a fold percentage for the near, mid, and far traces. The main emphasis is on the near traces, where a fold specification of at least 90% is not uncommon. With increasing offset the feathering effect becomes more severe, and therefore the specifications for the mid- and far-trace coverage are usually much less demanding. Even so, it may still be impossible to fulfil the coverage requirements for the large offsets due to feathering, as shown in figures (a) and (b).


Fig 2.18 Feathering angle

NAVIGATION & POSITIONING


Navigation and positioning are essentially required to navigate the vessel on pre-plotted lines, to be able to fire the guns at the shot locations, to know the shifting of the receivers from their nominal positions (due to ocean currents), and to be able to perform repetitive surveys, i.e. 4D. Determining location: locating the ship's position at sea, where there are no landmarks, depends mainly on radio positioning and satellite observations, with reliance on the global positioning system (GPS). Marine seismic navigation involves two aspects:

a. placing the ship at a desired position;

b. determining the actual location afterwards so that the data can be mapped properly.


Fig. 2.19 showing how to navigate seismic vessel

Integrated Navigation System

An integrated navigation system is the most crucial element of a seismic survey operation. It takes input from a number of navigation and position sensors and communicates with the vessel's navigation system, the source synchronizer and the data recorder. The following sensors are used as input:

• DGPS
• Remote Reading Magnetic compass
• Acoustic pingers
• Echo sounder


CHAPTER 3
SEISMIC SIGNAL AND NOISE
The quality of seismic data varies tremendously from areas where excellent reflections are obtained to
areas in which the most modern equipment, complex field techniques, and sophisticated data processing
do not yield usable data. In between these extremes lie most areas in which useful results are obtained, but
the quantity and quality of data could be improved with beneficial results.

We use the term 'signal' to denote any event on the seismic record from which we wish to obtain information. Everything else is 'noise', including coherent events that interfere with the observation and measurement of signals. The 'signal-to-noise ratio', abbreviated S/N ratio, is the ratio of the signal in a specified portion of the record to the total noise in the same portion. Poor records result whenever the signal-to-noise ratio is small.

CLASSIFICATION OF NOISE:
Coherent Noise: It can be followed across at least a few traces. It includes

• Cable Noise
• Air-blast
• Guided Waves
• Multiples and Reverberations
• Side Scattered Noise
• Swell Noise

Incoherent or Random Noise: It is dissimilar on all traces and we cannot predict what a trace will be like from knowledge of nearby traces. It includes

• Shot generated noise


• Man-made noise
• Wind noise
• Spikes
• Depth Controller Noise

o Water flow turbulence along the streamer.

o Birds are placed away from live hydrophones

• Poor Ballast Noise

o If streamer is poorly ballasted, the birds will tilt their wings to maintain the desired depth.
As a result local turbulence is generated to produce noise.


• Rough Sea Noise

o Sea swells cause up-welling and down-drafting of volumes of sea water. This turbulence raises or drops the live streamer sections.

o Individual live sections are moved relative to adjacent sections.

o High-amplitude noise bursts are observed due to this phenomenon.

• Ship Propeller Noise

o Propeller noise may travel horizontally through the water column and be recorded by the cable.

o Alternatively, if a shallow hard seabed exists, a propeller beat frequency can be set up by the pressure wave from the propeller being reflected back to the surface.

Cable Noise:
Cable noise is another type of coherent noise that manifests itself in the form of low-frequency linear events with very large stepout, as seen on the shot records in Figure 3.1. Note the increase in the energy level of the cable noise as the water depth becomes shallower. A low-cut filter often removes the cable noise from shot records.

Fig 3.1: Shot records from a marine 2-D line with cable noise.

Air-Blast:
Another form of coherent noise is the airwave, which has a velocity of about 300 m/s. It can be a serious problem when shooting with surface charges. Notch muting is the only way of removing it. Power lines also give rise to noisy traces in the form of a monofrequency wave (50 or 60 Hz).


Guided Waves:
Guided waves manifest themselves as dispersive linear noise on both common-shot and CMP gathers, but are attenuated largely by stacking. Guided waves are trapped in a water layer or in a low-velocity near-surface layer and travel in the horizontal direction. They are dispersive, meaning each frequency component propagates with a different phase velocity, and are best described by normal-mode propagation. Since they do not contain any useful reflection energy, guided waves usually are muted on CMP gathers. When one mode splits away from the rest of the guided wave packet and travels at lower speeds, and thus overlaps with reflection events, dip filtering in the f-k domain is needed. The dispersive nature of guided waves can vary along a seismic traverse depending on water depth and water-bottom conditions. The shallower the water depth and the softer the water bottom, the more the dispersion and splitting of modes associated with guided waves.

Fig 3.2 A shot gather containing predominantly guided waves.

Side-Scattered Noise:
Side-scattered energy has a large moveout range depending on the position of the scatterer, acting as a point source at the water bottom, with respect to the position of the recording cable. Side-scattered energy manifests itself with varying moveout on common-shot gathers, is not apparent on CMP gathers, but reappears as linear noise on stacked sections. Side-scattered energy stacks at high velocities along the linear flanks of its traveltime curve. We therefore anticipate that the linear noise seen on a stacked section, particularly at late times, most likely is scattered energy along the flanks of its traveltime curve, stacked together with high-velocity primary energy. Linear noise associated with side scatterers is recognized easily on time slices from a 3-D volume of stacked data.


Fig 3.3 Side-scattered noise

Note in the figure below the circular patterns expanding out from a series of point scatterers at the water bottom. In this case, certain parts of the sea-bottom pipelines act as point scatterers.

Fig 3.4 Circular patterns of side scattered energy.


Swell Noise:
Swell noise manifests itself on shot records in the form of low-frequency vertical streaks (Figure 3.5). This type of noise arises from rough weather conditions during marine seismic recording, especially in shallow waters. A low-cut filter often removes the swell noise from shot records.

Fig 3.5 A CMP gather with predominant swell noise.

Multiple Reflections or Multiples:


Multiples are another type of coherent noise. Multiples are secondary reflections with interbed or intrabed ray paths. They propagate in both the sub-critical and super-critical regions. The reflection time of a multiple is the reflection time of the primary plus the time spent in the extra bounce or bounces. Multiples are attenuated by two classes of methods, one based on moveout discrimination and one based on prediction theory, which uses the periodic behaviour of multiples. The most effective is the moveout-based suppression technique, often applied in the CMP stack with an inside trace mute. Prediction theory is particularly effective in the slant-stack domain. Multiple reflections can be divided into two classes, 'long-path' and 'short-path' multiples.

Long-path multiples: A long-path multiple is one whose travel path is long compared with primary reflections from the same deep interfaces, and hence long-path multiples appear as separate events on a seismic record. The strongest long-path multiples involve reflection at the surface, the seafloor or (on land) the base of the low-velocity layer, where the reflection coefficient is very large because of the large acoustic-impedance contrast. Because this type of multiple involves at least two reflections at depth, its amplitude depends mainly on the magnitude of the reflection coefficients at depth, and multiples of this type will be observed as distinctive events when these coefficients are abnormally high. Weaker long-path multiples may be observable where primary energy is nearly absent at the time of arrival of the multiple energy. Because velocity generally increases with depth, multiples usually exhibit more normal moveout than primary reflections with the same travel time.

Short-path multiples: A short-path multiple arrives so soon after the associated primary reflection from the same deep interface that it interferes with it and adds a tail to the primary reflection; hence its effect is that of changing the wave shape rather than producing a separate event. These can be categorized into:

Peg-leg multiples
Short-path multiples that have been reflected successively from the top and base of thin reflectors on their
way to or from the principal reflecting interface with which they are associated. These peg-leg multiples
delay part of the energy and therefore lengthen the wavelet and effectively lower the signal frequency as
time increases.

Fig 3.6: Peg-leg multiple

Ghost: Some energy from the source will necessarily travel upward, strike the free surface and reflect back down. This is termed a ghost reflection. The direct downgoing pulse is thus quickly followed by an inverted copy of itself, and the reflection events we see will include the primary reflection and all combinations of ghost reflections.

Surface layer multiples: These occur when the multiple bounces from the free surface at the top of the earth model. Any such event will necessarily have a longer travel time than the primary reflection and thus exhibit greater moveout; they therefore appear to be associated with a lower average velocity than the primary.

Fig 3.7 showing different multiples


A. Water-bottom multiples of first and second order


B. Free surface multiples of first and second order
C. Peg- leg multiples of first and second order
D. Intrabed multiples of first and second order
E. Interbed multiples of first and second order

Reverberations:
Reverberation, also called singing or ringing, is frequently encountered in marine work. It is due to multiple reflections in the water layer. The large reflection coefficients at the top and bottom of this layer result in considerable energy being reflected back and forth repeatedly, causing reverberations in the record.

Attenuation of Coherent Linear Noise:


There are various popular techniques for attenuation of the different coherent noises of linear nature. Which of these is applied depends on the quality of the seismic data, the processing cost, the available software and the aims of processing.

Frequency Filtering:
Frequency filtering can be in the form of band-pass, band-reject, high-pass (low-cut), or low-pass (high-cut)
filters. All of these filters are based on the same principle — construction of a zero-phase wavelet with an
amplitude spectrum that meets one of the four specifications. Band-pass filtering is used most commonly,
because a seismic trace typically contains some low-frequency noise, such as ground roll, and some high-
frequency ambient noise. The usable seismic reflection energy usually is confined to a bandwidth of
approximately 10 to 70 Hz, with a dominant frequency around 30 Hz. Band-pass filtering is performed at
various stages in data processing. If necessary, it can be performed before deconvolution to suppress
remaining ground-roll energy and high-frequency ambient noise that otherwise would contaminate signal
autocorrelation. Narrow band-pass filtering may be necessary before crosscorrelating traces in a CMP
gather with a pilot trace for use in estimating residual statics shifts.
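As a minimal sketch of such band-pass filtering, assuming NumPy and SciPy are available (the 10-70 Hz corner frequencies follow the bandwidth quoted above; the trace and sample interval are hypothetical), a zero-phase Butterworth filter can be applied as follows:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(trace, dt, low=10.0, high=70.0, order=4):
        # Zero-phase Butterworth band-pass: filtfilt runs the filter forward
        # and backward, so no phase distortion is introduced.
        nyq = 0.5 / dt                                  # Nyquist frequency (Hz)
        b, a = butter(order, [low / nyq, high / nyq], btype="band")
        return filtfilt(b, a, trace)

    # Hypothetical trace: 6 s of data sampled at 2 ms.
    dt = 0.002
    trace = np.random.randn(3000)                       # stand-in for a recorded trace
    filtered = bandpass(trace, dt)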

Frequency-Wavenumber Filtering (f-k filtering):


A particularly useful tool in seismic data processing is the transformation from the time and space (T-X) domain to the frequency and wavenumber (F-K) domain. This is accomplished by applying a two-dimensional FFT; a two-dimensional inverse FFT is used to return from F-K to T-X. Figure 3.8 illustrates the T-X to F-K transformation by means of a schematic record. The direct arrival, first-break refraction and source-generated noise are linear in nature (i.e. show a single dip) in both T-X and F-K. Signal (reflections) is nonlinear and has many dips; thus, in the F-K domain signal tends to lie in an area near K = 0. The F-K transformed data are displayed in an area between +kN and -kN, and from 0 to fN, where kN is the Nyquist wavenumber and fN is the Nyquist frequency.

The lines representing the direct arrival, first break refraction, and source-generated noise are continued
from the right edge of the F-K plane to the left edge because they have been aliased. Note that signal is
crossed by noise in the T-X plane but they are separated in the F-K plane. This provides a way of
discriminating between signal and noise

BANARAS HINDU UNIVERSITY Page 27


2D Offshore Seismic Data Processing 2013

Fig 3.8 f-k filtering
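A minimal sketch of the f-k workflow just described, assuming NumPy (the reject velocity, gather size and sampling are hypothetical, and the reject zone is a simple fan that removes energy with apparent velocity below the chosen threshold):

    import numpy as np

    def fk_dip_filter(data, dt, dx, v_min=1500.0):
        # data: 2-D array (time samples x traces). Reject energy whose
        # apparent velocity |f/k| is below v_min (steeply dipping linear noise).
        nt, nx = data.shape
        fk = np.fft.fft2(data)                          # T-X -> F-K
        f = np.fft.fftfreq(nt, d=dt)[:, None]           # temporal frequencies (Hz)
        k = np.fft.fftfreq(nx, d=dx)[None, :]           # wavenumbers (1/m)
        keep = np.abs(f) >= v_min * np.abs(k)           # signal cone near k = 0
        return np.real(np.fft.ifft2(fk * keep))         # back to T-X

    # Hypothetical shot gather: 1500 samples at 2 ms, 120 traces at 12.5 m.
    gather = np.random.randn(1500, 120)
    filtered = fk_dip_filter(gather, dt=0.002, dx=12.5)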

Slant Stacking ( Linear Radon or τ-p Filtering):


A tau-p transform is also known as a slant stack or linear Radon transform. A linear Radon transform aims to decompose the input into plane waves.

The definition of the linear Radon transform is

m(px, τ) = ∫ s(x, t = τ + px·x) dx, with the integral taken over all offsets x from -∞ to ∞.

The above equation describes a mapping procedure where the data in the x-t domain are summed along straight lines with intercept τ and time dip px, where px is also called the horizontal ray parameter, given by sinθ/V. A very interesting geometric relation exists between events in the x-t domain and the linear Radon domain. A linear Radon transform causes a line in the x-t domain to be transformed into a point in the tau-p domain, where τ is the intercept of the line on the time axis and p is the slope of the line. This is true for a line of infinite extent; for a line of finite extent the mapping is smeared. A hyperbola in the x-t domain is transformed into an ellipse in the linear tau-p domain, and a horizontal line in the x-t domain maps onto the p = 0 line in the tau-p domain. After applying a linear moveout with a velocity corresponding to that of the linear noise, if we transform the data into the tau-p domain, most of the linear noise will be mapped at the zero value of p. So if we apply a narrow vertical gate around p = 0 and zero this data, our data will be free of linear noise.
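A direct, if slow, sketch of the slant-stack summation defined above, assuming NumPy (the offsets, p range and input gather are hypothetical, and nearest-sample interpolation is used for simplicity):

    import numpy as np

    def slant_stack(data, dt, offsets, p_values):
        # Brute-force tau-p transform: for each (p, tau) sum the data along
        # the line t = tau + p * x, using nearest-sample interpolation.
        nt, nx = data.shape
        taus = np.arange(nt) * dt
        out = np.zeros((len(p_values), nt))
        for ip, p in enumerate(p_values):
            for ix, x in enumerate(offsets):
                it = np.rint((taus + p * x) / dt).astype(int)
                ok = (it >= 0) & (it < nt)
                out[ip, ok] += data[it[ok], ix]
        return out

    # Hypothetical gather: 1000 samples at 4 ms, 60 traces at 100 m spacing;
    # the p range (in s/m) spans the dips of the linear noise.
    data = np.random.randn(1000, 60)
    offsets = np.arange(60) * 100.0
    p_values = np.linspace(-1e-3, 1e-3, 101)
    taup = slant_stack(data, 0.004, offsets, p_values)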


(a) A line in x-t domain transformed to (b) a point in τ-p domain.

A line in the x-t domain is mapped as a point in τ-p, and a hyperbola is mapped as an ellipse.

Fig 3.9 linear radon filtering

Attenuation of Multiples:
Multiple attenuation methods can be broadly classified into two categories:

• On the basis of move-out and dip discrimination


• On the basis of periodicity and predictability

Multiple attenuation by CMP stacking:


On application of the NMO correction using the primary velocity (VP), picked from the velocity spectrum in Fig 3.10(b), the primaries are aligned while the multiples are under-corrected, as in Fig 3.10(c). The NMO-corrected gather is then stacked; as the primaries are aligned they sum constructively, whereas the multiples, owing to their residual moveout, get suppressed.


Fig 3.10 (a) CMP gathers with strong multiples (b) velocity spectrum (c) gather after NMO correction using primary velocity (d)
CMP stack of gather in (c).

However, stacking is effective only at the far offsets, where the moveout discrimination is greater, and not at the near offsets, where the multiples have almost the same shape as the primaries and are hence aligned or overlapping with the primaries. To avoid this an inside mute may be applied, but this will seriously affect the data quality if the S/N ratio is already low. Another approach is to apply a weighted stacking procedure where smaller weights are assigned to near-offset traces and larger weights are assigned to far-offset traces.
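A minimal sketch of NMO correction and stacking of a single CMP gather, assuming NumPy (a single constant stacking velocity, hypothetical offsets and random data stand in for a real gather and a time-variant velocity function):

    import numpy as np

    def nmo_stack(gather, dt, offsets, v_nmo):
        # Flatten each trace with t(x) = sqrt(t0^2 + x^2 / v^2), then stack.
        # Primaries flattened by v_nmo add constructively; under-corrected
        # multiples are attenuated. A single constant velocity is used here
        # for simplicity instead of a time-variant function.
        nt, nx = gather.shape
        t0 = np.arange(nt) * dt
        corrected = np.zeros_like(gather)
        for ix, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v_nmo) ** 2)
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            corrected[ok, ix] = gather[it[ok], ix]
        return corrected.sum(axis=1) / nx                # the stacked trace

    # Hypothetical CMP gather: 1500 samples at 2 ms, 48 offsets, 2200 m/s.
    gather = np.random.randn(1500, 48)
    offsets = 100.0 + np.arange(48) * 62.5
    stack = nmo_stack(gather, 0.002, offsets, v_nmo=2200.0)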

Multiple attenuation by predictive deconvolution:


This method is a prediction and subtraction method. A filter can be designed to remove the repetitive patterns on seismic data under the assumption that the reflectivity series is random, which means that primary reflections do not appear in repetitive patterns but multiples do. The primaries are needed in order to predict the multiples from them. The more periodic the multiple, the better predictive deconvolution works. The periodicity assumption is valid if the ray path is vertical within the multiple-generating layers. If this assumption fails, predictive deconvolution will not work well; in that case we need to move to some other domain, such as tau-p, in which the multiples become periodic.


Predictive deconvolution basis:


While trying to solve the problem of multiples, our aim must be to remove the repetition and not to reshape the signature of the primary reflections. Consider the water-layer case with reflectivity of the water bottom r1 and reflectivity of the water/air interface r0. In this case, due to the high reflectivity, a downward-going wave will be reflected up by the water bottom and this upward-moving wave will be reflected down by the water/air interface, and the process will continue. This phenomenon is called water-layer reverberation, which is illustrated in Fig 3.11.

In the case of a horizontal water bottom and vertical incidence, the time period of the reverberation is given by Δt = 2Δz/c, where Δz is the water depth and c is the velocity in water. Note that at every bounce the amplitude of the reverberation changes sign and decays by a factor r1.

Fig 3.11: Water layer reverberations.

If the seismic trace is shifted by Δt, the primary will match its first-order reverberation and each multiple will match in time with the next-order reverberation. If at the same time the shifted version is scaled by a factor r1, all the shifted events will match in time and amplitude with the original reverberations. After adding the two, only the primaries will remain (Fig 3.12). The reverberation-free response p0(t) is obtained from the following equation:

p0(t) = p(t) + r1·p(t − Δt) = {δ(t) + r1·δ(t − Δt)} * p(t)
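Following the equation above, a small sketch in Python (assuming NumPy; the water depth, water velocity and reflectivity r1 are hypothetical example values) applies the two-term operator to a trace:

    import numpy as np

    def dereverberate(trace, dt, water_depth, r1, v_water=1500.0):
        # Convolve the trace with {delta(t) + r1 * delta(t - dT)}, where
        # dT = 2 * z / c is the two-way time in the water layer, following
        # the equation above.
        delay = 2.0 * water_depth / v_water
        n_shift = int(round(delay / dt))
        operator = np.zeros(n_shift + 1)
        operator[0] = 1.0
        operator[n_shift] = r1
        return np.convolve(trace, operator)[: trace.size]

    # Hypothetical values: 75 m water depth, water-bottom reflectivity 0.4.
    dt = 0.002
    trace = np.random.randn(3000)                        # stand-in recorded trace
    clean = dereverberate(trace, dt, water_depth=75.0, r1=0.4)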


Fig 3.12: Predictive deconvolution procedure under ideal conditions: the original series containing primaries and multiples; the same series shifted by Δt and scaled by -r1; and the sum of the two, which leaves only the primaries.

Surface Related Multiple Elimination (SRME) :


When looking at the ray path of a surface-related multiple, it can be observed that a first-order multiple can be thought of as consisting of two primary paths that are connected at the surface reflection point. Thus, it is possible to combine primary reflections that are already available in the data to construct first-order multiples. This is a data-driven multiple removal method that uses the reflections present in the pre-stack seismic data to construct surface-related multiples. In this way the need for explicit subsurface information is avoided. By the use of Kirchhoff summations the correct combination of events to construct multiples is obtained automatically.

Fig.3.13 : Surface Related Multiple Elimination.
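As a very crude, one-dimensional zero-offset sketch of this idea (assuming NumPy; real SRME combines traces by Kirchhoff summation over all surface points, not by simple trace autoconvolution), a first-order surface multiple can be predicted from the data itself:

    import numpy as np

    def predict_surface_multiples(trace):
        # Crude 1-D, zero-offset sketch: a first-order surface multiple is
        # predicted as the temporal autoconvolution of the data with itself,
        # with a polarity reversal from the free-surface reflection.
        return -np.convolve(trace, trace)[: trace.size]

    # Hypothetical spike trace: a water-bottom primary at sample 100.
    trace = np.zeros(600)
    trace[100] = 0.5
    predicted = predict_surface_multiples(trace)
    # 'predicted' has a spike near sample 200, where the first-order surface
    # multiple of that primary would arrive; an adaptive subtraction step
    # would then remove the matching energy from the recorded data.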


Shallow water demultiple (SWD)

The new attenuation method for shallow-water multiples, SWD, used by CGG VERITAS, is based on the concept of estimating shallow primary reflections from multiples. The idea can be illustrated by the diagram depicted in Figure 3.14. For a particular shot (S1) fired by the gun, the water-bottom reflection at the near offset, labelled by the path BC, is not recorded because of the gap between the source and the nearest receiver. The reflection will not be properly recorded at the farther offsets either, because it is in the post-critical range due to the shallow seafloor. Nevertheless, it is possible to recover the reflection BC because it is embedded in a multiple reflection (indicated by the path ABC) of another shot, S2. Therefore, by reconstructing the water-bottom reflection (or other shallow events) from the multiples in the recorded data, the problem faced by SRME can be resolved.

Fig 3.14 SRME


CHAPTER 4
MARINE SEISMIC DATA PROCESSING
Basic Processing Sequence

Flow Chart Of Seismic Data Processing Sequence

Fig 4.1 Processing flow chart: preprocessing (data loading, demultiplexing, reformatting, geometry merging, TAR, editing/filtering, field statics, deconvolution), followed by CDP sorting, brute stack and velocity analysis, and then migration by either a post-stack route (RMS velocity picking, NMO, residual static correction, DMO, velocity analysis, stack, migration) or a pre-stack route (NMO, PSTM migration, residual stack, PSTM stack).


Seismic data processing is a sequence of mathematical operations, which is carried out to extract the useful
information from the set of raw data recorded in the field. Data processing provides the interpreter a
seismic section, which should represent the geological section. The basic objectives of seismic data
processing are as follows:

• To enhance the signal to noise ratio.


• To generate a representative image of the sub-surface suitable for interpretation.
• To improve vertical and lateral resolution.
• To fulfil the requirement of client (interpreter).

Seismic data processing strategies and results are strongly affected by field acquisition parameters and
input data quality. It is composed basically of five types of corrections and adjustments:

• Time
• Amplitude
• Frequency-phase content
• Data compression (stacking)
• Data repositioning (migration)

These adjustments increase the signal to noise ratio, correct the data for various physical processes that
obscure the desired (geologic) information of the seismic data, and reduce the volume of data that the
geophysicist must analyze. Time adjustments fall into two categories: static and dynamic. Static time corrections shift a whole trace; the correction is constant over time. Dynamic time corrections are a function of both time and offset, and convert the times of reflections to coincide with those that would have been recorded at zero offset. Amplitude adjustments correct the amplitude decay with time due to
spherical divergence and energy dissipation in the earth. The frequency-phase content of the data is
manipulated to enhance signal and attenuate noise. In this regard a term `deconvolution’ is used
frequently. By applying deconvolution technique, a signal can be compressed, (spiking deconvolution) or
multiples can be attenuated (predictive deconvolution). Migration moves energy from its CMP position to
its proper spatial location. In the presence of dip, the CMP location is not the true subsurface location.
Migration collapses diffractions to foci, increases the visual spatial resolution, and corrects amplitudes for
geometric focusing effects and spatial smearing. Migration techniques have been developed for application
to pre-stack data, post-stack data or combination of both. The velocities of seismic waves in the earth can
be derived from seismic data or can be measured in wells, and they are used to convert the known
reflection times into estimated reflector depths. The recent advancements in digital time series have been
playing a crucial role since the last three decades in the seismic signal analysis and interpretation of
reflection data. The improvements in highly sophisticated data acquisition and processing techniques along
with the revolutionary advancement in computer technology are responsible for latest know-how and
present state-of-the-art for the exploration of oil and natural gas.

SEISMIC DATA PROCESSING CAN BE DIVIDED INTO -


• Pre-processing
• Main Processing


Pre-processing
The seismic signal received at detector is a continuous representation of ground motion. For digital
recording purpose the continuous analog signal must be sampled at discrete time intervals i.e. analog signal
should be digitized. The sampling interval depends on the resolution desired to achieve interpretation
objectives. Usually it ranges from 1 to 4 ms. Older systems did not have a separate analog-to-digital converter for each channel or enough writing capacity to save all the data for one shot. To solve this, the first pre-processing step performed is demultiplexing. This is done in the field itself.

1. De-multiplexing

All the values on separate channels are sampled for one time sample, after which all the values for the next time sample are sampled. The data are therefore not ordered by channel (e.g. channel 1, channel 2, etc.) but by time sample (e.g. time sample 1, all channels; time sample 2, all channels; etc.). Thus the data are recorded in the field in multiplexed form, i.e. in time-sequential form. For processing, the data should be in trace-sequential form, i.e. all samples of a single channel in consecutive order. The first step in pre-processing is to rearrange the data from time-sequential form to trace-sequential form, which is called demultiplexing.

Time (s)   Channel 1   Channel 2   Channel 3   Channel 4
0.000      A001        B001        C001        D001
0.001      A002        B002        C002        D002
0.002      A003        B003        C003        D003
0.003      A004        B004        C004        D004

Multiplexed order:

A001, B001, C001, D001

Demultiplexed order:

A001, A002, A003, A004
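In array terms, demultiplexing is simply a transpose of the recording matrix from (time sample x channel) order to (channel x time sample) order. A minimal sketch, assuming NumPy and using numeric stand-ins for the four-channel example above:

    import numpy as np

    # Multiplexed recording: one row per time sample, one column per channel,
    # mirroring the table above (column 1 = channel A, column 2 = channel B, ...).
    multiplexed = np.array([
        [101, 201, 301, 401],    # time sample 1: all channels
        [102, 202, 302, 402],    # time sample 2: all channels
        [103, 203, 303, 403],
        [104, 204, 304, 404],
    ])

    # Demultiplexing: reorder so that each row is one complete trace (channel).
    demultiplexed = multiplexed.T
    print(demultiplexed[0])      # first trace: [101 102 103 104]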

NOTE:

At present, data come from the field already in demultiplexed form. Data come in the SEG-D format, which is the standard recording format in the field. Field data also come in the SEG-Y format, which is an exchange format and can be read by most software. The typical sampling interval used in the field is 2 ms. The number of channels used in 2D recording is approximately 200, and that used in 3D is approximately 25,000. The record length of onshore data is generally 5 to 6 seconds, and that of offshore data is 5 to 12 seconds.


2. Reformatting:
Generally the data coming from the field may be in SEG-D or SEG-Y format. Whatever its initial format may be, we need to convert the data into a conventional form in accordance with the software. Hence we have to reformat the data from the tapes into the internal format required by the software with which we are working.

3. Data merging/field geometry generation

The field geometry has to be incorporated with the seismic data based on surveying information;
coordinates of different parameters are stored for each trace in trace header. The information stored in
each trace header,initially, is file number(FFID), trace number, sampling interval, and record length.
Information about shot and receiver locations is properly handled based on the information available in the SPS file or navigation file. SPS stands for Shell Processing Support, which is a type of text file containing information about the shot, charge size, uphole time, coordinates (location), static correction, file number of the shot, depth of shots, elevation, etc. Each SPS file set is a combination of three text files: the S file (for shots), the R file (for receivers) and the X file (the relation between shots and receivers). Each S file contains the information related to the shot, its depth, charge size, picket number, location, etc. Similarly, the R file contains information about the receiver, its location in terms of coordinates, picket number, elevation, etc. The X file describes the relation between the shot file and the receiver file.

Fig 4.2 Raw data from the field and SPS file.(Processing Tutorial ,Paradigm Software)


Fig 4.3 traces before and after geometry merging.

4. Trace editing

Trace editing is used to remove unwanted data from the given field record. During trace editing the traces are examined to detect, correct or reject those traces which are erroneously recorded, exceptionally noisy or which contain monofrequency signals such as power-transmission-line noise. Traces are edited for polarity reversals, instrument noise, cable noise, leakage, dead traces, etc. Muting is a form of editing which involves setting the amplitudes to zero in any undesirable part of a trace and leaving the rest as it is. It can be done to exclude direct arrivals, first breaks, etc.

Fig 4.4: Traces before editing, after editing, and the difference.


5. Amplitude Gain recovery:

A gain recovery function is applied to the data to correct for the amplitude loss due to wavefront spherical divergence. This applies a geometrical spreading correction which depends upon travel time, using an average primary velocity function. Usually an exponential gain function is used to compensate for attenuation losses. Seismic waves emitted by a source array travel to a subsurface reflection point and then back to the receiver array. Several phenomena influence the wave amplitude, including source directivity, geometric spreading, various forms of attenuation and receiver directivity. The observed amplitudes are a composite of these unwanted effects and the reflection coefficients. The reflection coefficients contain information about impedance contrasts in the subsurface, so we should preserve them.

For a homogeneous earth model, wave amplitudes decay by a factor of 1/r, where r is the radius of the wavefront. For a layered earth the amplitude decay can be described approximately by 1/[v²(t)·t], where t is the two-way travel time and v(t) is the root-mean-square velocity of the primary reflections. Therefore, the gain function for geometric spreading compensation is defined by

g(t) = [v²(t) · t] / [v0² · t0]

where v0 is the reference velocity at a specified time t0. Besides ambient noise, coherent noise in the data may also be boosted; by using the primary velocity function in correcting for geometric spreading, the amplitudes of the dispersive coherent noise and multiples are overcorrected. To prevent overcorrection of the amplitudes of multiple reflections, a velocity-independent scaling function such as

g(t) = t^α

where α usually is set to 2, can be used for the geometric spreading correction. This is known as t-squared scaling (Yilmaz, 2001). Absorption effects are corrected by an exponential function.

Fig 4.5 Gain recovery before and after
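The sketch below applies the t^α scaling described above (with α = 2) together with an optional exponential gain for absorption. The decay constant of the synthetic trace and the absorption value are illustrative assumptions only.

import numpy as np

def gain_recovery(trace, dt, alpha=2.0, absorption_db_per_s=0.0):
    """Apply t**alpha geometric-spreading compensation and an optional
    exponential gain for absorption to a single trace."""
    t = np.arange(trace.size) * dt
    g = t ** alpha                                     # t-squared scaling for alpha = 2
    g[0] = g[1] if trace.size > 1 else 1.0             # avoid multiplying the first sample by 0
    if absorption_db_per_s > 0.0:
        g *= 10.0 ** (absorption_db_per_s * t / 20.0)  # exponential gain in dB per second
    return trace * g

# toy example: a decaying noisy trace sampled at 4 ms
dt = 0.004
trace = np.exp(-np.arange(2000) * dt) * np.random.default_rng(1).normal(0, 1, 2000)
recovered = gain_recovery(trace, dt, alpha=2.0, absorption_db_per_s=6.0)
print(recovered[:5])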


Muting:

In reflection seismology, we wish to display only reflections and to exclude all other events and interferences. On a seismic reflection record some of the most prominent interferences are direct and refracted P-waves: compressional waves that have travelled directly from the source to the receiver, or that have been refracted and have travelled horizontally in a layer. Being direct or refraction events, they are linear rather than curved like reflections. The velocity of the layer in which they propagate horizontally can be determined by measuring the slope (Δd/Δt) of the event. The refracted waves must be eliminated from the seismic records before the traces can be stacked. This is done by muting the traces, i.e. by zeroing out those portions of each trace that contain refracted waves. Thus everything on the record except the reflections is surgically muted, and polarity reversals are corrected before the traces are stacked. This is known as muting.

Fig 4.6: Muting.

Sorting:

Initially, we get the recorded de-multiplexed data in shot order. Energy emanating from a source is recorded by a number of hydrophone groups, which in turn are fed to the field recording unit for digital recording. After geometry updating of these traces based on the header keys, the traces can be sorted into any required form. Sorting means collecting or grouping traces from different records which have some common characteristic. The traces can be grouped in one of the following ways (as displayed in the figure):

• Common Depth Point (CDP)/ Common Mid-Point (CMP) Gather


• Common Receiver Gather
• Common Shot Gather
• Common Offset Gather


Brute Stack

After CMP sorting, a horizontal summation of all the traces in each CMP gather is taken, which gives the brute stack section. If there is any error in the geometry of the array, it can be seen in the brute stack section. It also gives a rough idea about the primary and multiple reflections. The number of traces in each CMP gather is called the fold of the stack, and the brute stack section also shows the foldage at each CMP. A further use of the brute stack is in choosing test panels for deconvolution testing: the part of the brute stack section where the reflectors look continuous and the S/N ratio is high is used for the deconvolution test. This gives the decon parameters, which are then applied to the whole data set.

6. TRACE BALANCING:

To bring all the input data amplitudes into a specific range (necessary for display), amplitude scaling is done. A separate balance factor is computed for, and applied to, each trace individually. Nowadays, surface-consistent amplitude balancing is in use.
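A minimal sketch of trace-by-trace balancing, scaling each trace so that its RMS amplitude matches a target value; the target level is an arbitrary choice, and surface-consistent balancing (which decomposes the scalars into shot and receiver terms) is not attempted here.

import numpy as np

def balance_traces(gather, target_rms=1.0, eps=1e-12):
    """Scale each trace individually so its RMS amplitude equals target_rms."""
    rms = np.sqrt(np.mean(gather ** 2, axis=1, keepdims=True))
    return gather * (target_rms / (rms + eps))

gather = np.random.default_rng(2).normal(0, 5, (24, 500))
balanced = balance_traces(gather)
print(np.sqrt(np.mean(balanced ** 2, axis=1))[:4])   # ~1.0 for every trace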

MAIN PROCESSING:
There are three main steps in the processing of seismic data: deconvolution, stacking and migration, in the usual order of their application. All other processing steps are considered secondary in the sense that they help to improve the effectiveness of these three. Deconvolution acts along the time axis and removes the basic seismic wavelet from the seismic trace, thus increasing temporal resolution; it achieves this by compressing the basic seismic wavelet. Stacking is a process of compression: the data volume is reduced to the plane of midpoint versus time at zero offset, first by applying the normal moveout correction to the traces of each CMP gather and then by summing them along the offset axis, resulting in a stacked section. Migration is applied to the stacked data. It collapses diffractions and maps dipping events to their true subsurface locations. Migration can also be viewed as a spatial deconvolution process that improves spatial resolution.

The secondary steps assist the primary processing steps in the following ways:

• Dip filtering may need to be applied before deconvolution to remove coherent noise, so that the autocorrelation estimate is based on reflection energy that is free from such noise.
• Wide band-pass filtering may be needed to remove high- and low-frequency noise.


• Before deconvolution, correction for geometric spreading is necessary to compensate for the loss of amplitude caused by wavefront divergence.
• Velocity analysis, which is an essential part of stacking, is improved by multiple attenuation and residual statics corrections.

1. Deconvolution

Deconvolution is a process that improves the temporal resolution of seismic data by compressing the basic wavelet in the recorded seismogram and attenuating reverberations and short-period multiples. It thus yields a representation of subsurface reflectivity. Deconvolution acts as an inverse filter applied to the seismic trace to remove the filtering effect that the earth has imparted on the seismic wavelet, i.e. it reverses the effect of the earth filter.

Objectives of Deconvolution:

The objective of deconvolution is to design an inverse filter that compresses the seismic wavelet and produces an output trace with a predetermined phase (Yilmaz, 2008). In particular, deconvolution aims to:
(i) Compress the wavelet.
(ii) Attenuate ghosts, reverberations and multiple reflections.
(iii) Remove the filtering effect of the earth.
(iv) Stabilize the wavelet.
(v) Simplify interpretation.
(vi) Deduce the earth's reflectivity series.
To design a deconvolution operator we require the following information:
• We must know the initial wavelet shape before an inverse filter can be designed to shorten the wavelet.
• The position of the reflector must be known before a filter can be designed to shift the wavelet produced by it.

The basic assumptions made for deconvolution are as follows:


(i) The earth is made up of horizontal layers of constant velocity.
(ii) The source generates a compressional plane wave that impinges on layer boundaries at normal incidence.
(iii) The source waveform does not change as it travels in the subsurface, i.e. it is stationary.
(iv) The noise component n(t) is zero.
(v) The source waveform is known.
(vi) Reflectivity is a random series.
(vii) The seismic wavelet is minimum phase. (Yilmaz, 2008)

But in real earth situation we have:


(i) The subsurface does not necessarily consist of horizontal layers.
(ii) The source wavelet is not stationary.
(iii) The noise component is not zero.


(iv) The source wavelet is unknown.
(v) The spectrum of the earth's reflectivity is not flat.
(vi) The seismic wavelet is mixed phase.

Convolutional Model:

Fig. 4.8: The convolutional model.

The recorded seismic trace may be modeled as a series of interactions between the source signature (a finite, band-limited wavelet) and the earth. The convolutional model (Fig. 4.8) postulates that the basic wavelet is the superposition of several responses (the source wavelet, earth filter, ghosting, multiples, instrument response, etc.) which form a complex pulse; this pulse then convolves with the reflectivity function to give the actual seismogram.

A seismic trace x(t) is given by the convolution of the basic seismic wavelet w(t) with the reflectivity series r(t), plus random noise n(t):

x(t) = w(t) * r(t) + n(t)

Ignoring the noise term, the convolutional model (Fig. 4.8) says that the seismic trace x(t) is the convolution of the seismic wavelet w(t) with the earth's reflectivity series r(t):

x(t) = w(t) * r(t) -------- (time domain)

X(f) = W(f) · R(f) --------- (frequency domain)

R(f) = X(f) / W(f)

r(t) = x(t) * w(t)⁻¹ --------- (inverse filter)

Deconvolution (Mathematical Presentation):

The shape of the shot pulse is not known for land data because it is not recorded. Even if an accurate representation of the particle motion of the shot pulse were known at the source, it is so modified by travel through the weathered layer that it is a poor estimate of the source wavelet actually arriving at the receivers. The situation is further complicated by the superposition in time of copies of


the source wavelet due to multiples. Deconvolution is the procedure used to restore the source wavelet to an approximation of the shot pulse and to remove multiple copies of it from the trace. The former objective is termed spiking deconvolution and the latter is termed predictive deconvolution.
The objective of deconvolution is to remove the effect of the convolution of the basic wavelet with the reflectivity, so that the output seismic trace approaches the reflectivity series; in practice the aim is to arrive at a better estimate of the reflectivity function. In theory, we resolve the reflectivity r(t) from the equation given below.

x(t) = s(t) * e(t) * r(t) + n(t)

where x(t) is the seismic trace, s(t) the source signature, e(t) the system wavelet (propagation and recording effects), r(t) the reflectivity, and n(t) random noise; s(t) and e(t) together make up the seismic wavelet.

The basic seismic wavelet w(t) is thus made up of the convolution of the source signature with the propagation effects in the earth and the response of the recording system.

In the frequency domain:

X(f) = S(f) × E(f) × R(f) + N(f)

where X(f), S(f), E(f) and R(f) represent the amplitude spectra of the corresponding time functions (ignoring the phase for now). We can remove the effect of the S(f) × E(f) term in this equation by making it equal to one (or any constant value). The function which has a constant amplitude spectrum over all frequencies is a spike.

The deconvolution operator is an inverse filter:

In the time domain, deconvolution involves finding an inverse of the wavelet which, when convolved with the seismic trace, outputs the reflectivity series. The seismic wavelet is thereby converted to a spike (Fig. 4.9).

Fig. 4.9: Conversion of the actual seismic response to a spike after deconvolution.
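The following sketch builds a synthetic trace by convolving a hypothetical wavelet with a sparse reflectivity series and then recovers the reflectivity by frequency-domain division, stabilized with a small amount of white noise. The wavelet, the reflectivity and the stabilization level are made-up values chosen only to illustrate the equations above, not parameters from this survey.

import numpy as np

rng = np.random.default_rng(3)
nt = 256

# sparse reflectivity series r(t) and a decaying (roughly minimum-phase) wavelet w(t)
r = np.zeros(nt)
r[[40, 90, 150, 200]] = [1.0, -0.6, 0.8, -0.4]
t = np.arange(64) * 0.004
w = np.exp(-20 * t) * np.sin(2 * np.pi * 30 * t)

# forward convolutional model: x(t) = w(t) * r(t)
x = np.convolve(r, w)[:nt]

# naive deconvolution by spectral division, stabilized by white noise (pre-whitening)
W = np.fft.rfft(w, nt)
X = np.fft.rfft(x, nt)
prewhite = 0.01 * np.max(np.abs(W) ** 2)             # stabilization term
r_est = np.fft.irfft(X * np.conj(W) / (np.abs(W) ** 2 + prewhite), nt)

print("largest recovered spikes at samples:", np.argsort(np.abs(r_est))[-4:])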


Deconvolution Methods: Generally, deconvolution falls into one of the following two categories:

(I) Deterministic deconvolution:

Deconvolution where part of the seismic system is known; no random elements are involved. For example, where the source wavelet is accurately known we can perform source signature deconvolution. This is done when Vibroseis is used as the source.

(II) Statistical deconvolution:
Statistical deconvolution is performed on seismic data to increase temporal resolution and to attenuate peg-leg multiples. It is deconvolution where no information is available about any of the components of the convolutional model, so a statistical approach is needed to derive information about the wavelet (either the source, the system, or the combined wavelet). Statistical deconvolution is applied without prior application of deterministic deconvolution in the case of land data acquired with an explosive source.

Statistical deconvolution is a process in which we:

• have no prior knowledge of the wavelet;
• derive information about the wavelet from the data itself, specifically from the autocorrelation of the data;
• make certain assumptions about the data which justify the statistical approach;
• do not need to use it in conjunction with deterministic deconvolution.

Statistical deconvolution attempts to 'spike' the data and/or remove repetitive energy (e.g. multiples). 'Spiking' compresses the wavelet (by enhancing the frequency content) but will never output a true reflectivity series, mainly because of:

• limited bandwidth;
• invalid assumptions, e.g. the wavelet is not minimum phase, the noise is not zero, etc.

Statistical deconvolution can be of two types:

(A) Spiking Deconvolution.


(B) Predictive Deconvolution (Also ‘gap’ deconvolution).

(A) Spiking Deconvolution:


If the desired output is a zero-lag spike, then the deconvolution operation is called spiking deconvolution. Spiking deconvolution is a type of predictive deconvolution in which the operator predicts and removes energy starting within a sample or two of the zero-lag value of the autocorrelation. The effect of this type of filter is to convert the wavelet into as near a spike as possible. Spiking deconvolution produces an amplitude spectrum that is flat, or white, over the frequency range of the data. It can be shown that spiking deconvolution is strictly valid only for a minimum-phase wavelet; if the wavelet is not minimum phase, only a delayed spike can be achieved, and the solution for a non-delayed spike from a non-minimum-phase input is not stable.
For spiking deconvolution one can write

x(t) = w(t) * r(t)

We could use e(t) if deterministic deconvolution has already been applied.


We know that the ideal wavelet would be a spike: this would result in the seismic trace being a good representation of the reflectivity function. This is what spiking deconvolution attempts to achieve.
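A minimal sketch of Wiener-Levinson spiking deconvolution, assuming a minimum-phase wavelet: the operator is obtained by solving the Toeplitz normal equations built from the trace autocorrelation (here with scipy's Toeplitz solver), with a small percentage of white noise added to the zero lag. The wavelet, operator length and white-noise level are illustrative choices, not the parameters used for this data set.

import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, oplen=80, prewhite=0.01):
    """Design and apply a spiking (zero-lag Wiener) deconvolution operator."""
    full = np.correlate(trace, trace, mode="full")
    acf = full[trace.size - 1: trace.size - 1 + oplen].copy()   # one-sided autocorrelation
    acf[0] *= (1.0 + prewhite)                                   # add percentage white noise
    desired = np.zeros(oplen)
    desired[0] = 1.0                                             # desired output: zero-lag spike
    op = solve_toeplitz((acf, acf), desired)                     # Toeplitz normal equations
    return np.convolve(trace, op)[:trace.size]

# toy trace: decaying wavelet convolved with random reflectivity
rng = np.random.default_rng(4)
t = np.arange(64) * 0.004
wavelet = np.exp(-15 * t) * np.cos(2 * np.pi * 25 * t)
reflectivity = rng.normal(0, 1, 500)
trace = np.convolve(reflectivity, wavelet)[:500]
spiked = spiking_decon(trace, oplen=80, prewhite=0.01)
print(spiked[:5])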

(B) Predictive deconvolution:


Predictive deconvolution is used for the elimination of multiples, ringing and reverberations, which can be predicted by knowing the arrival times of the primaries from the same reflectors. The method uses information from the early part of the seismic trace to predict the multiples and reverberations and then removes them; hence the name predictive deconvolution. More formally, a deconvolution process in which the desired output is a time-advanced form of the input is called predictive deconvolution.

Predictive deconvolution is based on two important assumptions:

• The earth's reflectivity r(t) is a random series.
• The source wavelet w(t) is minimum phase.

From the first assumption it follows that the autocorrelation of the seismic trace represents the autocorrelation of the source wavelet. From the second assumption it follows that this autocorrelation can be used to define the shape of the wavelet, the necessary phase information coming from the minimum-phase assumption.
Autocorrelation: the autocorrelation is a zero-phase waveform with a maximum at zero lag. If a waveform is perfectly random, its autocorrelation is a spike. Statistical deconvolution filters (or operators) are most commonly derived from the autocorrelation of the input data using the Wiener-Levinson algorithm.

Autocorrelation analysis: we can delay the point on the wavelet at which the deconvolution operator begins to operate via the prediction 'lag' or 'gap'.

If the prediction gap (delay) is only one sample, we have spiking decon; in other words, spiking decon may be considered a special case of predictive decon in which the gap is one sample.

Parameters associated with Predictive Deconvolution:

(i) Design Window: The design window should include the target zone and omit any high-amplitude or noisy intervals. It is common to omit the seabed, any coherent noise, and the first multiple bounce from the design window, e.g. start the design window at 200 ms for a seabed at 80 ms. Longer design windows are statistically more valid than shorter ones (assuming they don't just contain noise). Generally a design window about 10 times the operator length should be chosen; this would usually be around 2 s for a 200 ms total operator (commonly used in the North Sea).

(ii) Operator Length: The deconvolution operator length is the sum of the prediction operator length (POL) and the prediction distance (PD). The operator length has the greatest effect on the degree of multiple suppression achieved by predictive deconvolution. Assuming that the dominant multiple period is that of the seabed multiple, operator lengths shorter than the water-bottom period (e.g. 100 ms) will generally only perform spectral whitening/wavelet compression, whereas longer operator lengths (e.g. the water-bottom period + 60 ms) will generally be effective at multiple suppression. Operators longer than this may start to deconvolve geology. Deconvolution will generally perform poorly on multiples with periods greater than about 300 ms.


(iii) Prediction Distance (PD) or Prediction Gap: This is the part of the wavelet that we want to preserve (the primary reflection). As indicated in Fig. 4.10, the prediction distance is the delay before the first multiple, or equivalently the period of the reverberations. The prediction distance controls the extent to which deconvolution can compress the seismic wavelet, and the choice of gap affects the resulting amplitude spectrum of the data: a shorter gap causes more wavelet compression (spectral whitening) and boosts any high- and low-frequency noise present. Deconvolved wavelets can have pulse breadths no shorter than the PD; thus, in general, the longer the prediction distance, the 'milder' the deconvolution. Spiking deconvolution is performed with a PD of one sample interval. As the PD approaches unity, more contraction, and consequently more high-frequency noise, is introduced. The PD should be chosen to give a good compromise between resolution and signal-to-noise (S/N) ratio in the output trace.

Fig. 4.10: Choosing a prediction length defined by the 2nd zero crossing of the autocorrelation provides fairly good preservation of the source wavelet while operating to remove the reverberations.

(iv) Percentage white noise: Compression of the seismic wavelet is also controlled by the percentage of white noise added during operator design; the larger the percentage, the less the compression. It is specified as a percentage of the total power in the signal, and increasing it decreases the effect of the deconvolution. The amount of white noise to add is generally in the range 0.1% to 1%. Adding white noise to the autocorrelation during operator design stabilizes the operator (it adds a small constant to the zero lag).

Too little white noise may cause the decon operator to become unstable and decrease the S/N ratio of the data, while too much white noise may decrease the effectiveness of the decon process and narrow the bandwidth of the data.


Fig. 4.11: Deconvolution helps distinguish prominent reflections with ease (b). However, on a section without
deconvolution (a), reflections are buried in reverberating energy. (Yilmaz, 2008).

All these predictive decon parameters are fixed by running decon panels and comparing them by trial and error. After deconvolution, one can clearly observe prominent reflections on the seismic section (Fig. 4.11), along with the change in the frequency spectrum over a selected window (Fig. 4.12).

(a) Before deconvolution (b) After deconvolution

Fig. 4.12: Frequency Spectrum.


(v) Decon panel: The operator length and the amount of pre-whitening are decided by trial and error, applying different operator lengths and pre-whitening values to a CDP gather. The values that yield the sharpest output are taken as the optimum, and deconvolution is applied to the data using these values.
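The sketch below shows one common way to set up gap (predictive) deconvolution: a prediction filter of length POL with prediction distance PD samples is designed from the autocorrelation via the Toeplitz normal equations, and the predicted (repetitive) part of the trace is subtracted. The gap, operator length and white-noise values are placeholders of the kind that would normally be picked from decon panels as described above.

import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(trace, pd=24, pol=80, prewhite=0.01):
    """Gap deconvolution: predict energy pd samples ahead with a pol-sample filter
    and subtract the prediction (Wiener prediction-error filtering)."""
    nlags = pol + pd
    full = np.correlate(trace, trace, mode="full")
    acf = full[trace.size - 1: trace.size - 1 + nlags].copy()
    acf[0] *= (1.0 + prewhite)                       # percentage white noise
    # normal equations: Toeplitz(acf[:pol]) * f = acf[pd : pd + pol]
    f = solve_toeplitz((acf[:pol], acf[:pol]), acf[pd:pd + pol])
    predicted = np.convolve(trace, f)[:trace.size]
    out = trace.copy()
    out[pd:] -= predicted[:-pd]                      # subtract the pd-sample-ahead prediction
    return out

# toy trace with a simple periodic multiple (period = 24 samples)
rng = np.random.default_rng(5)
primaries = rng.normal(0, 1, 600) * (rng.random(600) > 0.97)
trace = primaries.copy()
trace[24:] += -0.7 * primaries[:-24]                 # first-order multiple
deconvolved = predictive_decon(trace, pd=24, pol=80)
print(np.round(deconvolved[:5], 3))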

2 SORTING:

The recorded traces have some characteristics in common; hence, the traces can be grouped or gathered according to these characteristics.
The CDP gather is the most widely used. Here, the shots and receivers are moved in such a way that the same reflection point on the reflector is mapped more than once; it is thus a procedure for obtaining multifold reflection coverage. The point on the reflector is called the common depth point, while the shot and receiver locations for the traces in a CDP gather have a common midpoint on the surface, which lies vertically above the CDP for horizontal reflectors.
The traces having the same midpoint location are grouped together, making up a CMP gather. It is important to note that a CDP gather is equivalent to a CMP gather only when the reflectors are horizontal and the velocities do not vary laterally. When there are dipping reflectors in the subsurface the two gathers are not equivalent, and in such a case only the term CMP gather should be used.

Fig 4.13): Seismic data acquisition is done in shot-receiver (s, g) coordinates. The ray paths are associated with a planar
horizontal reflector from a shot point (indicated by the solid circles) to several receiver locations (indicated by the triangles). The
processing coordinates, midpoint-(half) offset, (y, h) are defined in terms of (s, g): y = (g + s)/2, h = (g − s)/2.
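A minimal sorting sketch using the midpoint and half-offset definitions in the caption above: trace headers are grouped by a CMP bin number computed from y = (s + g)/2. The bin size and the toy header list are assumptions for illustration only.

from collections import defaultdict

def sort_to_cmp(headers, bin_size=12.5):
    """Group trace headers into CMP gathers keyed by a midpoint bin index."""
    gathers = defaultdict(list)
    for h in headers:
        y = 0.5 * (h["SX"] + h["GX"])                        # midpoint  y = (s + g) / 2
        h = dict(h, CMP_Y=y, H=0.5 * (h["GX"] - h["SX"]))    # half-offset h = (g - s) / 2
        gathers[int(round(y / bin_size))].append(h)
    # sort traces within each gather by absolute offset
    return {k: sorted(v, key=lambda t: abs(t["H"])) for k, v in gathers.items()}

headers = [{"FFID": 1, "SX": 0.0,  "GX": 150.0},
           {"FFID": 1, "SX": 0.0,  "GX": 175.0},
           {"FFID": 2, "SX": 25.0, "GX": 125.0},
           {"FFID": 2, "SX": 25.0, "GX": 150.0}]
cmp_gathers = sort_to_cmp(headers)
for cmp_bin, traces in sorted(cmp_gathers.items()):
    print(cmp_bin, [t["FFID"] for t in traces])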

3 NMO CORRECTION:

NMO or normal moveout is the difference between the reflection arrival time at a geophone situated at a certain distance from the shot point and the arrival time at a geophone located at the shot point, i.e. at zero offset. As offset increases, the seismic wavelet takes more time to arrive at the geophone. This is not due to


any anomaly in the subsurface but to the additional distance travelled by the seismic wavelet, so a time correction has to be applied according to the offset.

For the simple case of a single horizontal layer, the travel time as a function of offset is

t²(x) = t²(0) + x²/v² ------------------ (eq. 5)

where x is the distance (offset) between the source and receiver positions, v is the velocity of the medium above the reflecting interface, and t(0) is twice the travel time along the vertical path.

The NMO correction is given by the difference between t(x) and t(0):

ΔtNMO = t(x) – t(0)

so ΔtNMO = t(0) {[1 + (x / (vNMO·t(0)))²]^0.5 – 1} ------------------ (eq. 6)

Fig 4.14: The simple geometry for NMO correction in a single layer.

Fig 4.15: NMO correction due to increasing offset.
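A minimal NMO-correction sketch implementing the relation above: for every zero-offset time t(0) the sample at t(x) = sqrt(t(0)² + x²/v²) is looked up on the input trace by linear interpolation. The constant velocity and the synthetic trace are assumptions made for the example.

import numpy as np

def nmo_correct(trace, dt, offset, v_nmo):
    """Map each zero-offset time t0 to t(x) = sqrt(t0**2 + (x/v)**2) and
    interpolate the input trace there (flattening hyperbolic reflections)."""
    t0 = np.arange(trace.size) * dt
    tx = np.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)      # eq. 5 rearranged for t(x)
    return np.interp(tx, t0, trace, left=0.0, right=0.0)

# toy example: a single reflection at t0 = 1.0 s recorded at 1500 m offset
dt, v = 0.004, 2000.0
offset = 1500.0
trace = np.zeros(1000)
t_reflect = np.sqrt(1.0 ** 2 + (offset / v) ** 2)       # arrival time at this offset
trace[int(round(t_reflect / dt))] = 1.0
corrected = nmo_correct(trace, dt, offset, v)
print("peak after NMO at t =", np.argmax(np.abs(corrected)) * dt, "s")   # close to 1.0 s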


NMO STRECTHING:

When the NMO correction is applied, the trace undergoes a nonlinear stretch known as NMO stretch. As a result, a frequency distortion occurs, particularly for shallow events and at large offsets. Because of the stretched waveforms at large offsets, stacking the NMO-corrected CMP gather would severely damage the shallow events. Muting the stretched zones in the gather circumvents this problem; the maximum permissible stretch is taken as 10%, and samples where more stretch is observed are muted. The stretch is quantified by

Δf / f = ΔtNMO / t (0) ------------------ (eq. 7)

Where f is the dominant frequency and Δf is the change in frequency.

Fig 4.16: a) CMP gather; b) NMO-corrected CMP gather; c) and d) muted using threshold stretch limits of 50% and 100%, respectively. (Yilmaz, 2001)
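The stretch mute can be expressed directly with eq. 7: the sketch below computes ΔtNMO/t(0) for each sample and zeroes samples whose stretch exceeds a chosen threshold. The 50% threshold and the single-offset example are illustrative values, not the ones used on this data.

import numpy as np

def stretch_mute(nmo_trace, dt, offset, v_nmo, max_stretch=0.5):
    """Zero NMO-corrected samples whose stretch dt_NMO / t(0) exceeds max_stretch (eq. 7)."""
    t0 = np.arange(nmo_trace.size) * dt
    t0_safe = np.where(t0 > 0, t0, dt)                       # avoid division by zero at t0 = 0
    tx = np.sqrt(t0_safe ** 2 + (offset / v_nmo) ** 2)
    stretch = (tx - t0_safe) / t0_safe                       # dt_NMO / t(0)
    out = nmo_trace.copy()
    out[stretch > max_stretch] = 0.0
    return out

trace = np.ones(1000)
muted = stretch_mute(trace, dt=0.004, offset=1500.0, v_nmo=2000.0, max_stretch=0.5)
print("first live sample at t =", np.argmax(muted > 0) * 0.004, "s")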

4) VELOCITY ANALYSIS:

Velocity analysis is the most important and sensitive part of the processing. It is an interactive tool used to interpret stacking, or normal moveout, velocities on 2D and 3D pre-stack seismic data. Velocity analysis is carried out on common midpoint gathers. Without velocities the seismic section cannot be converted to the depth domain, which is necessary for interpretation, and the NMO velocity is needed to apply the NMO correction. In practice, velocity analysis is performed on a CDP supergather formed from a group of neighbouring CDPs.

Methods of Velocity Analysis:

There are several methods of velocity analysis, such as constant velocity scans, constant velocity stacks (CVS), the velocity spectrum method and horizon velocity analysis. Of these, the velocity spectrum method is nowadays the most commonly used because it distinguishes signal along hyperbolic paths even with a high level of random noise; this is due to the power of cross-correlation in measuring coherency. The accuracy of the velocities obtained is nevertheless limited.


1. CONSTANT VELOCITY STACKS (CVS):

Here we obtain a reliable velocity function from the best stack of the signal. Stacking velocities are often estimated from data stacked with a range of constant velocities, on the basis of stacked event amplitude and continuity. A portion of the line of CMP gathers is NMO-corrected and stacked with a range of constant velocities, and the resulting constant-velocity CMP stacks are displayed as a panel. Stacking velocities are picked directly from the constant-velocity stack (CVS) panel by choosing the velocity that yields the best stack response at a selected event time. The CVS method is especially useful in areas with complex structure, since it allows the interpreter to directly choose the stack with the best possible event continuity. The constant-velocity stacks often contain many CMP traces and sometimes consist of an entire line.

2. VELOCITY SPECTRUM METHOD:

The velocity spectrum approach is different from the CVS method: it is based on the correlation of the traces in a CMP gather, not on the lateral continuity of stacked events. Compared with the CVS method it is more suitable for data with multiple-reflection problems, and less suitable for highly complex structural problems. Suppose we repeatedly NMO-correct the gather using constant velocity values from 2000 to 4300 m/s, then stack the gather and display the stacked traces side by side. The result is a display of velocity versus two-way time, called a velocity spectrum. There are two commonly used ways to display the velocity spectrum: the power plot and the contour plot.

Fig 4.17: a) CMP gather, and two ways of displaying the velocity spectrum computed from this gather: (b) gated raw plot, and (c) contour plot. (Yilmaz, 2001)
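A minimal velocity-spectrum sketch using semblance as the coherency measure: for each trial velocity and zero-offset time, NMO-corrected amplitudes are summed over offset inside a short time gate, and semblance = (sum of the stack)² / (N × sum of the squared amplitudes) is computed. The synthetic gather, velocity range and gate length are assumptions chosen just to show the mechanics.

import numpy as np

def semblance_spectrum(gather, offsets, dt, velocities, gate=5):
    """Return semblance(t0, v) for a CMP gather (ntraces x nsamples)."""
    ntr, ns = gather.shape
    t0 = np.arange(ns) * dt
    spec = np.zeros((ns, velocities.size))
    for iv, v in enumerate(velocities):
        # NMO-correct every trace for this trial velocity
        corr = np.vstack([np.interp(np.sqrt(t0**2 + (x / v)**2), t0, gather[i], left=0, right=0)
                          for i, x in enumerate(offsets)])
        num = np.convolve(np.sum(corr, axis=0) ** 2, np.ones(gate), mode="same")
        den = ntr * np.convolve(np.sum(corr ** 2, axis=0), np.ones(gate), mode="same")
        spec[:, iv] = num / (den + 1e-12)
    return spec

# synthetic CMP gather: one reflection at t0 = 1.0 s with v = 2500 m/s
dt, v_true = 0.004, 2500.0
offsets = np.arange(150.0, 3150.0, 150.0)
t0 = np.arange(750) * dt
gather = np.zeros((offsets.size, t0.size))
for i, x in enumerate(offsets):
    gather[i, int(np.sqrt(1.0**2 + (x / v_true)**2) / dt)] = 1.0

velocities = np.arange(1500.0, 4000.0, 50.0)
spec = semblance_spectrum(gather, offsets, dt, velocities)
it, iv = np.unravel_index(np.argmax(spec), spec.shape)
print("best pick: t0 =", it * dt, "s, v =", velocities[iv], "m/s")   # near 1.0 s and 2500 m/s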


3. HORIZON CONSISTENT VELOCITY ANALYSIS:

One method of estimating velocities with enough accuracy for structural and stratigraphic applications is to analyse the velocities along a selected horizon of interest continuously. Such a detailed velocity analysis is called horizon (or horizon-consistent) velocity analysis. The velocity is estimated at every CMP along the selected key horizon picked on the stacked section. The principle of estimating the velocities is the same as that of the velocity spectrum: the coherency values derived within hyperbolic time gates are displayed as a function of velocity and CMP position.
One application of horizon velocity analysis is to refine the lateral velocity variation along marker horizons, especially if these velocities are to be used in post-stack depth migration.

Fig 4.18: Velocity picking by semblance method (Processing Tutorial , Paradigm Software)

5) BULK STATIC CORRECTION:

In marine seismics all sources and receivers are effectively at the same datum, since on the ocean there is no topographic or elevation disturbance and the survey is referenced to mean sea level; bulk (elevation) statics are therefore usually small.

The elevation static correction accounts for the variable elevation of the sources and receivers by bringing them to a common datum. Residual static corrections then account for lateral velocity and thickness variations of the weathering layers. To estimate the residual statics, each trace of a CDP gather is cross-correlated with the stacked (pilot) trace; a sketch of this cross-correlation approach is given after the list below. Residual statics are not computed immediately after the elevation static correction because velocity information is needed first. Following the residual statics corrections, the velocity analyses are almost always repeated to update the velocity picks. Residual statics are applied on the CDP gathers.
There are two methods of estimating the residual static corrections:


• By travel time decomposition.


• By stack power maximization.
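A minimal sketch of the cross-correlation approach mentioned above: each trace of the (NMO-corrected) CDP gather is cross-correlated with the pilot (stacked) trace, and the lag of the correlation maximum, limited to a small search window, is taken as that trace's residual static shift. The maximum allowed shift and the toy gather are illustrative assumptions; a full surface-consistent decomposition into shot and receiver terms is not shown.

import numpy as np

def residual_statics(gather, max_shift=12):
    """Estimate per-trace static shifts (in samples) by cross-correlating each
    trace with the pilot (stacked) trace of the gather."""
    pilot = gather.mean(axis=0)
    shifts = []
    for trace in gather:
        cc = np.correlate(trace, pilot, mode="full")
        lags = np.arange(-trace.size + 1, trace.size)
        window = np.abs(lags) <= max_shift               # restrict search to +/- max_shift samples
        shifts.append(lags[window][np.argmax(cc[window])])
    return np.array(shifts)

# toy gather: identical traces with small random time shifts
rng = np.random.default_rng(6)
base = np.zeros(400)
base[[100, 220, 300]] = [1.0, -0.8, 0.6]
true_shifts = rng.integers(-5, 6, 12)
gather = np.vstack([np.roll(base, s) for s in true_shifts])
print("true:", true_shifts)
print("estimated:", residual_statics(gather))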

6) DIP MOVEOUT CORRECTION:

DMO, or dip moveout, is the difference in arrival time at two equal offsets due to dip. Dip causes reflection-point smearing, i.e. the movement of the reflection point up-dip for dipping reflectors. DMO is also called pre-stack partial migration, or migration to zero offset. After DMO correction, the events in the midpoint gathers behave as true zero-offset dipping events, and conflicting dips are resolved.
Post-stack migration is strictly acceptable only when the stacked data are equivalent to a zero-offset section. If there are conflicting dips with varying velocities, or a large lateral velocity gradient, a pre-stack partial migration is used to handle these conflicting dips; applying this technique before stack provides a better stacked section that can then be migrated after stack. Pre-stack partial migration only solves the problem of conflicting dips with different stacking velocities. Its applications are as follows:

1) Post-stack migration is acceptable when the stacked data are zero-offset; this is not the case for conflicting dips with varying velocities or large lateral velocity variations.

2) Pre-stack partial migration, or dip moveout, provides a better stack, which can be migrated after stack.

3) Pre-stack partial migration solves only the problem of conflicting dips with different stacking velocities.

DMO is a partial migration process: the flanks of the non-hyperbolic trajectories are moved up-dip just enough to make them look like zero-offset trajectories, which are hyperbolic. As a result, each common-offset section after NMO and DMO corrections is approximately equivalent to the zero-offset section.

Fig 4.19: Reflection point smearing due to a dipping reflector.

This smearing is detrimental to the stack output and hence has to be removed for a proper stack response. Moreover, the presence of dips also biases the velocities estimated from the CDP gathers. Hence a dip moveout (DMO) correction is applied on the gathers to remove the smearing and also the effect of dip on the seismic velocities. The stack generated from DMO-corrected gathers, produced after velocity analysis, is called a "DMO stack" or simply a "raw stack".


7) CMP STACKING:

Stacking is a process of compression: the amplitude values at corresponding times on all the traces of a gather are summed algebraically, the sum is divided by the number of traces in the stack, and the result is placed as the stacked amplitude (a minimal sketch of this is given after the list below).
The main advantages of stacking are as follows:
• It increases the signal-to-noise ratio, since during stacking the reflections add in phase while noise adds out of phase.
• It reduces the data volume and hence the amount of subsequent processing.
• CMP stacking helps to differentiate primary reflections from multiples, based on their moveout, since multiples have a greater NMO than the primaries at the same time on a record. Stacking does not eliminate multiples but attenuates (weakens) them.
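A minimal stacking sketch under the definition above: the NMO-corrected traces of each CMP gather are summed sample by sample and divided by the live fold at each sample. The synthetic gather is an arbitrary example.

import numpy as np

def cmp_stack(nmo_gather):
    """Stack an NMO-corrected CMP gather: sum over traces and divide by the
    number of live (non-zero) traces at each time sample."""
    fold = np.count_nonzero(nmo_gather, axis=0)          # live fold per sample
    summed = nmo_gather.sum(axis=0)
    return np.where(fold > 0, summed / np.maximum(fold, 1), 0.0)

rng = np.random.default_rng(7)
signal = np.sin(2 * np.pi * 25 * np.arange(500) * 0.004)
gather = signal + rng.normal(0, 1.0, (30, 500))          # same signal plus random noise on 30 traces
stacked = cmp_stack(gather)
print("noise std before:", np.std(gather[0] - signal).round(2),
      "after stacking:", np.std(stacked - signal).round(2))   # roughly a 1/sqrt(30) reduction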

Fig 4.20: Final CMP stack. (Yilmaz, 2001)
8) POST STACK PROCESSING:

The post stack processing sequence includes the following steps:

a) Deconvolution after stack (DAS) is usually applied to restore the high frequencies attenuated by CMP stacking; it is also often effective in suppressing reverberations and short-period multiples.

b) Time-variant band-pass filtering is then used to remove noise at the high- and low-frequency ends of the signal spectrum (a minimal filtering sketch is given after this list).

c) The basic processing sequence sometimes includes a step for attenuation of random noise that is uncorrelated from trace to trace.

d) Finally, some type of display gain is applied to the stacked data. Generally a slowly time-varying gain function is used, one that amplifies weak late reflections without destroying the trace-to-trace amplitude relationships that may be caused by subsurface reflectivity.
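A minimal sketch of time-variant band-pass filtering: the stacked trace is filtered with two different zero-phase Butterworth band-pass filters (via scipy) and the outputs are blended with a time-dependent ramp, so the shallow part keeps a wider band than the deep part. The corner frequencies and blend time are arbitrary illustrative choices, not the parameters used on this data.

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, low, high, dt, order=4):
    """Zero-phase Butterworth band-pass filter of a single trace."""
    nyq = 0.5 / dt
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, trace)

def time_variant_bandpass(trace, dt, shallow_band=(8, 80), deep_band=(8, 40), blend_t=2.0):
    """Blend a wide-band (shallow) and a narrower-band (deep) filtered version of the trace."""
    shallow = bandpass(trace, *shallow_band, dt)
    deep = bandpass(trace, *deep_band, dt)
    t = np.arange(trace.size) * dt
    w = np.clip(t / blend_t, 0.0, 1.0)        # 0 at t = 0, 1 beyond blend_t
    return (1.0 - w) * shallow + w * deep

dt = 0.004
trace = np.random.default_rng(8).normal(0, 1, 2000)
filtered = time_variant_bandpass(trace, dt)
print(filtered[:5])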


9) MIGRATION:

Oil traps are not the simple stratified media assumed in the previous data processing steps; the dip of the earth's layers cannot be ignored, and there are problems associated with dip. CMP stacks are not perfect because of geometrical effects such as dipping reflectors, synclines and diffractions. To remove the effect of dipping reflectors, migration of the seismic data is required.

Fig 4.21: Migration principle.

Migration is the process of moving dipping events to their true subsurface locations, thereby removing the effect of dip. It collapses diffractions and thereby improves the spatial resolution. Migration always moves events up-dip.
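To illustrate the principle, the sketch below performs a very simplified constant-velocity Kirchhoff-style time migration of a zero-offset section: each output sample collects input amplitudes along the diffraction hyperbola t(x) = sqrt(t0² + 4(x − x0)²/v²). Obliquity and amplitude weighting, anti-alias protection and variable velocity are all omitted, and the constant velocity is an assumed value; this is a toy demonstration, not the migration actually applied to the data.

import numpy as np

def kirchhoff_time_migration(section, dx, dt, v):
    """Toy constant-velocity Kirchhoff time migration of a zero-offset section
    (ntraces x nsamples): sum along diffraction hyperbolas, no amplitude weights."""
    ntr, ns = section.shape
    migrated = np.zeros_like(section)
    x = np.arange(ntr) * dx
    t0 = np.arange(ns) * dt
    for ix0 in range(ntr):                               # output trace position
        dist2 = (x - x[ix0]) ** 2
        for it0 in range(1, ns):                         # output time sample
            t = np.sqrt(t0[it0] ** 2 + 4.0 * dist2 / v ** 2)   # diffraction travel time
            it = np.rint(t / dt).astype(int)
            ok = it < ns
            migrated[ix0, it0] = section[np.where(ok)[0], it[ok]].sum()
    return migrated

# toy zero-offset section: a single point diffractor produces a hyperbola
dx, dt, v = 25.0, 0.004, 2000.0
ntr, ns = 101, 400
section = np.zeros((ntr, ns))
x = np.arange(ntr) * dx
t_diff = np.sqrt(1.0 ** 2 + 4.0 * (x - x[50]) ** 2 / v ** 2)    # diffractor below trace 50, t0 = 1 s
for i, t in enumerate(t_diff):
    if t / dt < ns:
        section[i, int(round(t / dt))] = 1.0

migrated = kirchhoff_time_migration(section, dx, dt, v)
ix, it = np.unravel_index(np.argmax(migrated), migrated.shape)
print("energy focused at trace", ix, "and time", it * dt, "s")   # near trace 50 and 1.0 s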

DIFFERENT TYPES OF MIGRATION:

The different types of migration, classified by the domain in which the migration operates and by the type of data on which it operates (stacked or unstacked), are as follows:

1. Pre-stack time migration
2. Pre-stack depth migration
3. Post-stack time migration
4. Post-stack depth migration

Fig 4.22: Different types of migration (2D/3D, pre-stack/post-stack, time/depth). (Processing Tutorial, Paradigm Software)


Fig 4.23 : Different types of migration applied under different velocities and different geological structures.

Hence migration is done to:

• shift dipping events to their proper positions;
• collapse diffractions;
• sort out crossing events (untie the 'bow-tie' effect);
• correct amplitudes for further processing;
• preserve wavelet amplitude, phase and frequency;
• improve spatial resolution;
• obtain more accurate velocities after pre-stack time migration;
• give proper AVO behaviour after pre-stack time migration;
• regularize the offsets (restore missing offsets).

Fig 4.24: An unmigrated seismic section in which diffractions can be seen prominently from 2300 ms to 2700 ms. (Yilmaz, 2001)


Chapter 5
Case study
Survey Details

We were provided with marine data of Andaman Basin. Basic survey details are listed below:
• Sampling Interval: 4 ms
• Recording Length: 8 sec
• Shot interval: 25 metres
• Group Interval: 25 metres
• Streamer Depth: 10 metres
• Minimum Offset: 150 metres
• Far offset: 8500 metres
• FFID range: 105540-106040
• CDP: 6200 - 7200

Processing Software Used

The processing software packages used at RCC, MBA, ONGC, Kolkata are:

Paradigm – FOCUS (preliminary processing)

GEODEPTH (advanced processing)

GeoDepth is a product of M/S Paradigm Inc. The 1990s witnessed an explosion in the use of 2D and 3D pre-stack time and depth migration for solving complex imaging problems. This explosion was made possible by advances in computing and workstation-based applications that allowed geoscientists to conveniently bring time and depth imaging technology into the velocity model building process. The challenge of uniting migration technology with the velocity model building process was led by GeoDepth, which emerged as a standard tool for improving the productivity of geoscientists tasked with solving depth-imaging problems in geologic basins around the world.

Hardware Used

The hardware used at RCC, MBA Basin, ONGC, Kolkata is:

Server – Sun Enterprise 4500 (4 CPUs), running Solaris
Server – SGI Altix 3700 (32 CPUs), main server

Total no. of workstations – 3

Total disk space – 10 TB on a SAN (Storage Area Network)


System memory – 32 × 4 = 128 GB RAM; hard disk capacity – 10 TB;

Redundancy – LINUX 05; clock speed – 1.5 GHz;

LTO drive – 200 GB (minimum);

EMC Symmetrix box – DMX 1000, 90 discs of 196 GB;

No. of printers – 3; no. of plotters – 2

All the systems and terminals were connected to the main server through the network.

SECTIONS OF DIFFERENT PROCESSING STEPS

Section 5.1: Raw Gather Data


Section 5.2: Muting and Filtering Workflow.

Section 5.3: Before and After application of TAR.


Section 5.4: After application of Deconvolution.

Section 5.5 : Effect of Band-Pass Filtering.


Section 5.6: Effect of f-k filtering on data.

Section 5.7 Spherical Divergence Correction before and after.


Section 5.8 Model Building

Section 5.9 Velocity Picking.


Section 5.10 Stack before and after Migration.


Conclusion

This dissertation demonstrates the processing of marine seismic data, the associated noise elimination techniques, and also the acquisition of marine seismic data.

The data used in this dissertation were taken from the MBA Basin. The figures attached here show the hands-on work applied to the input records using different processing modules.

In this dissertation work, I used different types of modules to eliminate different types of noise, such as water reverberations, cable noise, swell noise, mono-frequency noise, source-signature effects and ringing. All of these noisy events are harmful to the seismic data, so their elimination is necessary. High-amplitude spikes were removed using a despike module, mono-frequency noise was eliminated by band-pass filters, the source signature was removed by the designature module, the short-period multiples (ringing or singing effects) were attenuated by predictive deconvolution, and lastly, Surface-Related Multiple Elimination (SRME) removed the remaining reverberations associated with the marine seismic data. All of these techniques enhanced the signal-to-noise ratio, which is the prime objective of seismic data processing, and the final stack section shows good resolution and continuity of reflectors, giving a good image of the subsurface.


REFERENCES
1. Dobrin, M.B. & Savit, C.H. (1988), Introduction to Geophysical Prospecting, McGraw-Hill Inc., New York, U.S.A., pp. 115-141.
2. Sheriff, R.E. & Geldart, L.P. (1995), Exploration Seismology, second edition, Cambridge University Press, Cambridge.
3. Yilmaz, O. (2001), Seismic Data Analysis, Vols. I, II & III, Society of Exploration Geophysicists.
4. Kearey, P., Brooks, M. & Hill, I. (2002), An Introduction to Geophysical Exploration, third edition, Blackwell Science Ltd.
5. Storbakk, S., De-noising of Marine Seismic Data.
6. OGP / International Association of Geophysical Contractors, An Overview of Marine Seismic Operations, Report No. 448.
7. Hampson, D. (1987), The discrete Radon transform: a new tool for image enhancement and noise suppression, 57th Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 141-143.
8. Peacock, K.L. & Treitel, S. (1969), Predictive deconvolution: theory and practice, Geophysics, Vol. 34, No. 2, pp. 155-169.
9. Seismic Data Processing Tutorial of Robertson Research International.
10. Seismic Data Processing Manual of the FOCUS software.
11. Sheriff, R.E., Encyclopedic Dictionary of Exploration Geophysics, third edition, SEG publication.

