
Q: Why are formations managed separately?

Answer:
Formation definitions are intended to be reusable and shared between many models, across a project or
a basin. When you begin a new session, you will notice that there are some formations automatically
created for you. These formations are intended as examples. We encourage you to look at how they
are defined before you use them.

The formation named "Default" is your default choice, unless you select another. Rather than modify
default formations, we recommend you create new formations with distinct names. New formations
start with the same default values as the original "Default" formation. Likewise, rather than build
models with different versions of the same formation, we recommend you create a new formation for
each version. Distinct names make it much easier to remember and recover earlier behavior.

Any model saved to OpenWorks will remember all formations as they were defined when that model
was created. To reuse custom formations, go to the Formation Manager and click on the green
plus sign with the tooltip "Load formations from OpenWorks model." This will load more formations
into the Formation Manager, in addition to any already loaded. If the saved version of a formation is
different from the version already loaded into the Formation Manager, then you will be asked which
version you prefer to keep.

___________________________________________________________________________
Q: What are the files in the .hvm folder created for a saved hybrid model?
Answer:
In this directory, there are always three files with fixed names: field_sampling, regionfield, and
user_properties.
Rather than try to read these files directly, we encourage you to use the Velocity Modeling tool to
inspect them.
The field_sampling file is always present, and is human readable. The file contains a datum elevation for
which depth is zero, and it contains the minimum and maximum depths of interest. The file contains
model extents in CRS X and Y coordinates, using the database Project units. For X and Y, it also provides
a scale factor converting those X and Y distances to the same units as used for depth. This file also
contains a global minimum and maximum allowed vertical velocity and a maximum frequency in Hertz.
These three values allow various applications to place an upper or lower limit on sufficient sampling for
time/depth conversions and for seismic resolution.
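As an illustration of the standard Nyquist-style reasoning (the exact formulas used by each application are not documented here and are an assumption of this sketch), these values bound sampling roughly as follows:

```python
# Illustrative sketch only: one standard Nyquist-style way an application
# could turn the stored limits into sufficient sampling rates. The actual
# formulas used by OpenWorks applications are assumptions here.

def sampling_limits(v_min, f_max):
    """Upper bounds on sample spacing that still capture f_max.

    v_min : global minimum allowed vertical velocity (m/s)
    f_max : maximum frequency of interest (Hz)
    """
    dt = 1.0 / (2.0 * f_max)      # time sampling: Nyquist interval for f_max
    dz = v_min / (2.0 * f_max)    # depth sampling: half the shortest wavelength
    return dt, dz

dt, dz = sampling_limits(v_min=1500.0, f_max=60.0)
# dt ≈ 0.0083 s, dz = 12.5 m
```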

The user_properties file is an ASCII file of optional metadata, saved by whatever application created the
hybrid model. No fields are required to be present, and the contents are versioned and will vary from
release to release. Although ASCII, this file is not meant to be read directly.
The regionfield file is a compressed binary file that contains all data structures needed to reconstruct
the structural velocity model.
Instead of examining these files directly, you should look at the "Model Parameters Report" within the
Velocity Modeling application. Go to DecisionSpace Desktop -> Tools -> Velocity Modeling ... -> Load a
hybrid model from OpenWorks -> Select action: Load Model Properties. This table will give you a
readable summary of all information in these files.


__________________________________________________________________________________
Q: Why might I want to create a new seismic survey in OpenWorks?
Answer:
Under Tools -> Create New Seismic Survey, you can create a new Seismic 3D Survey table in
OpenWorks.
Seismic Surveys are a useful way to save and reuse a custom coordinate system. All 3D datasets in
OpenWorks are associated with a Seismic 3D Survey.
By convention the two spatial coordinates of a 3D survey are called "line" and "trace." Line and trace
are scaled, rotated, and regularly sampled from the CRS X and Y coordinates used by your project.

You may find new surveys useful for exporting sampled velocity models under Tools -> Volume
Exporter. That tool allows you to sample and export a velocity model to an OpenWorks brick file or to
shared memory. Brick files fully populate all lines and traces, so a custom survey will help you avoid
oversampling.
You may also find a new survey useful for importing ASCII seismic velocities under Construct Model ->
Seismic Velocity -> Velocity Source "ASCII File" -> Select... ASCII files are a popular way to exchange
data between different third-party systems. If your external ASCII file uses a custom coordinate system,
then you can map it to the CRS X and Y by specifying values at three corners.

You can initialize all fields in this dialog from an existing 3D Seismic Survey, if you intend only to modify
sampling intervals or bounds. Or you can initialize from the Model Extents of an existing Hybrid Velocity
Model, to ensure that your bounds include all features of interest. The project's CRS X and Y are always
shown in the session's active coordinate system.

__________________________________________________________________________________
Q: How do I initialize a velocity with an analytic V0-K function?
Answer:
In the Formation Manager, select the Vertical P velocity for the formation of interest. There is a
combo box for "Initial value source." Select "Analytic Function."

Many formations have a default constant in this field, but you can also use an analytic equation.
A common analytic function is a linear function of depth Z, in the form V0 + K*Z. (Slotnick, M. M., 1936,
"On seismic computation with applications": Geophysics, 1, 9--22.)

Let's say V0 is 1700 m/s and K is 0.5 /s. In that case you would enter the equation "1700 + 0.5*Z". The
depth Z is case-insensitive, and whitespace is ignored. Any illegal expression will be truncated when you
leave the text field.
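For reference, this linear function has a closed-form vertical traveltime, t(z) = (1/K)·ln(1 + K·z/V0). The short sketch below (illustrative only, not part of the application) checks that closed form against direct numerical integration, using the example values above:

```python
import math

# Sanity-check sketch (not part of the application): with Slotnick's
# linear function v(z) = V0 + K*z, the vertical one-way traveltime has
# the closed form t(z) = (1/K) * ln(1 + K*z/V0).

V0, K = 1700.0, 0.5    # the example values above (m/s and 1/s)

def velocity(z):
    return V0 + K * z

def one_way_time(z):
    return math.log(1.0 + K * z / V0) / K

# Verify the closed form against midpoint-rule integration of slowness.
n, z_max = 100_000, 2000.0
dz = z_max / n
t_numeric = sum(dz / velocity((i + 0.5) * dz) for i in range(n))
assert abs(t_numeric - one_way_time(z_max)) < 1e-6
```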

Immediately below is a scale factor that will be used to convert your expression from active session
units to project units. You cannot edit this field. This value will be 1 if your active units are the same as
your project units. This ensures that your expression can be defined in your current active units, but
later opened to build the same model in a session with different active units.

The definition of the depth Z depends on the conformance for your formation, defined on the Define
Formation tab of the Formation Manager. If the conformance is "None" then depth is measured from
the datum elevation, as usual. If you select "Top Down from Water Bottom," then Z is zero at the water
bottom, and increases below. If you select "Top Down," then depth is measured from the shallowest
boundary of the corresponding formation. The two remaining types of conformance, "Bottom Up" and
"Proportional between Top and Bottom," are unlikely to be useful. (For "Bottom Up," the depth is held
constant along the deepest boundary of the formation, at the value of that formation's deepest depth.
For "Proportional," the depth is held constant at both the top and bottom of the formation and
stretched in between.)

__________________________________________________________________________________

Q: How do I control when boundaries are used for conformance or for different lithologies?
Answer:
On the Define Structure pane, there is a checkbox "Combine formation regions" with the mouse help
"Combine layers/regions with the same formation to prevent them from updating independently."
It is checked by default, but models loaded from earlier releases will have this box unchecked to
preserve previous behavior. We encourage you to leave this box checked.

When this box is checked, all layers that share the same formation will be treated as a single continuous
layer. Within this layer, any internal boundaries will still be used for proportional conformance, which is
the default for most formations. If you use any other type of conformance, then these internal
boundaries will be ignored entirely. If the box is unchecked, then every layer will be updated
independently, and velocities will vary discontinuously at all boundaries. This option affects models from
horizons and from frameworks.
Layers should be assigned to different formations only when they belong to lithologies that vary
independently and discontinuously, such as limestone, salt, and water. Additional formations may be
more appropriate in "hard rock" areas where velocities are determined more by lithology than by
pressure.

When a structural boundary has a sharp jump due to faulting, then any velocities conformal to that
boundary will show the same jump. The default conformance of the formation can be changed to avoid
using this boundary. You may want to omit the horizon entirely as a structural boundary, if you do not
expect a sudden change in lithology. You can still use such horizons separately for tying surface picks.

__________________________________________________________________________________
Q: How can I force my velocity model to fit picks exactly?
Answer:
On the Well Surface Picks pane, we have added a new button "Ignore physical constraints to minimize
pick errors" with the mouse help "Make final unconstrained adjustment of velocities to fit remaining
pick errors. Ignores structure, conformance, resolution, and bounds."

This option will force an exact tie of all picks to their time horizons, with final residual adjustments to
interval velocities between those horizons. The velocity model is first optimized as usual by honoring all
formation constraints, such as velocity bounds, smoothness, and conformance. This model is used to
calculate remaining errors between time horizons and picked depths, measured from zero depth at the
datum. This error is converted into a residual velocity correction between each picked horizon. The
residual correction is interpolated spatially over X and Y using four nearest picks weighted by
their reciprocal distance. The final residual velocity field is overlain on the constrained velocity field by
summing slownesses. Any time structure in the original constrained velocity model is updated
accordingly, and the process is repeated.
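A minimal sketch of this residual step, assuming a plain four-nearest reciprocal-distance weighting and slowness summation (this is an illustration of the idea, not the shipped implementation):

```python
import math

# Sketch of the residual correction: residual slowness values known at
# pick locations are spread over X and Y by reciprocal-distance weighting
# of the four nearest picks, then combined with the constrained model by
# summing slownesses. Illustrative only, not the shipped implementation.

def idw_residual(x, y, picks, k=4):
    """picks: list of (px, py, residual_slowness) at pick locations.
    Returns the reciprocal-distance-weighted residual at (x, y)."""
    nearest = sorted(picks, key=lambda p: math.hypot(p[0] - x, p[1] - y))[:k]
    num = den = 0.0
    for px, py, ds in nearest:
        d = math.hypot(px - x, py - y)
        if d == 0.0:
            return ds               # exactly at a pick: honor it exactly
        w = 1.0 / d                 # reciprocal-distance weight
        num += w * ds
        den += w
    return num / den

def corrected_velocity(v_constrained, residual_slowness):
    """Overlay the residual field on the constrained field by summing slownesses."""
    return 1.0 / (1.0 / v_constrained + residual_slowness)
```

Because the reciprocal-distance weight diverges at a pick location, the correction there equals that pick's own residual, which is why an isolated bad pick produces an isolated velocity anomaly.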

With this correction, isolated bad picks will produce isolated velocity anomalies that should be easy to
spot with depth or time slices of the velocity model. This option is not checked on by default. We
encourage you to build a constrained model first, then look for your largest errors in the Model
Parameters Report, and in the Pick Details for the Well Surface Picks.

If your constraints still leave modest errors at good picks, then you may find this option
useful to guarantee that the velocity model will convert time horizons to depths that exactly tie your
wells.

__________________________________________________________________________________
Q: Why does Velocity Modeling not use the same interpolation methods as Depth Team Express?
Answer:
Depth Team Express models are a gridded cube of depths as a function of vertical traveltime. They are
constructed by the spatial interpolation of time-depth pairs from well data and seismic velocities.
Velocities are computed from these time-depth cubes by vertical differentiation.

These models have serious limitations. All times are measured along vertical paths and begin at the
same depth. They cannot model data with sources that are not directly above receivers. Vertical times
cannot model most checkshots, offset VSPs, horizontal wells, prestack moveouts, or any anisotropic
changes with angle.
Users are obliged to determine the extrapolation of time-depth curves above and below wells. Users
must correct for source offsets with crude cosine factors. Interpolation over time cannot take
advantage of structural constraints in depth, such as frameworks, salt bodies, or depth horizons.

Velocity Modeling uses a structural framework in depth and specifies velocities at every physical
coordinate. Time-depth curves are computed from velocities rather than the other way around.
Instead of interpolation, Velocity Modeling uses ray tracing and tomography.

But we can use these models for much more than vertical time-depth curves and stretching. Ray tracing
can use arbitrary source-receiver combinations and allow velocities to vary anisotropically with angles.
Finally we can use source offsets correctly for typical checkshots and offset VSPs. Structural frameworks
let us avoid the aliasing and excessive memory requirements of sampled grids. Velocities can be
constrained directly, rather than indirectly. Processing can share these models for prestack imaging, and
finally take advantage of well constraints.

__________________________________________________________________________________

Q: Why are models aligned north and south instead of oriented with a particular survey?
Answer:
Actually, they are not.

Hybrid structural velocity models are not tied to a single survey. They contain input data with different
alignments, depending on their source. You can use multiple horizons from different surveys. Each
horizon that originates from a single survey is retained at its original orientation to preserve resolution.
But you can also combine horizon data from 2D and multiple 3D surveys for a single horizon. In those
cases, we regrid with a best compromise.

Structural velocity models do have maximum model extents that are aligned north and south, but this
does not constrain the size, internal bounds, or orientation of the data within. Structural velocity
models are not regular grids,
but a structural framework that contains conformal grids and other basis functions with different
alignments and sampling. The model extent determines a bounding box beyond which velocities are
certain not to vary.

We do have an upgrade request (PS# 891219) to allow the user to adjust the bounds of what is
displayed of a velocity model in the desktop. This would not affect the actual construction of the
velocity model. Ideally, visible bounds would be a polygon, not a rotated rectangle.

You can sample and export a velocity model with the orientation of a specific survey, using the Tools ->
Model Exporter. To define a new survey, use Tools -> Create New Seismic Survey.

Hybrid models may be as large as a basin, with highly variable structural detail within, depending on
the data used to construct them. A hybrid model constructed entirely from 2D data may provide useful
time/depth conversions only on the surveyed lines, with no detail or interesting features elsewhere.

__________________________________________________________________________________
Q: Why might I want to change the default values for spatial and vertical smoothing?

Answer:
Within a single formation, you do not want your velocity model to have more detail than can be justified
by your data.

If a very smooth velocity model fits all your data well, then there is no reason to reduce smoothing. In
fact you may want to increase this smoothing instead. If your wells are at least 500 m apart, then there
is no reason to reduce spatial smoothing below 500 m. The data cannot provide any more detail.
If your formation is a massive limestone, you may want to increase the smoothness as a geologic
constraint. Similarly, if your velocities are dominated by pressure, you may want to impose more
smoothness as a physical constraint. Salt and water are obviously more extreme cases.

Similarly, you want your vertical smoothing to be guided by the density of checkshot sampling in your
Time-Depth Curves. If you are fitting picks only -- without any T-D curves or seismic velocities -- then
you may want to increase the default vertical smoothness significantly. On the other hand, if your
T-D curves were integrated from sonic logs, then you may want to reduce smoothness to preserve that
detail. If your T-D curves are detecting rapid changes that are due to sharp changes in lithology, then try
to add horizons to your Layer Stack so that the velocity model can fit this change with a discontinuity.

Model building uses a multi-scale optimization. This means it first updates the smoothest parts of the
velocity model, then iterates to fit remaining errors with less and less smoothing.

Reducing smoothness should have very little effect on your results if the current smoothness already fits
your data. On the other hand, reducing the smoothness will still increase the effort of building your
model -- increasing both the CPU runtime and the memory required.

For larger models, we recommend fitting your data once with very smooth formations, for a quick
preview. Look at the details of your well data to see if remaining depth errors are significant. Reduce
smoothness only in formations where these errors are significant.

You may also want to reduce smoothing if you are initializing a formation with external seismic velocities
-- particularly if these velocities came from tomography or full-waveform inversion.

Default smoothnesses are based on the minimum velocity of a formation and on seismic bandwidth.
(The default vertical smoothness is four spatial wavelengths of the maximum frequency traveling at the
slowest possible velocity. The default horizontal smoothness is eight times the vertical smoothness.
Both can be reduced by hand to a single minimum wavelength -- which is comparable to the resolution
of full-waveform inversion.)
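Taken together, and approximating the typical spatial wavelength as 2*vmin/fmax (the pairing of these formulas is an assumption of this sketch; the shipped defaults may differ in detail), the stated defaults amount to:

```python
# Reconstructed from the description above. The wavelength approximation
# 2*vmin/fmax and its pairing with the 4x/8x defaults are assumptions.

def default_smoothness(v_min, f_max):
    wavelength = 2.0 * v_min / f_max   # typical spatial wavelength (m)
    vertical = 4.0 * wavelength        # default vertical smoothness
    horizontal = 8.0 * vertical        # default horizontal smoothness
    return wavelength, vertical, horizontal

w, v, h = default_smoothness(v_min=1500.0, f_max=30.0)
# w = 100.0 m (also the hand-adjustable minimum), v = 400.0 m, h = 3200.0 m
```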

Formation smoothness is not equivalent to a sampling rate. Hybrid models contain structural
frameworks that allow discontinuities with no blurring or sampling artifacts. Formation smoothness
affects only the interior of layers. These layers use smooth conformal functions that are not simple
grids.

Resolution, specified as frequency content, is also not equivalent to accuracy. Resolution simply
determines the resolving power of your data. Traveltimes typically can be picked only to within a
quarter of a period. Most visibly, resolution determines the default smoothnesses
on each formation. More importantly, resolution sets a minimum value (lower bound) on this
smoothness anywhere. Resolution avoids unnecessary expense when sampling and integrating times
and depths. Asking for more resolution than necessary will only waste computational resources and not
improve the quality of the results.

__________________________________________________________________________________
Q: Why does my model have such large velocity changes at layer boundaries?
Answer:
First of all, make sure that your layers correspond to distinct formations with significantly different
lithologies and velocities. Do not add layers just for the conformal behavior. Velocities are allowed to
vary discontinuously at layer boundaries. Within a layer, velocities must vary much more smoothly,
according to the smoothness in the Formation Manager. As a result, large jumps may occur at formation
boundaries to fit abrupt changes in time-depth curves.

Even so, changes in velocities at layer boundaries may be larger than expected. The "interval velocity"
shown in the details for a time-depth curve is the difference in depths between two points divided by
the difference in times. If a layer boundary falls between these two points, then a larger adjustment
may be made to the velocity on one side of that boundary, to get the correct adjustment of the full
interval.
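A toy numeric example (invented values) shows why the per-side jump can be much larger than the overall interval velocity suggests: the two samples fix only the total time across the interval, so the split across the boundary is free to swing.

```python
# Toy example (invented numbers): two time-depth samples straddle a
# layer boundary. The samples fix the overall interval velocity, but
# the split across the boundary can swing much further on one side.

z1, t1 = 1000.0, 1.000      # upper sample: depth (m), one-way time (s)
z2, t2 = 1400.0, 1.200      # lower sample
v_interval = (z2 - z1) / (t2 - t1)            # overall: 2000 m/s

zb = 1300.0                  # boundary between the two samples
v_upper = 1800.0             # suppose the upper side is constrained slow
t_upper = (zb - z1) / v_upper                 # time spent above the boundary
v_lower = (z2 - zb) / ((t2 - t1) - t_upper)   # lower side absorbs the rest
# v_lower = 3000 m/s: a far larger jump than the 2000 m/s average suggests
```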

Try building your velocity model first with only your time-depth curves, without any layer boundaries or
picks. Check the time-depth curve details for depth errors that swing abruptly from positive to negative
values. These may be horizons at which you have a substantial lithology change that justifies a
discontinuous change in velocities. Add only those layer boundaries that do indeed improve the fit of
your time-depth curves. In some cases, you may be satisfied by merely reducing vertical smoothness in
the formation, so that velocities can change more rapidly without a discontinuity. After you are satisfied
with this model, then include any surface picks. These picks are fit simultaneously with the time-depth
curves, but picks are always fit alone in a final iteration. Picks are always considered harder data than
time-depth curves, when the two are in contradiction. In such a case, you may see a positive or negative
bias in the depth errors for time-depth curves, after you begin including surface picks. (If picks require
slower velocities than do time-depth curves, then estimated depths for curves will be consistently
too shallow.)

With or without curves, depth errors for surface picks should be positive or negative in equal numbers,
on average. If not, then the formation may be constrained with too fast a minimum velocity, or too slow
a maximum velocity. If nearby wells are showing large errors with opposite signs, then you may need to
reduce spatial smoothness for the overlying formation(s).

(Fitting only picks without wells should produce large intervals of mostly constant velocities between
picked surfaces.)

-------------------------------------------------------------------------------------
Q: Why do my time/depth curves have such large errors? Why are my velocities not as stratified as I
expect, even with just one time-depth curve?

Answer:
Look at the details of your time-depth curve. If the errors in estimated depth are significant, then you
may be over-constraining your velocities. When velocities cannot change sufficiently in the vertical
direction, then they will sometimes change unnecessarily in other directions.

If depth errors are consistently too shallow or too deep, then increase the range of allowed velocities,
particularly on the fast end. Even if you think your interval velocities should not go outside the range,
smoothness constraints may oblige them to overshoot.

If the depth errors are changing signs frequently, then you may need to reduce your vertical
smoothness. First increase to full resolution (60 Hz), if you have not already. If you still see large,
rapidly changing errors in your time/depth curves, then you may be attempting to resolve beyond
seismic wavelengths. This is likely if you have constructed your time/depth curve from a blocked sonic
log. If so, reduce the default values of vertical smoothness for the affected formations. When possible,
add horizons as layer boundaries where you expect a discontinuous change in velocity.

If you have wells very close together with incompatible time/depth curves, then you may need to
reduce spatial smoothness as well. If faults are involved, then consider adding those to your framework.

If your well borehole is not vertical, then horizontal changes may fit some data more easily than vertical
changes.

If your model includes multiple layers belonging to distinct formations, then you will have lateral
changes due to changes in the depth of the layer boundaries.

-------------------------------------------------------------------------------------
Q: How is resolution determined? Why do I need to know the maximum frequency?
Answer:
Note: This information has been incorporated into the online help topic, Relationship between
Frequency and Resolution. This topic is linked to the Geometry/Resolution help topic.
(Frequency_and_Resolution.htm)

First, notice that resolution is not the same thing as accuracy. A time/depth conversion can be accurate
to within a meter while resolution may be limited to a thousand meters. Resolution determines how
rapidly a velocity should change within a formation, not the precision of those values. It is useful to
distinguish continuous from discontinuous changes in velocities. We can model any discontinuities in a
velocity model with formation boundaries, using a structural framework. Velocities on each side of a
formation boundary can vary independently. The discontinuity itself retains perfect resolution and does
not require dense sampling. Seismic reflections help us resolve the locations of such
discontinuities.

Within a formation, we assume velocities are continuous and resolved only by the transmission of
waves. Transmission information is captured by picked traveltimes, moveout analysis, traveltime
tomography, migration velocity analysis, and waveform inversion.

Within a formation, seismic resolution ultimately depends on seismic wavelengths, the physical distance
over which a propagating wave completes a full cycle of displacement.

We assume ray-based methods of traveltime tomography resolve velocities to no better than four times
a typical spatial wavelength. More rapid changes violate the ray assumption and result in wave
phenomena that are not captured by picked traveltimes.

In the best case, waveform tomography resolves transmission velocities on the order of one spatial
wavelength. Any finer detail begins to show scattering and reflection.

The spatial wavelength depends on two other quantities, the seismic bandwidth measured in Hertz, and
the seismic velocity in the formation. The faster the velocity, the longer the spatial wavelength. A
typical spatial wavelength is approximated as 2*vmin/fmax, where vmin is the minimum velocity
in a formation and fmax is the maximum frequency.

The Formation Manager allows the user to constrain a minimum velocity for each formation. If this
value is not specified, then the default minimum velocity for the entire model, from the
Geometry/Resolution tab, is used.

The Geometry/Resolution tab also allows you to specify a global resolution for the model. A default
value assumes a maximum useful seismic frequency of 12 Hz. This value is deliberately
low, so that formations will be described with very smooth default behavior. Velocities can be updated
quickly, with a small memory footprint. We recommend this choice only for
previewing results.

A medium resolution, assuming a maximum frequency of 30 Hz, is more likely to be useful for most
models.

Full resolution, 60 Hz or higher, is recommended when formations are initialized with external datasets
containing tomographically or full-waveform inverted velocities. Ray tomography and checkshots do
not generally capture information at this resolution.
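Using the typical-wavelength approximation 2*vmin/fmax from above, with an assumed water-like minimum velocity of 1500 m/s (an example value, not a default from the application), the three presets imply roughly these typical spatial wavelengths:

```python
# Illustrative only: the 1500 m/s minimum velocity is an assumed example.
v_min = 1500.0
presets = {"preview": 12.0, "medium": 30.0, "full": 60.0}   # Hz
wavelengths = {name: 2.0 * v_min / f for name, f in presets.items()}
# {'preview': 250.0, 'medium': 100.0, 'full': 50.0}  (meters)
```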

We attempt to estimate reasonable default smoothness for each formation used in a velocity model, but
allow the user to revise as necessary. Default values are shown based on the current active model and
resolution. Vertical smoothness defaults to higher resolution than lateral directions. The user is still
allowed to reduce these values down to the minimum spatial wavelength.

Very smooth formation velocities may be appropriate if velocities are estimated only from sparse well
data. Dense wells do not necessarily require rapid changes in velocity, if their time picks are consistent.
Examine the details of a time/depth table, or formation top picks, to see if errors are larger than
expected. If nearby picks show both positive and negative errors, then velocities should probably be
allowed to change more rapidly.

-------------------------------------------------------------------------------------
Q: How are velocities calibrated between wells? How does this differ from DepthTeam Express?

Answer:
The short answer: DepthTeam Express interpolates depth errors as a function of vertical traveltime;
Velocity Modeling estimates a velocity model that fits all known time measurements. The approaches
are fundamentally different.

Velocity modeling describes a velocity model as a hybrid of different parameters. A structural
framework divides the model into disjoint compartments. Different compartments can belong
to different formations with different constraints and smoothness. Our velocity model is optimized with
a low-memory optimization algorithm similar to that used for traveltime tomography. Traveltimes are
integrated through the velocity model between known endpoints -- taken from the geometry of
formation picks and time/depth curves. Errors in these traveltimes are backprojected onto the
constrained velocity model to update and improve those velocities. When time horizons are included in
the framework, then these are adjusted in depth, as part of an iterative relinearization. (The full
algorithm is a multi-scale iterative Gauss-Newton optimization.)

In early iterations, when a starting model is expected to be far from optimum, the velocity model is
updated with much smoother perturbations. The first iteration makes only constant velocity
adjustments independently to each formation, to remove any overall bias of the model as too fast or too
slow. The next iteration uses perturbations that are smooth over the entire span of the model, in all
spatial directions, adjusting only very smooth regional trends. With each subsequent iteration, these
smoothing distances are reduced geometrically, until the final iterations are at the full resolution
allowed by each formation. If early smooth perturbations have successfully explained all remaining data
errors, then later iterations will have no effect. Detail is not introduced unless necessary to explain the
data.
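The geometric reduction of smoothing distances can be sketched as a simple schedule (the factor of two and the endpoints are illustrative assumptions, not the shipped algorithm):

```python
# Sketch of a geometric multi-scale schedule, as described above.
# The reduction factor and endpoints are assumptions for illustration.

def smoothing_schedule(model_span, formation_resolution, factor=2.0):
    """Smoothing lengths from coarsest to finest, reduced geometrically
    until the formation's full resolution is reached."""
    lengths, s = [], float(model_span)
    while s > formation_resolution:
        lengths.append(s)
        s /= factor
    lengths.append(float(formation_resolution))   # finish at full resolution
    return lengths

# e.g. a 40 km model span down to a 500 m formation smoothness:
# [40000.0, 20000.0, 10000.0, 5000.0, 2500.0, 1250.0, 625.0, 500.0]
```

If the coarse early lengths already explain the data, the later fine-scale passes simply change nothing, which matches the behavior described above.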

Because formation picks are generally more reliable than time-depth curves and checkshots, we
perform a final optimization using only formation picks. If the two varieties of data contradict each
other, then picks will prevail.

It is important that users provide useful constraints on the velocities of a formation when possible, to
avoid unphysical detail that might be introduced by spurious data. Minimum and maximum velocities
should make physical sense. A massive limestone should be given a greater degree of smoothness than
shale.

-------------------------------------------------------------------------------------




Q: What is the difference between how DepthTeam Express and Velocity Modeling fit well picks?

Answer:
In a sentence each:

DTE updates depth as a function of time by interpolating depth errors.
VM updates velocity as a function of depth by tomographically inverting time errors.

We can label these errors on a cartoon in depth, containing one well and one horizon converted from
time. The horizon needs to move to a nearby well pick.

DTE measures the vertical distance from the horizon to the pick -- that is the depth error. The
measurement is local.
VM measures the time from the surface to the pick and compares it to the desired time -- that is
the time error. The measurement is from the surface datum.

These measurements are used to update different models:

DTE models depth as a function of x,y, and time: z(x,y,t).
VM models interval velocity as a function of x,y, and depth: v(x,y,z).

What are the methods?

DTE uses interpolation of the depth errors. Corrections are local and fit each data point
independently.
VM uses tomographic inversion of times. This is a global iterative optimization, fitting all data
simultaneously.

Why are they not equivalent?

DTE computes interval velocities from interpolated depths. It is harder to avoid unreasonable
velocities.
VM only allows physical updates to interval velocities, with constraints. Physical velocities make
better predictions between and below wells.

Why the difference in algorithms?

DTE: depth errors are local and do not overlap. You can adjust each horizon independently.
VM: Times are measured from the surface and overlap. Velocity is a rock property with physical
constraints.

Do they handle time/depth curves differently?

DTE: T/d curves are interpolated first to depth tables, before interpolating depth errors from
picks. The two interpolations are independent and implemented differently.
VM: Each sample of a t/d curve measures a time to an expected depth, just like many picks. VM
first fits all data simultaneously, and then ensures that picks are fit best with a final update.
