Friday, January 26, 2018

Seismic Structural Imaging

Ask a geophysicist for a simple definition of structural imaging and you might get an analogy drawn from optics, echoing the parallel between optics and acoustics. But in direct terms, structural imaging boils down to this: the branch of seismology in which processed seismic data undergo additional passes to create a large-scale picture of the subsurface. Its goal is to provide a map to locate traps and plan a drilling strategy for optimal drainage.

Structural imaging lays the foundation for other seismic techniques that investigate progressively smaller features. After structural imaging, seismic stratigraphy jumps to the next level of detail, characterizing the arrangement of layers within rock formations. Next, lithostratigraphic inversion attempts to describe the lithology of individual rock layers and to evaluate the properties and distribution of pore fluids, through analysis of the variation of seismic signal amplitude with the spacing between source and receiver, called offset. The quality of these finer-scale techniques rests largely on the quality of structural imaging.

Today, structural imaging is advancing on two fronts. One is improving image quality of conventional structures: their position and shape become known with greater accuracy. The other is the ability to image areas of more complex structure associated with large, rapid changes in velocity. Examples include low-velocity layers, structure below salt or gas, and multiply folded and faulted formations. Understanding these difficult settings promises to better quantify reserves in established fields and to help define new prospects that eluded more conventional approaches.

In all but the simplest geologic settings, imaging with seismic energy has three fundamental problems: the image starts out blurry, has the wrong shape and is in the wrong place. These problems are caused mainly by refraction, the bending of rays as they pass between rocks of different velocities, and by diffraction of seismic energy as it passes through rocks of varying velocity, shape and thickness. To make the image interpretable, seismic energy must be focused into a sharp, correctly shaped image, and the image moved to the correct lateral and vertical position. The sharper the image of a structure, and the truer its shape and position, the more accurately the structure can be evaluated and drilled. The method of sharpening, shaping and relocating images is loosely termed migration, often considered synonymous with structural imaging.

Mathematically, migration is performed by various solutions to the wave equation, which describes the passage of sound through rock. Called migration types, these numerous solutions, or algorithms, often take the name of their authors (Gazdag, Stolt) or the type of solution (finite difference, integral).
Migration types may be thought of as a family of tools, each with shortcomings and advantages. Choice of the optimal type is not always obvious and relies on the experience of the seismic practitioner.
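As an illustration of how one such solution works, the Gazdag phase-shift method downward-continues the recorded wavefield in the frequency-wavenumber domain, imaging each depth step at zero time. The sketch below is a minimal, constant-velocity version for a zero-offset section; the function name and parameters are illustrative, not taken from any production package.

```python
import numpy as np

def phase_shift_migrate(data, dt, dx, v, dz, nz):
    """Minimal Gazdag-style phase-shift migration of a zero-offset
    section data[time, x], assuming a single constant velocity v.
    Illustrative sketch only, not a production algorithm."""
    nt, nx = data.shape
    # Transform the section to the frequency-wavenumber (f-k) domain.
    P = np.fft.fft2(data)
    w = 2 * np.pi * np.fft.fftfreq(nt, d=dt)    # angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # horizontal wavenumber
    W, KX = np.meshgrid(w, kx, indexing="ij")
    # Vertical wavenumber from the dispersion relation; v/2 reflects
    # the exploding-reflector model for zero-offset data.
    arg = (W / (0.5 * v)) ** 2 - KX ** 2
    kz = np.sqrt(np.maximum(arg, 0.0))
    # Zero out evanescent energy (imaginary kz), keep propagating waves.
    shift = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)
    image = np.zeros((nz, nx))
    P_z = P.astype(complex)
    for iz in range(nz):
        P_z = P_z * shift                       # continue down one step
        # Imaging condition: evaluate the wavefield at t = 0 by summing
        # over all frequencies, then return to the space domain.
        image[iz] = np.real(np.fft.ifft(P_z.sum(axis=0)))
    return image
```

Real implementations extend this recursion to vertically varying velocity by changing v at each depth step, which is what makes phase shift attractive compared with Stolt's single-stretch constant-velocity method.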

Types of migration are applied within broader categories of migration called classes: poststack or prestack, two-dimensional or three-dimensional (2D or 3D), time or depth. These classes form eight possible combinations. The trend in imaging is from poststack to prestack, 2D to 3D and time to depth migration. This trend is best understood by examining the strengths and weaknesses of these classes.

Poststack migration, still the most common form, assumes the section built of stacked traces is equivalent to a zero-offset section, meaning each trace is made as if the source and receiver were coincident. Chief advantages of poststack migration derive from stacking: compression of data, removal of multiples and other noise, and fast, inexpensive processing. Poststack migration holds up even in fairly strong lateral velocity variation, but when stacking breaks down, prestack processing is required. A further limitation of stacked data is the loss of true amplitude information.

Prestack imaging is done on unstacked traces, taking 60 to 120 times longer than poststack imaging, but with the potential to retain amplitude variation with offset (AVO) and phase changes useful for later analysis. Prestack time migration is preferred when two or more events occur at the same time but with different stacking velocities. Prestack depth migration is advantageous when velocities in the overburden or the target are complex, but it requires large computer resources and remains rare. However, as massively parallel computers become more widely available, migration technology will shift toward prestack depth migration.

In 2D migration, only energy reflected in the plane of the section is correctly imaged, whereas 3D migration uses energy from both in and out of the plane of the section. In general, 3D migration will have higher resolution because it can move energy from outside the plane back to its correct position. The cost of higher resolution, however, is greater acquisition cost and longer processing time: what takes days in 2D might take weeks in 3D.

The choice of 2D or 3D migration is first determined by acquisition geometry. Data acquired with a 2D scheme, a single acquisition line with shots and receivers in a line, can only be 2D migrated, but 3D data can undergo 2D or 3D migration. An inexpensive, fast approximation to 3D migration is 2D migration in orthogonal directions, called two-pass 3D migration. It is strictly correct only for a constant-velocity earth, but errors are small if the vertical velocity gradient and dip angle are small.

Time and depth migration differ on several levels. In simplest terms, time migration locates reflectors in two-way travel time (from the surface to the reflector and back, as measured along the image ray), whereas depth migration locates reflectors in depth. A migrated seismic section with a time axis, however, is not necessarily a time migration. A depth migration may be converted to time. This is sometimes done to compare velocity modeling for depth migration with velocity assumptions used for time migration.

The significant difference between time and depth migration is the detail with which they view the behavior of sound in the earth. To time migration, the earth is simple in both structure and velocity; to depth migration, it may be complex. Time migration assumes negligible lateral variation in velocity and therefore hyperbolic moveout. Using only stacking velocity, or some approximation of it, time migration can make sharp, correctly shaped and positioned images, provided structure and velocity are generally simple. Time migration can handle some complexity of structure, but only limited variation in velocity.
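The hyperbolic moveout assumption can be stated in one line: for a flat reflector beneath a constant-velocity overburden, the two-way travel time at offset x is t(x) = sqrt(t0^2 + (x/v)^2), where t0 is the zero-offset time and v the stacking velocity. A small sketch, with illustrative names:

```python
import numpy as np

def hyperbolic_moveout(t0, offsets, v_stack):
    """Two-way travel time along a hyperbola: the moveout that time
    migration assumes when lateral velocity variation is negligible.
    t0 in seconds, offsets in meters, v_stack in meters per second."""
    x = np.asarray(offsets, dtype=float)
    return np.sqrt(t0 ** 2 + (x / v_stack) ** 2)

# Example: a reflector at t0 = 1.0 s under a 2000 m/s overburden
# arrives at about 1.118 s on a 1000 m offset trace.
```

When ray bending makes real arrival times depart from this hyperbola, the stacking-velocity picture breaks down and the macro-model approach of the following paragraphs takes over.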

When velocity and structure become obviously complex, rays are bent, producing nonhyperbolic arrival times and distorting the results of time migration. Reflectors become blurry, move to the wrong place or become too long. Accounting for this ray bending requires what is called a macro model, a model of velocities between reflectors. This is needed mainly to eliminate lateral positioning error caused by refraction, but also to sharpen the image. Construction, revision and verification of this macro model are the goals of depth migration and the main contributors to its difficulty and cost. Depth migration is also more sensitive than time migration to errors in velocity.



Picture above: Visualizing a North Sea salt diapir structure with 2D time migration, depth migration and depth migration output in time. Colors denote the interval velocity field determined prior to depth migration. The velocity model permits a more certain migration by including lateral variations. In the poststack time-migrated section, reflectors are poorly defined on both the flanks and base of the diapir. Definition improves in the prestack depth migration, performed with the velocity field shown. Finally, for comparison between the prestack depth and time migrations, the depth migration output is converted back to time, revealing a marked improvement in definition of the diapir flanks and base.

Although time migration handles only simple problems exactly, it remains the dominant technique in exploration, and usually lays the foundation for the depth migration macro model. In the production setting, some operators prefer to stretch time migration to the limit before jumping to depth migration. Still, depth migration has become a valuable tool because it is the only one that can handle the most difficult problem in imaging: strong, rapidly varying lateral changes in velocity. Depth migration also remains the focus of most imaging research.

When is depth migration needed? Or, to put it another way: When is the velocity field complex enough to mislead time migration? For many operators, a step before depth migration is to convert the time-migrated section to depth using image-ray depth conversion. This procedure uses an image ray, which is shot downward perpendicular to the surface and is bent at each interface by an amount predicted by Snell's law applied to the velocity model. The ray passes through the correct lateral position of an event, which in the time migration would appear vertically below the starting point of the ray. If the image ray strikes the reflector a considerable lateral distance from the starting point, velocity variation may be interpreted to be sufficient to warrant depth migration.
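A rough sketch of the geometry behind this test, for a stack of constant-velocity layers with dipping interfaces: the ray starts vertical, refracts at each interface by Snell's law, and the total lateral drift is the quantity an interpreter would inspect. All names and the layer description are illustrative assumptions, not a production ray tracer.

```python
import math

def image_ray_shift(layers):
    """Trace an image ray downward through constant-velocity layers.
    `layers` is a top-to-bottom list of (thickness_m, velocity_ms,
    dip_deg) tuples, where dip_deg is the dip of the interface at the
    base of each layer.  Returns the ray's total lateral displacement;
    a large value suggests depth migration may be warranted."""
    phi = 0.0   # ray angle from vertical (radians); image ray starts vertical
    x = 0.0     # accumulated lateral displacement
    for i, (dz, v1, dip_deg) in enumerate(layers):
        x += dz * math.tan(phi)              # drift while crossing the layer
        if i + 1 < len(layers):
            v2 = layers[i + 1][1]
            dip = math.radians(dip_deg)
            theta1 = phi - dip               # incidence angle vs. interface normal
            s = (v2 / v1) * math.sin(theta1) # Snell's law: sin(t2)/v2 = sin(t1)/v1
            if abs(s) >= 1.0:
                break                        # post-critical: stop tracing
            phi = dip + math.asin(s)         # refracted angle back to vertical frame
    return x
```

With flat interfaces the ray never bends and the displacement is zero; dipping interfaces between layers of contrasting velocity walk the ray sideways, which is exactly the lateral mispositioning that time migration cannot correct.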



In many areas, the decision to use depth migration arises when imaging the near-vertical flanks of salt domes. In this setting, BP tests the macro model with ray tracing. If the depth or lateral position of the image is not displaced far enough to affect well placement, then BP does not bother with depth migration. To preserve steep-dip events, BP is careful not to finish processing with a low-cut filter (5 to 40 Hz). In removing noise, the filter may inadvertently erase low-frequency, steep-dip events.

Selection of the appropriate type and class of migration is only half the story of imaging, however, and probably the less important half. The main concern in imaging is the engine that drives the depth migration algorithm: the velocity macro model.



The macro model is a numerical description of the subsurface on the scale of hundreds of meters. It contains either two-way travel time or depth to the main reflectors, and the velocities and densities between them. It describes the acoustic propagation characteristics of the subsurface and is used in depth migration to account for ray bending. In other words, the macro model functions as the air traffic controller for the migration algorithm. It tells the algorithm how far to move the reflector: up a little, down, to the left or right, or hold the line. Inadequate knowledge of velocity results in the image being under- or overmigrated, misplaced or blurred. Even the most advanced migration algorithm will fail to focus the image if directed by a flawed macro model.
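Conceptually, such a model can be as simple as a list of layers, each carrying the depth (or time) to its base plus the interval velocity and density within it. The sketch below is purely illustrative; the class and field names are assumptions, not the data structure of any real imaging package.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    """One layer of a velocity macro model: depth to its base plus
    the interval velocity and density within it (illustrative names)."""
    base_depth_m: float
    velocity_ms: float
    density_kgm3: float

@dataclass
class MacroModel:
    """A macro model as an ordered top-to-bottom list of layers."""
    layers: list

    def velocity_at(self, depth_m):
        """Interval velocity at a given depth (constant per layer)."""
        for layer in self.layers:
            if depth_m <= layer.base_depth_m:
                return layer.velocity_ms
        return self.layers[-1].velocity_ms   # below the model: extend last layer
```

A depth migration algorithm would query such a model at every step of wavefield continuation or ray tracing, which is why an error in any layer's velocity propagates into the whole image below it.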


Techniques of macro model building are controversial, proprietary and fast evolving. The macro model remains the weak link in depth migration, so attention today focuses on increasingly sophisticated ways to model, update and verify velocity for depth migration. The cost and computation time of these techniques increase with their capability to handle large, rapid changes in velocity.


A starting point is stacking velocity, obtained for conventional time processing. Stacking velocity is calculated from the difference in arrival time of the same reflection at different offsets, assuming the layers above the reflector have a constant velocity. Stacking velocity values are often inaccurate because they average over a large volume of rock, which often has nonuniform velocity. Stacking velocities may be constrained with data from sonic logs or previous seismic surveys. Still, stacking velocities may provide the best first macro model.
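Under the hyperbolic-moveout assumption t^2 = t0^2 + x^2/v^2, two offset-time picks on the same reflection determine the stacking velocity directly, since v = sqrt((x2^2 - x1^2)/(t2^2 - t1^2)). A minimal sketch with illustrative names:

```python
import math

def stacking_velocity(x1, t1, x2, t2):
    """Estimate stacking velocity from the arrival times (s) of one
    reflection at two offsets (m), assuming hyperbolic moveout:
    t^2 = t0^2 + x^2 / v^2  =>  v = sqrt((x2^2 - x1^2) / (t2^2 - t1^2))."""
    return math.sqrt((x2 ** 2 - x1 ** 2) / (t2 ** 2 - t1 ** 2))
```

In practice the velocity is picked by scanning many trial hyperbolas across a whole gather rather than from two traces, but the two-pick version shows why the estimate degrades when the overburden is not uniform: the hyperbola itself is then the wrong shape.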


A first-guess model of the earth is usually developed by picking main reflectors from a poststack time migration. Times are assigned to each reflector and velocities to the intervals between reflectors. A first-order approximation is to compute constant velocities between reflectors. Workstations now readily permit estimation of velocity gradients vertically between reflectors and sometimes horizontally along an event. The macro model may then be used to make a synthetic seismogram based on part of the model, which is iteratively modified until the synthetic matches the measured seismic data.
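One standard way to turn the stacking (approximately RMS) velocities picked down to successive reflectors into the constant interval velocity between them is Dix's equation. A minimal sketch, with illustrative names:

```python
import math

def dix_interval_velocity(v1, t1, v2, t2):
    """Dix's equation: the interval velocity between two reflectors,
    given the RMS (approximately stacking) velocities v1 and v2 down
    to them at zero-offset two-way times t1 < t2:
    v_int = sqrt((v2^2 * t2 - v1^2 * t1) / (t2 - t1))."""
    return math.sqrt((v2 ** 2 * t2 - v1 ** 2 * t1) / (t2 - t1))

# Example: stacking velocities of 2000 m/s at 1.0 s and 2200 m/s at
# 2.0 s imply an interval velocity of roughly 2383 m/s between them.
```

Because the formula differences two large, noisy quantities, small errors in the picked velocities produce large swings in the interval velocity, which is one reason the first-guess model is refined iteratively against synthetics rather than trusted outright.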
















