Friday, November 2, 2018

Coiled Tubing Takes Center Stage

When it comes to coiled tubing, there can be few doubters left. What was once a fringe service has moved to center stage in the oilfield theater of operations. 

For many years, coiled tubing (CT) operations occupied the twilight zone of a fringe service offering niche solutions to specialized problems. However, over the past five years, technological developments, improved service reliability, gradually increasing tubing diameter and an ever growing need to drive down industry costs have combined to dramatically expand the uses of coiled tubing.

Today for example, coiled tubing drills slimhole wells, deploys reeled completions, logs high-angle boreholes and delivers sophisticated treatment fluid downhole. This article will look at the technical challenges presented by these services and discuss how they have been overcome in the field. 

Drilling Slimhole Wells

Slimhole wells - generally those with a final diameter of 5 inches or less - have the potential to deliver cost-effective solutions to many financial and environmental problems, cutting the amount of consumables needed to complete a well and producing less waste. Other benefits depend on what kind of rig drills the well. Compared to conventional rigs, purpose-designed smaller rotary rigs can deliver slimhole wells using fewer people on a much smaller drillsite, which cuts the cost of site preparation and significantly reduces the environmental impact of onshore drilling.

Coiled tubing drilling combines the virtues of a small rig with some unique operational advantages, including the capability to run the slim coiled tubing drillstring through existing completions to drill new sections below. There is also the opportunity to harness a coiled tubing unit's built-in well control equipment to improve safety when drilling potential high-pressure gas zones. This allows safe underbalanced drilling - when the well may flow during drilling.

Although there were attempts at CT drilling in the mid-1970s, technological advances were needed to make it viable. These include the development of larger diameter, high-strength, reliable tubing, and the introduction of smaller diameter positive displacement downhole motors, orienting tools, surveying systems and fixed cutter bits. Furthermore, currently available coiled tubing engineering software enables important parameters to be predicted, such as lock-up - when tubing buckling halts drilling progress - available weight on bit, expected pump pressure, wellbore hydraulics and wellbore cleaning capability.

Through-tubing reentry in underbalanced conditions is a category of CT drilling that may grow significantly. Reentering wells without pulling the production string is a cost-effective way of sidetracking or deepening existing wells.

The development of through-tubing, reentry underbalanced drilling is of great interest in the Prudhoe Bay field on the North Slope of Alaska, USA, where operator ARCO Alaska Inc. has an alliance with Dowell to develop coiled tubing technology. The alliance has already scored a number of technical and commercial successes. For example, a 600-ft horizontal section extended using underbalanced CT drilling produced at three times the predicted rate.

As with any mature operation, there is a need to extend field life and gain incremental reserves at a cost that reflects today's oil price. While the primary aim is to devise a strategy for low-cost well redevelopment, a secondary aim is to improve the productivity of horizontal wells by reducing formation damage associated with conventional overbalanced drilling.

Thursday, November 1, 2018

Inversion for Reservoir Characterization

Fundamental to reservoir characterization is assigning physical property values everywhere within the reservoir volume. The challenge of using all available data to choose the best assignment is being addressed by a group of scientists. Available data could include seismic data, log data, well test results, knowledge of the statistical distributions of the sizes and orientations of sedimentary bodies, and even specific information about reservoir geometry.

To incorporate all these diverse sources of information, the scientists use an inversion method that begins by considering all possible assignments. Each assignment is represented by a single point in a multidimensional space that has as many dimensions as there are cells in the reservoir model. In assigning acoustic impedance in a reservoir model comprising 10 x 10 x 10 discrete cells, for example, each assignment would be represented by a unique point in a 1000-dimensional space.

The available data are then used to determine which of these points are acceptable. This is achieved by representing each available data set - 3D surface seismic data, well data, or whatever - by a cloud of points corresponding to assignments that fit that particular data set. Finding an acceptable assignment then reduces to finding a point that lies at the intersection of all such clouds of points.

As the solution is always nonunique (more than one assignment satisfies all the available information), this intersection set will not be a single point but will have some volume in the multidimensional space. A procedure to choose a single, best assignment is therefore required. The current method starts with an initial guess and then modifies it as little as possible until the intersection set is reached.
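The modify-as-little-as-possible search for a point in the intersection set can be sketched with alternating orthogonal projections onto the constraint sets. The toy example below - a hypothetical three-cell model with made-up well and seismic constraints - illustrates only the idea, not the actual method:

```python
import numpy as np

# Toy model space: 3 cells, each holding an acoustic impedance value.
# Constraint A (a "well log"): cell 0 must equal 5.0.
# Constraint B (a "seismic" average): the mean of all cells must equal 4.0.
# Each constraint defines a set of acceptable assignments; the inversion
# seeks a point in the intersection, changing the initial guess as little
# as possible. Alternating orthogonal projections converge to such a point.

def project_well(m, value=5.0):
    m = m.copy()
    m[0] = value                      # force agreement with the well datum
    return m

def project_seismic(m, mean=4.0):
    # closest point (least squares) whose mean equals the target
    return m + (mean - m.mean())

m = np.array([3.0, 3.0, 3.0])         # initial guess, e.g. extrapolated log
for _ in range(50):
    m = project_seismic(project_well(m))
# m now satisfies both constraints while staying close to the start
```

With affine constraints like these, the iteration converges geometrically; with real, inconsistent data, a tolerance and a stopping rule would be needed.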

A synthetic example illustrates the method. First, a reservoir model is constructed with 21 x 21 horizontal cells and 201 vertical cells, and an acoustic impedance value is assigned to each cell. This synthetic model is equivalent to a volume of about 1 km x 1 km horizontally and 100 milliseconds of two-way time vertically. From this are generated two data sets that would be measured if the reservoir were real: first, a log of acoustic impedance in a well through the center of the model; second, the surface seismic response, which displays a lower spatial resolution than the original model.

The challenge is to reconstruct the original acoustic impedance model using the log and seismic data only. A reasonable starting model can be obtained from a simple extrapolation of the well log data. This clearly fails to reproduce structural variations away from the well that appear in the original model. However, modifying this first guess using the surface seismic data in addition produces a reconstruction that is much closer.

Monday, October 22, 2018

Obtaining Reservoir Engineering Parameters in Each Layer

Once the reservoir geometry has been defined, if not actually computed, one step remains before synthesizing the complete reservoir model. This is the estimation of key reservoir engineering parameters in each defined interval across the areal extent of the reservoir. Key parameters are net thickness, porosity, oil, gas and water saturations, and horizontal and vertical permeabilities. The computation proceeds in two stages. 

First, in each well the parameters must be averaged for each interval from the petrophysical interpretations.  This is performed in the component property module and relies on careful selection of cutoffs to exclude sections of formation that do not contribute to fluid movement. Choice of cutoffs is made with the help of sensitivity plots showing how the averaged parameter varies with cutoff value, and preferably in a well with well-test data to validate the cutoff choices. 
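As a concrete illustration, net thickness and net-average porosity for one interval follow directly once a cutoff is chosen. The values below are hypothetical, and a real workflow would study the sensitivity plot before fixing the cutoff:

```python
import numpy as np

# Hypothetical porosity interpretation over one interval,
# one value per half-foot sample.
porosity = np.array([0.02, 0.18, 0.21, 0.05, 0.24, 0.19, 0.03, 0.22])
sample_thickness_ft = 0.5
cutoff = 0.10                          # chosen from sensitivity plots

net = porosity >= cutoff               # samples assumed to contribute to flow
net_thickness = net.sum() * sample_thickness_ft
avg_porosity = porosity[net].mean()    # porosity averaged over net rock only

# a simple sensitivity table: how the averages move as the cutoff varies
for c in (0.05, 0.10, 0.15, 0.20):
    sel = porosity >= c
    print(c, sel.sum() * sample_thickness_ft, round(porosity[sel].mean(), 3))
```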

Second, the averaged parameters for each interval must be gridded or mapped across the reservoir. In the log property mapping module, the RM package brings into play powerful algorithms that use seismic data to guide the mapping. The key to the method is establishing a relationship at the wells between some attribute of the seismic data and a combination of the averaged well parameters, and then using the relationship to interpolate the averaged parameter everywhere in the reservoir. The seismic attribute could be amplitude, or acoustic impedance calculated earlier using the inversion module, or one of several attributes that are routinely calculated on seismic interpretation workstations and then imported to the RM system, or simply depth.

The relationship may be linear - that is, the combination of averaged parameters is defined as a simple weighted sum of seismic attributes - or nonlinear, in which an elaborate neural network approach juggles several linear relationships at the same time, picking the best one for a given input. Linear relationships easily handle smooth dependencies such as that between acoustic impedance and porosity. The nonlinear approach is required for averaged parameters, such as saturations, that may vary abruptly across a field.
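In the linear case, the mechanics reduce to a regression at the wells followed by application of the fitted relationship over the grid. A sketch with made-up attribute and porosity values (not the RM package's actual algorithm):

```python
import numpy as np

# Hypothetical calibration data: a seismic attribute (e.g. acoustic
# impedance) sampled at five well locations, with averaged porosity
# from the same wells.
attr_at_wells = np.array([5.1, 5.8, 6.4, 7.0, 7.9])
poro_at_wells = np.array([0.24, 0.21, 0.18, 0.15, 0.11])

# fit the linear relationship at the wells
slope, intercept = np.polyfit(attr_at_wells, poro_at_wells, 1)

# attribute known everywhere on the areal grid (a small 3 x 3 example)
attr_grid = np.array([[5.0, 6.0, 7.0],
                      [5.5, 6.5, 7.5],
                      [6.0, 7.0, 8.0]])

# apply the relationship to map porosity across the grid
poro_grid = slope * attr_grid + intercept
```

In practice the correlation coefficient of the fit would be checked before trusting the mapped values between wells.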

In practice, the log property mapping module guides the interpreter through the essential stages: choosing the interval to map, comparing seismic data at the well intersections with the averaged well data, establishing relationships that show a good degree of correlation and then proceeding with the mapping. The advantage of log property mapping over conventional mapping was demonstrated in both the Conoco Indonesia, Inc. and Pertamina Sumbagut case studies. Research continues into finding ways of using all available data to assist the mapping of log data across the reservoir.

Building the Reservoir Model and Estimating Reserves

The stage is set for the RM package's Model Builder module. This module fully characterizes the reservoir by integrating the geometric interpretation established with the correlation and section modeling modules, including definitions of reservoir tanks and fluid levels, with the reservoir engineering parameters established using the component property and log property mapping modules.

The main task is constructing the exact shape of the reservoir layers. This is achieved by starting at a bottom reference horizon and building up younger layers according to their assigned descriptors, mimicking the actual process of deposition and erosion. For example, if a layer top has been defined as sequential and conformable, it will be constructed roughly parallel to the layer's bottom horizon. If a reference horizon has been described as an unconformity, then underlying layers can approach it at any angle, while layers above can be constrained to track roughly parallel.

The areal bounds on layers are determined within the model builder module by several factors. First, specific geometries can be imported. Second, areal bounds may be implied through the geometries created with the section modeling module. Third, the contours of petrophysical parameters established during log property mapping can set areal limits. Fourth, thickness maps of layers can be interactively created and edited prior to model building.

The key dividend of model building is the establishment of reserve estimates for each tank. Oil in place, total pore volume, net-pay pore volume, water volume, reservoir bulk volume, net-pay area and net-pay bulk thickness are some of the parameters that can be calculated and tabulated on the workstation. Conoco Indonesia Inc.'s estimates using the RM package were in close agreement with standard calculation procedures. During appraisal, when the oil company decides whether to proceed to development, establishing reserve estimates is crucial. As a result, the many steps leading to this moment will be reexamined and almost certainly rerun to assess different assumptions about the reservoir.
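Oil in place for a single tank is, at heart, the standard volumetric calculation. The sketch below uses the familiar field-unit formula with hypothetical tank parameters; the RM package's internal computation over a gridded model is certainly more elaborate:

```python
# Volumetric oil-in-place for one tank, using the standard field-unit
# formula OOIP [STB] = 7758 * A[acres] * h[ft] * phi * (1 - Sw) / Bo,
# where 7758 converts acre-ft to reservoir barrels.
# All input values below are hypothetical.
area_acres = 640.0      # net-pay area
h_ft = 30.0             # net-pay bulk thickness
phi = 0.22              # porosity
sw = 0.30               # water saturation
bo = 1.2                # oil formation volume factor [RB/STB]

ooip_stb = 7758.0 * area_acres * h_ft * phi * (1.0 - sw) / bo
print(f"{ooip_stb:.3e} STB")   # about 1.9e7 STB for these inputs
```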

Say, for example, a geologist is working on correlating logs and creating geologic tops, while the geophysicist is preparing an inversion to obtain acoustic impedance. If both want to work concurrently, the version manager simply grows two branches. 

Similarly, a reservoir engineer may wish to try several scenarios for mapping the distribution of porosity within a layer - say, by mapping well log values only, and alternatively by using seismics to guide the mapping with the log property mapping module. Two versions can be made in parallel, with a branch for each scenario. Several further steps along each interpretation path may be necessary before it becomes clear which mapping technique is better.

Material Balance Analysis and Preparation for Simulation

For reservoir managers striving to improve the performance of developed fields - for example, investigating placement of new wells or reconfiguring existing producers and injectors to improve drainage - the RM package has two more modules to offer. One provides a sophisticated material balance analysis that assesses whether the established reservoir model is compatible with historical production data. The second converts the reservoir model into a format suitable for simulating reservoir behavior and predicting future production.

Material balance analysis is performed using the Formation reservoir test system module. In traditional material balance analysis, reservoir volume is estimated by noting how reservoir pressure decreases as fluids are produced. The more fluids produced, the greater the expected pressure decrease. Exactly how much depends on the compressibility of the fluids, which can be determined experimentally from downhole samples through pressure-volume-temperature (PVT) analysis; the compressibility of the rock, which can be determined from core samples in the lab; and, of course, reservoir volume. Faster declines in pressure than expected from such an analysis might indicate a smaller reservoir than first thought. Slower declines might indicate a high-volume aquifer driving production or, more rarely, connected and as yet undiscovered extensions to the reservoir. This traditional analysis of reservoir size and drive mechanism requires no a priori knowledge of reservoir geometry, only production, pressure and PVT data.
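For the simplest case - an undersaturated oil tank with no gas cap or aquifer - the traditional calculation can be sketched in a few lines. All values are hypothetical, and `ce` stands in for the combined fluid-plus-rock compressibility determined from PVT and core data:

```python
# Minimal material balance for an undersaturated oil tank with no gas cap
# or aquifer: the produced reservoir volume equals the expansion of the
# oil and its pore system, F = N * Boi * ce * dp. Rearranged, observed
# production and pressure decline yield the tank size N.
np_stb = 2.0e6        # cumulative oil produced [STB]
bo = 1.25             # current oil formation volume factor [RB/STB]
boi = 1.22            # initial oil formation volume factor [RB/STB]
ce = 15e-6            # effective (fluid + rock) compressibility [1/psi]
dp = 800.0            # observed pressure drop [psi]

n_stb = (np_stb * bo) / (boi * ce * dp)   # oil initially in place [STB]
print(f"{n_stb:.3e} STB in place")
```

A measured decline faster than this balance predicts would point to a smaller tank; a slower decline would point to aquifer support or unseen connected volume, as described above.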

The module uses these basic principles of material balance, but applies them within the geometrically defined reservoir tanks of the established reservoir model. This allows not only verification of tank volumes, but also estimation of fluid communication between tanks. Communication between tanks could be due to an intervening low-permeability bed or a fault that is only partially sealing. Another result is the prediction of how fluid contacts are moving.

Sunday, October 14, 2018

Correlating Seismic and Well Data

Correlation is performed in several stages. The first is establishing geologic tops on each well using the detailed correlation module. With individual well data displayed for up to four wells simultaneously, the interpreter can correlate horizons from one well to the next, registering consistent geologic tops in every well across the field. All well data have the potential to aid in this process, with core information, petrophysical log interpretations, wireline testing results and production logs equally able to contribute to identifying significant geologic horizons.

The next step signals the beginning of the merging of seismic and well data. In the well tie module, the 3D seismic trace at a given well is displayed versus two-way time alongside all pertinent well data, which have already been converted to time using borehole seismic or check-shot data. The main purpose of this combined display is to tie events recognized on the seismic trace - seismic horizons - to the recently established geologic tops found from the well data. These ties, or links, between the two data types are crucial at several subsequent stages during the construction of the reservoir model. In addition, seismic markers found at this stage can be transferred to a seismic interpretation workstation for horizon tracking.

The first use of the tie, or link, between seismic and well data is in the velocity mapping module that enables the 3D seismic record versus time to be converted to a record versus depth. This crucial step subsequently allows the seismic data to guide the mapping of geologic horizons between wells. 

A velocity map for each layer is first assessed from the stacking velocities used in the 3D seismic processing. These are average velocities to the depth in question and must be converted to interval velocities using Dix's formula. The interpreter then maps these velocities for a given horizon, using one of four available algorithms including the sophisticated kriging technique, and reviews their appearance in plan view. Gradual changes in velocity are normal, but anomalies such as bull's-eye effects - isolated highs or lows - that are geologically unacceptable can be edited out.
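Dix's formula itself is simple enough to sketch. The picks below are hypothetical; vrms[0] pairs with t[0] = 0 at the surface, so the first computed interval runs from the surface down:

```python
import numpy as np

# Dix's formula converts stacking (RMS) velocities picked at successive
# two-way times into interval velocities:
#   Vint_n^2 = (Vrms_n^2 * t_n - Vrms_(n-1)^2 * t_(n-1)) / (t_n - t_(n-1))
t = np.array([0.0, 0.8, 1.4, 2.0])                  # two-way time [s]
vrms = np.array([1500.0, 1800.0, 2100.0, 2400.0])   # RMS velocity [m/s]

vint = np.sqrt((vrms[1:] ** 2 * t[1:] - vrms[:-1] ** 2 * t[:-1])
               / (t[1:] - t[:-1]))
# vint[k] is the interval velocity between picks k and k+1
```

Note that interval velocities always exceed the RMS velocity at the top of their interval when velocity increases with depth, which is one quick sanity check on the picks.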

Next, values of velocity at the intersections of horizons with wells are compared with velocity values obtained from acoustic log or borehole seismic data. The differences, determined in all wells, are also mapped and then used to correct the original velocity map. Finally, the corrected velocity map is used to convert the 3D seismic record to depth. To check the result, structural dip azimuth as estimated from dipmeter logs can be superimposed on the resulting map - structural dip azimuth should follow the line of greatest slope as indicated by the map.

With seismic data converted to depth, the interpreter can begin building a stratified model of the reservoir using the correlation module. First, seismic data acting as a guide allow geologic tops in one well to be firmly correlated with tops in adjacent wells. This display may be further enhanced by superimposing dipmeter stick plots and other forms of dipmeter interpretation along the well trajectories. Another display that shifts data to an arbitrary datum, generally an already correlated horizon, provides a stratigraphic perspective. Second, each geologic correlation is allocated descriptors that determine how it relates geometrically to its neighbors above and below. These descriptors are later used to build up the actual reservoir model. Third, all available information about reservoir compartmentalization - for example, saturation interpretations from well logs and wireline testing results - is used to identify flow barriers, such as a sealing fault, so the reservoir can be divided into a set of isolated volumes called tanks, essential for correctly estimating reserves.

Sometimes, the interpreter may want to manually dictate the geometry of a horizon or other feature - such as a fault, bar or channel - rather than let it be guided by established horizons on the 3D seismic data. This can be accomplished using the section modeling module, which offers an array of graphic tools to create and edit elements of the reservoir model in the vertical section. This labor-intensive manual creation of a reservoir model becomes mandatory when there are no seismic data or only sparse 2D data.

One source of data that may contribute to the definition of tanks and faults is the well test. Well tests give an approximation of tank size and, in particular, provide distance estimates from the well to sealing faults. Azimuth to the fault, however, is undetermined.

Thursday, October 11, 2018

Integrated Reservoir Interpretation

Every field is unique, and not just in its geology. Size, geographical location, production history, available data, the field's role in overall company strategy, the nature of its hydrocarbon - all these factors determine how reservoir engineers attempt to maximize production potential. Perhaps the only commonality is that decisions are ultimately based on the best interpretation of data. For that task, there is a variety of approaches to match the fields being interpreted.

In an oil company with separate geological, geophysical and reservoir engineering departments, the interpretation of data tends to be sequential. Each discipline contributes and then hands over to the next discipline. At the end of the line, the reservoir engineer attempts to reconcile the cumulative understanding of the reservoir with its actual behavior. Isolated from the geologist and geophysicist who have already made their contributions, the reservoir engineer can play with parameters such as porosity, saturation and permeability, but is usually barred, because of practical difficulties, from adjusting the underlying reservoir geometry.

This scenario is giving way to the integrated asset team, in which all the relevant disciplines work together, ideally in close enough harmony that each individual's expertise can benefit from insight provided by others in the team. There is plenty of motivation for seeking this route at any stage of a field's development. Reservoirs are so complex, and the art and science of characterizing them still so convoluted, that the uncertainties in exploitation, from exploration to maturity, are generally higher than most would care to admit.

In theory, uncertainty during the life of a field goes as follows: During exploration, uncertainty is highest. It diminishes as appraisal wells are drilled and key financial decisions have to be made regarding expensive production facilities - for offshore fields, these typically account for around 40% of the capital outlay during the life of the field. As the field is developed, uncertainty on how to most efficiently exploit it diminishes further. By the time the field is dying, reservoir engineers understand their field perfectly.

A realistic scenario may be more like this: During exploration, uncertainty is high. But during appraisal, the need for crucial decisions may encourage tighter bounds on the reservoir's potential than are justifiable. Later, as the field is developed, production fails to match expectations, and more data - for example, 3D seismic data - have to be acquired to plug holes in the reservoir understanding. Uncertainties begin to increase rather than diminish. They may even remain high as parts of the field become unproducible due to water breakthrough and as reservoir engineers still struggle to fathom the field's intricacies.

Asset teams go a long way toward maximizing understanding of the reservoir and placing a realistic uncertainty on reservoir behavior. They are the best bet for making the most sense of the available data. What they may lack, however, are the right tools. Today, interpretation is mainly performed on workstations, with the raw and interpreted data paraded in their full multidimensionality on the monitor. Occasionally, hard-copy output is still the preferred medium - for example, logs taped to walls for correlating large numbers of wells.

There are workstation packages for 3D seismic interpretation, for mapping, for viewing different parts of the reservoir in three dimensions, for petrophysical interpretation in wells, for performing geostatistical modeling in unsampled areas of the reservoir, for creating a grid for simulation, for simulating reservoir behavior, and more. But for the reservoir manager, these fragmented offerings lack cohesion. In the perceived absence of an integrated reservoir management package, many oil companies pick different packages for each specific application and then connect them serially.

Any number of combinations is possible. The choice depends on oil company preferences, the history of the field and the problem being addressed. Modeling a mature elephant field in the Middle East with hundreds of wells and poor seismic data may require a different selection of tools than a newly discovered field having high-quality 3D seismic coverage and a handful of appraisal wells. Reservoir management problems vary from replanning an injection strategy for mature fields, to selecting horizontal wells for optimum recovery, to simply estimating the reserves in a new discovery about to be exploited.

Whatever the scenario, the tactic of stringing together diverse packages creates several problems. First is data compatibility. Since the industry has yet to firm up a definitive geoscience data model, each package is likely to accept and output data in slightly different ways. This forces a certain amount of data translation as the interpretation moves forward - indeed, a small industry has emerged to perform such translation. Second, the data management system supporting this fragmented activity must somehow keep track of the interpretation as it evolves. Ideally, the reservoir manager needs to know the history of the project, who made what changes and, if necessary, how to backtrack.

Postprocessing Seismic Data

The interpretation path obviously depends on the data available. For two of the three fields considered here, there were excellent 3D seismic data. And in all three fields, there was at least one well with a borehole seismic survey. The first goal in working with seismic data is to ensure that the borehole seismic and the surface seismic at the borehole trajectory look as similar as possible. If that is achieved, then the surface seismic can be tightly linked to events at the borehole and subsequently used to correlate structure and evaluate properties between wells. If no borehole seismic data are available, an alternative is to use synthetic seismograms, computed from acoustic and density logs.
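A synthetic seismogram of the kind mentioned above can be sketched in a few lines: impedance from the density and velocity logs, reflection coefficients at the interfaces, then convolution with a zero-phase wavelet. The log values and the choice of a Ricker wavelet are assumptions of this sketch:

```python
import numpy as np

# Hypothetical density and velocity logs, one value per depth sample.
rho = np.array([2.2, 2.2, 2.4, 2.4, 2.3])            # density [g/cm3]
vel = np.array([2800., 2800., 3400., 3400., 3000.])  # velocity [m/s]

z = rho * vel                                # acoustic impedance per sample
rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])     # reflection coefficients

def ricker(f=25.0, dt=0.002, n=51):
    """Zero-phase Ricker wavelet of peak frequency f [Hz]."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# the synthetic trace: reflectivity convolved with the wavelet
synthetic = np.convolve(rc, ricker())        # 'full' convolution
```

A real workflow would also convert the logs from depth to two-way time before convolving.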

Differences in seismic data arise because of difficulties in achieving a zero-phase response, the preferred format for displaying seismic results, in which each peak on the trace corresponds exactly to an acoustic impedance contrast and, by inference, a geologic interface. Processing seismic data to obtain a zero-phase response depends on accurately assessing the signature of the acoustic source. This is straightforward in borehole seismics because the measured signal can be split into its downgoing and upgoing components, and the former yields the source signature. In surface seismics, the downgoing field is unmeasured and statistical techniques must be used to assess the signature, leading to less reliable results. In Conoco Indonesia, Inc.'s field, the surface seismic and borehole seismic data initially matched poorly. With the residual processing module, the mismatch was resolved by comparing the frequency spectra of the two data sets and designing a filter to pull the surface seismic data into line with the borehole seismic data. In this case, the postmatch alignment was excellent.

However, if the alignment resulting from this treatment remains poor, it may prove necessary to vary the design of the filter versus two-way time. This is achieved by constructing filters for several specific time intervals along the well trajectory and then interpolating between them to obtain a representative filter at any two-way time.
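A single-window spectral match can be sketched as a frequency-domain ratio filter. The traces below are synthetic stand-ins, and the Wiener-style stabilization constant eps is an assumption of this sketch, not part of the residual processing module:

```python
import numpy as np

# Synthetic stand-ins for a borehole seismic trace and the (mismatched)
# surface seismic trace at the well.
n, dt = 256, 0.004
t = np.arange(n) * dt
borehole = np.sin(2 * np.pi * 30 * t) * np.exp(-2 * t)
surface = 0.5 * borehole                   # here: a pure amplitude mismatch

# compare spectra and design a matching filter from their ratio
B, S = np.fft.rfft(borehole), np.fft.rfft(surface)
eps = 1e-3 * np.abs(S).max()               # stabilizes the division
match = np.conj(S) * B / (np.abs(S) ** 2 + eps ** 2)

# apply the filter to pull the surface data into line
matched = np.fft.irfft(S * match, n)
residual = np.abs(matched - borehole).max()
```

The time-varying version described above would design one such filter per time window and interpolate the filter coefficients between windows.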

The next step is to perform a seismic inversion on the matched seismic data, using the inversion module. This process converts the matched seismic data to acoustic impedance, defined as the product of rock density and acoustic velocity. Acoustic impedance can be used to classify lithology and fluid type. Mapped across a section in two dimensions or throughout space in three dimensions, acoustic impedance provides a valuable stratigraphic correlation tool. For Conoco Indonesia, Inc., inversion provided valuable insight into the lateral extent of the reservoir.

The inversion computation requires the full spectrum of acoustic frequencies. Very low frequencies are missing from the seismic record, so these are estimated interactively from acoustic well logs. Between wells, this low-frequency information is interpolated, and for a 3D inversion, the information must be mapped everywhere in reservoir space.
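At the core of one simple inversion scheme (recursive, or trace-integration, inversion; a sketch, not necessarily the inversion module's algorithm) is the impedance recursion obtained by inverting the reflection-coefficient formula. The starting impedance, which supplies the missing absolute level, comes from well logs:

```python
import numpy as np

# Inverting r_i = (Z_(i+1) - Z_i) / (Z_(i+1) + Z_i) gives the recursion
#   Z_(i+1) = Z_i * (1 + r_i) / (1 - r_i).
# Hypothetical reflection coefficients from a zero-phase trace:
rc = np.array([0.05, -0.02, 0.10, 0.0])
z0 = 6.0e6                  # top impedance [kg/m2/s], taken from well logs

z = np.empty(len(rc) + 1)
z[0] = z0
for i, r in enumerate(rc):
    z[i + 1] = z[i] * (1 + r) / (1 - r)
# z now holds the recovered impedance profile, sample by sample
```

Because each step multiplies forward, errors accumulate down the trace, which is one reason the low-frequency trend from wells is folded in.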


Wednesday, October 3, 2018

Beating the Exploration Schedule With Integrated Data Interpretation

Oil companies have made great strides in improving the success rate of exploration and production, mainly by using seismic data to guide well placement. But data gathering and interpretation do not stop there. Once a well is spudded, the company commits considerable resources to gathering more data such as mud logs, cores, measurements-while-drilling (MWD) information and wireline logs. This creates a huge volume of data - often at different scales and qualities - that must be efficiently handled to ensure maximum return on the exploration investment.

Communications hardware already allows large volumes of information to be moved rapidly from one computer to another, even from remote locations. And data management and universal data standards are gradually being introduced throughout the industry to facilitate data access.

To take full advantage of this new environment, however, geoscientists need their interpretation applications integrated into a common system that allows data sharing.

Until recently, no attempt was made to standardize the data format used by interpretation software packages, which meant that they could not communicate with each other or use a common database. More time was spent on converting and loading data than on interpretation. Applications often ran in series rather than in parallel, introducing further delays. This resulted in drilling or completion decisions being made using incomplete answers while full interpretations took weeks or months.


Friday, August 10, 2018

Geophysical Interpretation: From Bits and Bytes to the Big Picture

Well logs measure reservoir properties at intervals of a few inches, providing a high density of information, mostly in the vertical direction. But the volume of reservoir sampled by logs represents only one part in billions. Seismic data, on the other hand, cover the overwhelming majority of the reservoir volume, but at lower vertical resolution. A processed three-dimensional (3D) seismic survey may contain a billion data points sampling a couple of trillion cubic meters, and some surveys are 10 times bigger. The geophysical interpreter must handle this massive amount of information quickly and produce a clear 3D picture of the reservoir that can guide reservoir management decisions.

In the overall seismic scheme, interpretation builds upon the preceding work of acquisition and processing. Fast new ways to simultaneously visualize and interpret in three dimensions are changing how interpreters interact with geophysical data. Seismic interpretation packages band together a collection of tools designed to simplify seismic interpretation and smooth the road from input to output. GeoQuest's seismic interpretation tools - the Charisma and IESX systems - offer a variety of levels of user-friendliness and sophistication. These packages complete the process in roughly four steps - data loading, interpretation, time-to-depth conversion and map output. This article takes a look at how they help the geophysical interpreter harness a seismic workstation filled with a billion data points - and make it fun.

Getting Data in the Right Place

By the time 3D data arrive at the interpretation workstation, they have already undergone numerous quality control checks, and are ready to be loaded. The objective in data loading is to ensure that as much of the available data as possible is loaded onto the computer, and that these data points are correctly positioned. Data loading continues to be simplified by software advances.

Fitting all the data onto the computer has been difficult because disk space has been expensive. To work around the problem, most data loading routines convert seismic traces from SEG-Y format to a compressed workstation format. This compression can be perilous, because it reduces the dynamic range of the trace data. SEG-Y data are usually represented in 32-bit floating-point format, which allows a range of about +/-10^37. Data in 16-bit format have a range of +/-32,768. Converting data from 32-bit to 8-bit reduces computer storage requirements by a factor of four, but also reduces dynamic range. Reducing dynamic range may negate much of the care and money that went into acquisition and processing of the seismic data. Although the dynamic range of compressed data is usually more than the human eye can perceive, computer-driven interpretation can be made to take advantage of 32-bit data. Some specialists recommend that data never be compressed, and since disk space is becoming less expensive, that will eventually become a more widespread option.

When compression is necessary, workstations can help the interpreter do it intelligently through scaling. Scaling ensures that data amplitudes are properly sized so that the most important information is preserved when trace values are converted from SEG-Y format to compressed format. In the Charisma system, scaling is user-controlled and different scale factors can be tested; this allows flexibility, but usually requires practice. In the IESX system, scaling is done automatically, trace by trace. The scaling factor is stored in the header of each trace and is reapplied to each trace as it is read from the database. This results in a reconstructed 32-bit seismic section, regardless of the storage format.
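The per-trace scaling idea can be sketched in a few lines. This is a minimal illustration, not the actual Charisma or IESX implementation: each trace is scaled to fit 8-bit integers, and the scale factor is kept alongside (as if in the trace header) so that amplitudes can be reconstructed on read.

```python
import numpy as np

def compress_trace(trace):
    """Scale a 32-bit trace into 8-bit integers, keeping the scale factor."""
    peak = np.max(np.abs(trace))
    scale = peak / 127.0 if peak > 0 else 1.0
    stored = np.round(trace / scale).astype(np.int8)   # 8-bit storage format
    return stored, scale                               # scale kept per trace

def decompress_trace(stored, scale):
    """Reapply the stored factor to reconstruct a 32-bit trace."""
    return stored.astype(np.float32) * scale

trace = np.array([0.002, -1.7, 0.35, 1.2], dtype=np.float32)
stored, scale = compress_trace(trace)
restored = decompress_trace(stored, scale)
# quantization error is bounded by half a step (scale / 2)
```

The reconstruction is exact only up to the quantization step, which is why heavy compression can still cost dynamic range even with careful scaling.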

Loading seismic data in the right place in the computer involves assigning a geographic location to each trace. For 3D data this is simpler than for 2D: inputs are the spatial origin and orientation of the volume, the order and spacing of the shot lines, and the trace spacing. From these few numbers, geographic coordinates for each of thousands or millions of traces can be computed.
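The computation is simple enough to sketch. The function below is a hypothetical illustration (the coordinate values and spacings are invented): from the survey origin, the inline azimuth and the two spacings, it generates an (x, y) position for every trace in the grid.

```python
import math

def trace_coordinates(x0, y0, azimuth_deg, line_spacing, trace_spacing,
                      n_lines, n_traces):
    """Compute (x, y) for every trace from origin, azimuth and spacings."""
    az = math.radians(azimuth_deg)
    ix, iy = math.sin(az), math.cos(az)      # unit vector along the inline
    cx, cy = math.cos(az), -math.sin(az)     # unit vector across the lines
    coords = {}
    for line in range(n_lines):
        for trace in range(n_traces):
            coords[(line, trace)] = (
                x0 + trace * trace_spacing * ix + line * line_spacing * cx,
                y0 + trace * trace_spacing * iy + line * line_spacing * cy,
            )
    return coords

# toy survey: east-pointing inlines, 25-m line spacing, 12.5-m trace spacing
grid = trace_coordinates(500000.0, 6700000.0, 90.0, 25.0, 12.5, 4, 10)
```

A few numbers really do define millions of positions; this is why 3D navigation loading is so much simpler than reconciling per-trace navigation files for 2D lines.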

If there are older 2D or 3D data, or offset seismic profiles (OSPs), to be interpreted with the currently loaded 3D survey, data loading becomes more complicated. Trace locations for each 2D line or OSP must be accessed from separate navigation files or from the trace headers themselves. Data of different vintages, amplitudes and processing chains must also be reconciled. This is not a trivial task, but is greatly eased with today's workstations.

Additional data that can be loaded include well locations, well deviation surveys, log data, formation tops, stacking velocities from seismic processing, time-depth data from well seismic surveys and cultural or geographic data such as lease boundaries or coastlines. 

In 3D surveys, the seismic lines shot during the survey are called inline sections, or rows. Vertical slices perpendicular to these, called crossline sections or columns, can be generated from the inline data. In 3D land surveys, the acquisition geometry can be more complicated than in marine surveys, but usually the inline direction is taken to be along receiver lines. In both cases, horizontal slices cut at a constant time are called time slices.

The way seismic data are stored by different systems affects the time required to generate new sections, displays and other poststack processing products. In the Charisma and IES systems, inline sections, crossline sections and time slices are stored separately, so a single data value may be stored up to three times. In the IESX system, every inline trace is stored only once, decreasing data storage volume. In such a volume there is no need to generate crosslines, because arbitrary vertical sections may be cut in any orientation in real time. Horizontal seismic data are stored in a separate file.
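Cutting an arbitrary vertical section from a store-once volume amounts to sampling traces along a line through the grid. The sketch below assumes a toy in-memory array indexed (inline, crossline, time sample) and uses nearest-trace lookup; a real system works against a disk-resident database, but the principle is the same.

```python
import numpy as np

# assumed toy volume: volume[inline, crossline, time_sample]
volume = np.random.rand(50, 40, 100).astype(np.float32)

def arbitrary_section(volume, start, end, n_traces):
    """Cut a vertical section between two (inline, crossline) points
    by nearest-trace lookup - no precomputed crosslines needed."""
    il = np.linspace(start[0], end[0], n_traces).round().astype(int)
    xl = np.linspace(start[1], end[1], n_traces).round().astype(int)
    return volume[il, xl, :]            # shape: (n_traces, n_samples)

section = arbitrary_section(volume, (0, 0), (49, 39), 60)
```

Because each trace is stored once and fetched on demand, a diagonal section costs no more than an inline or crossline display.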

Until recently, 3D data loading routines were not user friendly, often requiring a computer specialist. But new applications are beginning to make this step more straightforward, allowing interpreters to load their data alone or with support over the telephone. However, most companies still employ dedicated data loaders, or use contract workers.

Tracking Continuities and Discontinuities

Now we come to the real interpretation part of the job - identifying the reservoir interval and marking, either manually or automatically, important layer interfaces above, within and below it. The interfaces, called horizons, are reflections that signify boundaries between two materials of different acoustic properties. Interpretation also includes identifying faults, salt domes and erosional surfaces that cut horizons.

Some interpreters first pick horizons as far as possible horizontally on a set of vertical sections, then outline faults. Other interpreters pick faults first, then pick horizons up to their intersections with faults. The choice depends on personal preference and experience. Horizons shallower than the reservoir should be interpreted because they affect horizons below. Interpretation of horizons outside the reservoir interval is important if they correspond to regional markers that can be picked from logs. Interpreting several horizons that bracket the target zone may also be used to enhance time-to-depth conversion and give clues to geologic history.

Knowing which horizons correspond to the reservoir comes from previous experience in the area, such as earlier 2D seismic lines. This is usually accomplished by tying 3D data to an existing 2D line or well. Tying a seismic line to a well is done by comparing an expected seismic trace at the well with real seismic data. This is achieved with synthetic seismograms. To create a synthetic, the sonic and density logs are converted to time, often by using a check-shot survey. Next, the sonic and density logs are combined to give an acoustic impedance log - the product of velocity and density. Then, through an operation called convolution, a pulse that mimics the seismic source is used to change the acoustic impedance log into a synthetic seismic trace.

Now it's time to compare the synthetic with the seismic data at the well. Geologic boundaries, such as the top of the reservoir, are identified in the original logs. The boundaries are then correlated with the time-converted logs, the acoustic impedance log and finally the synthetic seismogram. Waveform characteristics of the synthetic are compared with the real seismic trace to determine the seismic representation and travel time to the geologic boundaries at the well location. However, at seismic wavelengths - 50 to 300 ft - what appears to be one layer in the seismic section will normally be several layers in the logs. A main use, then, of tracking horizons in seismic data is not to distinguish thin layers, but to provide information about the continuity and geometry of reflectors to guide mapping of layer properties between wells.

To track a horizon, trace characteristics are followed horizontally across the whole seismic survey. Common characteristics used to track an event are the polarity or change in polarity of the trace. At any time, a trace will be of either negative or positive polarity, or a zero crossing. A positive polarity reflection, or peak, indicates an increase in acoustic impedance, while a negative polarity reflection or trough, indicates a decrease in acoustic impedance. A zero crossing is a point of no amplitude, usually between a negative and positive portion of a seismic trace. The amplitude of the peaks and troughs is usually color coded. A wide range of color schemes allows interpreters to accent features to be tracked.

A horizon may be tracked in a variety of ways. Points on the horizon may be manually picked by clicking with the mouse on a visual display of a vertical section. If the seismic signal is sufficiently continuous, the horizon may be tracked automatically using a tool called an autotracker. Autotracking requires the interpreter to specify the signal characteristics of the horizon to be tracked. These include polarity, a range of amplitude and a maximum time window in which to look for such a signal. Given a few seed points, or handpicked clues, autotrackers can pick a horizon along a single seismic line or through the entire data volume. In faulted areas, autotrackers can usually be used if seed points are picked in every fault block. Horizons picked with autotrackers must be quality checked manually and may require editing by an interpreter. Still, the time savings is huge compared to manually picking thousands of lines.
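A bare-bones version of the idea can be written in a few lines. This sketch is not any commercial autotracker: from one hand-picked seed, it follows the local amplitude maximum trace by trace, staying inside a small time window around the previous pick.

```python
import numpy as np

def autotrack(section, seed_trace, seed_time, window=5):
    """Follow a peak across a 2D section from a single seed point.
    section: array of shape (n_traces, n_samples)."""
    n_traces, n_samples = section.shape
    picks = np.empty(n_traces, dtype=int)
    picks[seed_trace] = seed_time
    # track to the right of the seed, then to the left
    for traces in (range(seed_trace + 1, n_traces),
                   range(seed_trace - 1, -1, -1)):
        prev = picks[seed_trace]
        for i in traces:
            lo = max(prev - window, 0)
            hi = min(prev + window + 1, n_samples)
            prev = lo + int(np.argmax(section[i, lo:hi]))  # snap to peak
            picks[i] = prev
    return picks
```

The window is what keeps the tracker on the chosen event; at a fault, the peak jumps outside the window and tracking fails, which is why a seed is needed in every fault block.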

If the horizon is difficult to follow, the data can be manipulated using processing applications available within most interpretation systems. The Charisma processing toolbox, for example, includes a variety of filters and other options to produce data that are easier to interpret, without expensive reprocessing. Dip filters suppress noise outside a specified dip range and highpass filters can reveal discontinuities. Other processes include deconvolution to extract an ideal impulse response from real data, time shifts to align traces, polarity reversals and phase rotations to match data with different processing histories, scaling to boost amplitudes of deep reflections, and time-varying filters to compensate for wave attenuation.
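As one example of such poststack manipulation, a highpass filter is easy to sketch. This is a generic zero-phase frequency-domain filter, not the Charisma implementation: components below a cutoff frequency are simply zeroed in the spectrum.

```python
import numpy as np

def highpass(trace, dt, f_cut):
    """Zero-phase highpass: remove frequency content below f_cut (Hz)."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spectrum[freqs < f_cut] = 0.0        # discard the low-frequency bins
    return np.fft.irfft(spectrum, n=len(trace))
```

Removing the smooth low-frequency background makes abrupt lateral changes - discontinuities - stand out, which is the effect the text describes.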

Some horizons defy reprocessing efforts, and remain too complex to track with conventional autotrackers. Three examples are: (1) reflections that change polarity along the horizon in response to a lateral change in lithology or fluid content; (2) a local minimum that is positive or a local maximum that is negative; and (3) horizons that are laterally discontinuous. SurfaceSlice volume interpretation helps track these tricky horizons by displaying what might be thought of as "thick" time slices. The SurfaceSlice application was developed at Exxon Production Research and has been incorporated into GeoQuest's IESX system.

The SurfaceSlice method can be thought of as scanning the 3D cube to create a new seismic volume that contains only samples that meet some criteria set by the interpreter, such as local troughs within a given amplitude range. Thick slices through the volume are displayed in a chosen color scheme. The slices contain only data on the types of horizons of interest. SurfaceSlice displays resemble a series of contour maps, and are therefore convenient for geologists to interpret. Slice thickness is interactively controlled by the interpreter, and is usually chosen to be less than the wavelength of the reflection in order to stay on the chosen horizon. Multiple windows show a series of slices at increasing times in which the horizon can be rapidly tracked in areal swaths rather than line by line.
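The scan-and-select idea can be illustrated with a toy version. This sketch is only an approximation of SurfaceSlice, not the Exxon/GeoQuest algorithm: within a thick time window, it keeps only samples whose amplitudes fall in the interpreter's chosen range and collapses the result to a map view.

```python
import numpy as np

def thick_slice_map(volume, t0, thickness, amp_min, amp_max):
    """Keep only qualifying samples inside a thick time slice and
    collapse them to map view (strongest qualifying amplitude)."""
    window = volume[:, :, t0:t0 + thickness]
    masked = np.where((window >= amp_min) & (window <= amp_max),
                      window, np.nan)
    return np.nanmax(masked, axis=2)   # NaN where nothing qualifies
```

Locations where nothing qualifies stay blank, so each map shows only the swath of the horizon crossing that time window - which is why a stack of such maps reads like a contour map series.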

Once picked - manually, by autotracking or by SurfaceSlice analysis - the horizon serves multiple purposes. Shallow horizons can be flattened to give a rendition of the underlying volume at the time of their deposition. A horizon, really a set of time values draped on a grid of trace locations, may be linked to a formation marker identified in well logs. If the marker has been picked in several wells, this serves as a consistency check on the seismic interpretation. This link may be used later for time-to-depth conversion or for extending formation properties away from wells.

Faults and other discontinuities may be picked manually with the mouse in two ways. As in 2D interpretation, classic fault interpretation is done on vertical sections - either inline, crossline or other sections retrieved at any desired azimuth. A fault picked on one section can be projected onto nearby sections to give the interpreter an idea where to look for the next fault pick. Thrust faults and high-angle structures such as salt domes require special handling, because a given horizontal location may have multiple vertical values. A newer way of picking faults, made possible by 3D workstations, allows the interpreter to identify faults from discontinuities in time slices.

Another interpretation technique that takes advantage of the 3D nature of data storage is called attribute analysis. Every seismic trace has characteristics, or attributes, that can be quantified, mapped and analyzed at the level of the horizon. And although tracking a horizon is based more or less on the continuity of the seismic reflection, attributes can vary in many ways along the horizon. Traditional trace attributes include the amplitude of the reflection, its polarity, phase and frequency. These trace attributes were introduced years ago to highlight continuities and discontinuities in 2D seismic sections. Now, with the addition of high-speed 3D workstations, interpreters have the freedom to explore new types of attributes. Attributes such as dip and azimuth of horizons can instantly reveal discontinuities and faults that could take weeks to interpret manually. Interpreters are also using attributes to apply sequence stratigraphy to 3D data.
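Dip and azimuth are simple to compute once the horizon exists as a grid of time values. The sketch below is a generic formulation (grid spacings are invented): the gradient magnitude of the time surface serves as a dip measure, and the downdip direction gives the azimuth.

```python
import numpy as np

def dip_azimuth(horizon, dx=25.0, dy=25.0):
    """Dip magnitude (time change per meter) and downdip azimuth (degrees)
    from a gridded horizon; rows run north-south, columns east-west."""
    dt_dy, dt_dx = np.gradient(horizon, dy, dx)   # axis 0 = y, axis 1 = x
    dip = np.hypot(dt_dx, dt_dy)
    azimuth = np.degrees(np.arctan2(dt_dx, dt_dy)) % 360.0
    return dip, azimuth
```

A fault shows up as a sharp lineation in the dip map, because the time surface steepens abruptly across it - the instant fault reveal the text refers to.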

The Reservoir Takes Shape

An advantage of 3D workstations is their speed compared to a pencil-and-paper job; autotrackers lift some of the workload from interpreters, letting them do more in less time. Other advantages, such as time slices, SurfaceSlice displays and attribute maps, are techniques made possible because the data reside in 3D on a workstation. But the seismic sections are still 2D representations of 3D information, and interpreters still perform quantitative interpretation in 2D.

This is changing as more interpreters use the full 3D-visualization capabilities of new workstations. The ability to see the data volume, to zoom and change perspective, gives interpreters new insight into the features they interpret on horizons. Proper illumination makes surfaces easier to understand. Changing the light source to a grazing elevation can highlight subtle features such as faults and fractures, for the same reason that the best aerial photos of the earth's surface are shot in early morning or late afternoon to maximize shadows. More advanced workstations allow interpreters to illuminate horizons with lights from different locations and change the reflective properties of surfaces. Interpreters can spend less time figuring out what the structure is, and more time understanding how it can affect development decisions. A rainbow-colored contour map, once a marvel of the seismic screen, pales next to a 3D rendering of the same surface.

Structures that appear obscure or disconnected when examined in 2D seismic views may become clear or continuous in 3D. Or, just as importantly, features that appear connected in one perspective may be disjointed in another. Seismic properties between two deviated wells, either existing or proposed, can be examined by extracting the seismic image on the twisted plane between them. This gives reservoir planners a tool for verifying reservoir connectivity, whether for exploration purposes or for planning improved recovery campaigns. Well logs, interpreted horizons, faults and other structures can be viewed and moved, alone or along with the seismic data.

Today, the most powerful 3D visualization products provide real-time interaction with the 3D image for lighting, shading , rotation and transparency. However, interaction with the image for creating and editing interpretation has typically been limited.

Time-to-Depth Conversion

Once horizons and structures are interpreted in time, the next step is to convert the interpretation to depth. The relationship between time and depth is velocity, so a velocity model is needed. Different workstation systems exhibit varying degrees of sophistication in their creation of velocity models for time-to-depth conversion. Most systems offer simple geometrical conversions based on velocity models that may vary vertically and horizontally. These convert points from time to depth by moving them in straight vertical lines. The Charisma DepthMap package includes geophysical modeling in the form of seismic ray tracing and permits lateral translation of points to perform time-to-depth conversion with increased reliability.

If more than one horizon is to be converted to depth, either an average velocity down to each horizon must be estimated, or the average velocity to the shallowest horizon together with the interval velocity between each pair of horizons down to the target horizon.
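The two descriptions are equivalent, and converting between them is straightforward arithmetic. The sketch below uses the simple vertical-ray relation (not the Dix equation for stacking velocities): the interval velocity between two horizons follows from the average velocities down to each of them.

```python
def interval_velocity(v1, t1, v2, t2):
    """Interval velocity between two horizons, from average velocities
    v1 and v2 down to (one-way) times t1 and t2. Straight-ray relation:
    distance difference divided by time difference."""
    return (v2 * t2 - v1 * t1) / (t2 - t1)

# toy example: 2000 m/s down to 1.0 s, 2200 m/s down to 1.5 s
v_int = interval_velocity(2000.0, 1.0, 2200.0, 1.5)
```

Here the interval between the two horizons must be faster than either average, since the deeper average already includes the slower overburden.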

In the absence of logs or well seismic surveys, seismic stacking velocities can substitute for average vertical velocities. Stacking velocities are derived from seismic data during processing, and used to combine seismic traces to produce data that are easier to interpret. They contain large components of horizontal velocity and are usually available at 500-m to 1-km spacing across the survey area. These data are interpolated to the same sample interval as the seismic time horizon grid. Then the velocity grid is multiplied by the time grid to give a depth grid. The key limitation of stacking velocities is their lack of accuracy, especially in regions of complex velocity or of complex structure.
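The grid multiplication is as literal as it sounds. The toy grids below are invented; the only subtlety worth making explicit is that seismic times are two-way, so the product carries a factor of one half.

```python
import numpy as np

# toy grids on the same mesh: two-way times (s) and average velocities (m/s)
time_grid = np.array([[2.0, 2.1],
                      [2.2, 2.3]])
vel_grid = np.array([[2400.0, 2450.0],
                     [2500.0, 2550.0]])

# depth = average velocity * one-way time (two-way time divided by 2)
depth_grid = vel_grid * time_grid / 2.0
```

Every error in the velocity grid maps directly into the depth grid, which is why inaccurate stacking velocities are such a limitation in structurally complex areas.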

Time-depth data from a check-shot survey give an accurate vertical velocity model, but only at the check-shot location. In the absence of other data, this velocity can be used uniformly across the field to convert the seismic times to depth. Stacking velocities can be calibrated at the well using check-shot surveys.

A synthetic seismogram built from sonic and density logs can provide a comparison trace for time-to-depth conversion. Disadvantages of this technique are the limited extent of logs - most logs do not provide information all the way to the surface - and the discrepancy between velocities measured at sonic frequencies and those at seismic frequencies. Synthetics are most useful when calibrated with a check-shot survey, which improves the time-to-depth conversion.

Velocity models and images from VSPs are the most powerful data for converting surface seismic times to depth. VSPs sample velocities at more depths than check shots, and unlike synthetic seismograms created from sonic logs, VSPs have a frequency content similar to that of surface seismic waves. And above all, VSPs provide images that can be matched directly to surface seismic sections.

Putting It All on the Map

Once data about reservoir structures are stored, 2D and 3D map images can be generated for reservoir characterization. Surfaces may be mapped in time or, if there is a velocity model, in depth. Basic mapping tools for this reside within most seismic interpretation packages, and there are also separate, stand-alone mapping packages that accept seismic interpretations for map generation.