Friday, December 29, 2017

Sequence Stratigraphy

No exploration technique flawlessly locates a potential reservoir, but sequence stratigraphy may come close. By understanding global changes in sea level, the local arrangement of sand, shale and carbonate layers can be interpreted. This enhanced understanding of depositional mechanics steers explorationists toward prospects missed by conventional interpretation.

Conventional lithologic correlation maps formation tops by interpreting well log data alone. It looks at what is there without taking into account how it got there. Sequence stratigraphy combines logs with fossil data and seismic reflection patterns to explain both the arrangement of rocks and the depositional environment. Understanding the relationships between rock layers, their seismic expression and depositional environments allows more accurate prediction of reservoirs, source rocks and seals, even if none of them intersects the well.

Sequence stratigraphy is used mainly in exploration to predict the rock composition of a zone from seismic data plus distant, sparse well data. It also assists in the search for likely source rocks and seals. Experts believe that as more people learn the technique, it will become an exploitation tool for constraining the shape, extent and continuity of reservoirs.

Sequence stratigraphy, seismic stratigraphy - how many stratigraphies can there be?

Stratigraphy is the science of describing the vertical and lateral relationships of rocks. These relationships may be based on rock type, called lithostratigraphy, on age, as in chronostratigraphy, on fossil content, labeled biostratigraphy, or on magnetic properties, named magnetostratigraphy.

At the turn of the century, shoreline movement was attributed to tectonic activity - the rising and falling of continents. This view was challenged in 1906, when Eduard Suess hypothesized that changes in shoreline position were related to sea level changes, and occurred on a global scale; he called the phenomenon eustasy. However, Suess was not able to refute evidence presented by opponents of his theory - in many locations there were discrepancies between rock types found and types predicted by sea level variation.

In 1961, Rhodes W. Fairbridge summarized the main mechanisms of sea level change: tectono-eustasy, controlled by deformation of the ocean basin; sedimento-eustasy, controlled by addition of sediments to basins, causing sea level rise; and glacio-eustasy, controlled by climate, lowering sea level during glaciation and raising it during deglaciation. He recognized that all these causes may be partially applicable, and are not mutually incompatible. He believed that while eustatic hypotheses apply worldwide, tectonic hypotheses do not, varying from region to region. Fairbridge summarized the perceived goal at the time: "We need therefore to keep all factors in mind and develop an integrated theory. Such an ideal is not yet achievable and would involve studies of geophysics, geochemistry, stratigraphy, tectonics, and geomorphology, above sea level and below."

This brings us nearly to the present. In 1977, Peter Vail at Exxon and several colleagues published the first installments of such an integrated theory. Vail developed a new kind of stratigraphy based on ideas proposed by L.L. Sloss - the grouping of layers into unconformity-bound sequences based on lithology - and by Harry E. Wheeler - the grouping of layers based on what has become known as chronostratigraphy. Vail's approach allowed interpreting unconformities by tying together global sea level change, local relative sea level change and seismic reflection patterns. This methodology, named seismic stratigraphy, classifies layers between major unconformities based on seismic reflection patterns, giving a seismically derived notion of lithology and depositional setting.

Subsequent seismic stratigraphic studies in basins around the world produced a set of charts showing the global distribution of major unconformities interpreted from seismic discontinuities for the past 250 million years. An understanding emerged that these unconformities were controlled by relative changes in sea level, and that relative changes in sea level could be recognized on well logs and outcrops, with or without seismic sections. This led to the interdisciplinary concept of sequence stratigraphy - a linkage of seismic, log, fossil and outcrop data at local, regional and global scales. The integrated theory sought by Fairbridge had arrived.

The concepts that govern sequence stratigraphic analysis are simple. A depositional sequence comprises sediments deposited during one cycle of sea level fluctuation - by Exxon convention, starting at low sea level, going to high and returning to low. 
One cycle may last from a few thousand to millions of years and produce a variety of sediments, such as beach sands, submarine channel and levee deposits, chaotic flows or slumps, and deep water shales. Sediment type may vary gradually or abruptly, or may be uniform and widespread over the entire basin. Each rock sequence produced by one cycle is bounded by an unconformity at the bottom and top. These sequence boundaries are the main seismic reflections used to identify each depositional sequence, and separate younger from older layers everywhere in the basin.

Composition and thickness of a rock sequence are controlled by the space available for sediments on the shelf, the amount of sediment available and climate. Space available on the shelf - which Vail calls "shelfal accommodation space" - is a function of tectonic subsidence and uplift and of global sea level rise and fall on the shelf. For instance, subsidence during rising sea level will produce a larger basin than uplift during rising sea level. The distribution of sediments depends on shelfal accommodation, the shape of the basin margin - called the depositional profile - sedimentation rate and climate. Climate depends on the amount of heat received from the sun. Climate also influences sediment type, which tends toward sand and shale in temperate zones and allows the production of carbonates in the tropics.

As an exploration tool, sequence stratigraphy is used to locate reservoir sands. In deep water basins with high sedimentation rates, sands are commonly first laid down as submarine fans on the basin floor and later as deposits on the continental slope or shelf. But as sea level starts slowly rising onto the continental shelf, sands are deposited a great lateral distance from earlier slope and basin deposits. Deposits during this time are deltaic sediments that build into the basin and deep water shales. If the sediment supply cannot keep pace with rising sea level, the shoreline migrates landward and sands move progressively higher up the shelf. Once sea level reaches a maximum for this cycle, sands will build basinward as long as sediment remains available. The sequence ends with a fall in relative sea level, marked by a break in deposition. The sequence repeats, however, as long as there is sediment and another cycle of rise and fall in relative sea level that changes the shelfal accommodation space.

Processing for AVO Interpretation

Any properly acquired seismic survey, new or old, can be processed for AVO analysis. The goal of processing is to preserve reflected pulse shape and amplitude. Changes in pulse with offset can then be interpreted in terms of lithology or fluid contrasts at the reflector. Data destined for stratigraphic interpretation or lithostratigraphic inversion also benefit from true amplitude processing. Every seismic data set creates its own processing problems, requiring a tailor-made processing sequence. Here is a typical AVO processing sequence, one that works for the data sets described in this article.

Basic Steps

True Amplitude Recovery (TAR) compensates for amplitude loss caused by wavefront spreading and low transmission quality (Q) of the rock through which the seismic wavefront travels.
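The spreading part of TAR can be sketched numerically: a wavefront's amplitude decays roughly as 1/(t v²), so scaling each sample by t v² restores it. The velocity trend and trace below are hypothetical, a minimal sketch rather than a production gain function.

```python
import numpy as np

def spherical_divergence_gain(trace, times, v_rms):
    # Amplitude decays roughly as 1/(t * v_rms(t)**2), so multiply each
    # sample by t * v**2, normalized so the first sample is unchanged.
    gain = times * v_rms**2
    return trace * (gain / gain[0])

dt = 0.004
t = dt * np.arange(1, 1001)             # start at dt to avoid t = 0
v = 1500.0 + 600.0 * t                  # hypothetical RMS velocity trend, m/s
raw = 1.0 / (t * v**2)                  # ideal decay of a constant reflector
recovered = spherical_divergence_gain(raw, t, v)
# "recovered" is constant: the divergence loss has been undone
```

Real TAR also includes a Q (absorption) correction, which is frequency dependent and not captured by this simple gain.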

Frequency wave number (F-K) filtering is required to attenuate coherent noise generated by near-surface or seabed features such as rigs, buildings and seabed channels. Ground roll, or surface waves, common in land data, cannot usually be removed with this method. Correctly designed receiver arrays can solve this problem.

Generalized Radon Transform (GRT) demultiple reduces the amplitude of multiples (interbed or water column reverberations) relative to primary energy. Conventional demultiple techniques do not preserve true amplitudes, nor do they eliminate all multiples. The GRT demultiple separates seismic arrivals by differences in their apparent velocities, then suppresses multiples by an inverse transform of only part of the data.

Deconvolution creates a new trace with wiggles that indicate the location (in time) and the strength of each reflector. Surface-consistent deconvolution reduces pulse shape distortion because the filter is the same for each shot and receiver location.

Surface-consistent scaling and residual statics correct amplitudes and arrival times of raypaths distorted by near-surface anomalies, such as those caused by the unconsolidated ("weathered") zone on land or a rough ocean bottom.

Velocity analysis and normal moveout (NMO) create and apply the velocity model that aligns wiggles from all offsets. In conventional seismic processing, velocity analyses are made every 2 to 3 km. Because most AVO anomalies are caused by velocity variation, closely spaced velocity analyses are required, every 0.25 km.
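The moveout being corrected follows the hyperbolic relation t(x) = sqrt(t0² + x²/v²): a reflection arrives later at far offsets, and NMO shifts each sample back to its zero-offset time t0 so wiggles align across the gather. The reflector time and velocity below are hypothetical.

```python
import numpy as np

def nmo_time(t0, offset, v_nmo):
    # Hyperbolic moveout: arrival time at a given offset for a reflector
    # with zero-offset time t0 and NMO velocity v_nmo.
    return np.sqrt(t0**2 + (offset / v_nmo)**2)

# Hypothetical pick: reflector at t0 = 1.2 s, NMO velocity 2500 m/s
for x in (0.0, 1000.0, 2000.0):
    print(f"offset {x:6.0f} m -> arrival {nmo_time(1.2, x, 2500.0):.4f} s")
```

NMO correction subtracts t(x) - t0 from each trace; errors in v_nmo leave residual moveout that distorts amplitude-versus-offset measurements, which is why AVO work demands such closely spaced velocity analyses.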

Friday, December 22, 2017

Interpretation of Actual AVO

How do real AVO gathers compare with synthetics? The real gather observed at the Texas gas well and carefully processed shows the same AVO signature as the synthetic gather generated using log data from the well. Both gathers show a small negative at normal incidence that becomes more negative with offset. This signals hydrocarbons, and sure enough, the well did produce gas. The gradient and intercept are both negative, and their product positive. A section composed of product traces from every gather in the seismic line shows a zone of positive product. A second well drilled in the zone confirmed the presence of gas.

Because the synthetic was built from log data - density, compressional velocity and shear velocity - rather than estimated values, it closely matches the observed gather. Shear velocity is the value most often estimated, and this creates a common stumbling block to AVO modeling. Dramatic AVO effects appear in gas sands, where the shear velocity is often too slow to be measured with conventional sonic tools. Introduction of the DSI tool removes this impediment.

Once the AVO signature of hydrocarbons is known, seismic data can be examined for fluids. For example, what would AVO analysis have revealed about the two bright spots - one from gas, the other from basalt?  

Carefully processed gathers from the well locations show the difference between the AVO signature of gas and that of high velocity basalt. Gas shows the now-familiar increase of amplitude with offset, while basalt shows a decrease.

AVO effects may also be tracked across a reservoir to delineate a fluid contact. A technique developed by Ed Chiburis while at Saudi Aramco has had remarkable success delineating Saudi Arabian oil reservoirs. In 26 of 27 cases, the technique predicted the presence or absence of oil, which was later confirmed by drilling.

The technique identifies changes in AVO behavior along a seismic line, and associates those changes with changes in fluid composition. Once a given fluid has been identified in a well, the AVO behavior of the gather at the well is defined as the standard to look for elsewhere in the section.

To overcome the lack of true amplitude processing in most data, Chiburis developed a normalization technique that uses another reflection showing consistent amplitude in the section as a reference. Peak amplitudes of the target reflection in each AVO gather are picked interactively on a workstation and normalized trace by trace to the reference event. Use of a reference event removes or minimizes amplitude distortion associated with flaws in acquisition and processing. The technique also circumvents the need for synthetics: the measured AVO response at the well serves as the standard. The major limitation of this method is that the geology and stratigraphy must be well known in order to associate changes in AVO with changes in fluid type.
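The trace-by-trace normalization amounts to a simple division: distortions common to both the target and the reference event cancel in the ratio. The picked amplitudes below are hypothetical, standing in for the interactive workstation picks described above.

```python
import numpy as np

# Hypothetical picks: peak amplitudes of the target reflection and of a
# stable reference reflection at five offsets in one gather.
offsets   = np.array([200.0, 600.0, 1000.0, 1400.0, 1800.0])   # m
target    = np.array([-0.80, -0.95, -1.15, -1.40, -1.70])      # target picks
reference = np.array([ 1.00,  0.98,  1.03,  0.97,  1.02])      # reference picks

# Trace-by-trace normalization: acquisition and processing amplitude
# distortions common to both events divide out.
normalized = target / reference
# |normalized| still grows with offset - the AVO trend survives
```

The normalized curve, not the raw picks, is then compared with the AVO behavior measured at the calibration well.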

Another clever AVO analysis technique, practiced by Amoco, is to display and compare seismic sections made up of partial stacks. Here, AVO information - or fluid discrimination information - masked by a full stack is retained in a partial stack of the far offsets. A partial stack is similar to a full stack, except that each trace is the average of traces in a small range of offsets rather than all offsets in the CMP gather.
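A partial stack is easy to sketch: average only the traces whose offsets fall in a chosen window. The gather below is hypothetical, with amplitude growing more negative with offset, a gas-like AVO signature that a near/far comparison would reveal but a full stack would hide.

```python
import numpy as np

# Hypothetical CMP gather: 6 traces at increasing offset, 4 time samples
# each; reflection amplitude grows (more negative) with offset.
offsets = np.array([200.0, 500.0, 800.0, 1100.0, 1400.0, 1700.0])
gather = -(1.0 + 0.001 * offsets)[:, None] * np.ones((6, 4))

def partial_stack(gather, offsets, lo, hi):
    # Average only the traces whose offsets fall in [lo, hi).
    sel = (offsets >= lo) & (offsets < hi)
    return gather[sel].mean(axis=0)

near = partial_stack(gather, offsets, 0.0, 900.0)     # near-offset stack
far  = partial_stack(gather, offsets, 900.0, 2000.0)  # far-offset stack
# far-offset amplitudes are stronger (more negative) than near-offset ones
```

Displaying near- and far-offset sections side by side turns the amplitude difference between them into a qualitative fluid indicator.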

Where is AVO going?

Some companies use AVO routinely in an attempt to reduce risk associated with potential drilling locations. Others have tried the technique and found the processing too time-consuming or too difficult. An increasing number of practitioners insist on quantitative agreement between synthetic and observed data before they will use the technique. Currently, most examples of AVO interpretation are qualitative. In the Royal Oil & Gas example, the qualitative match between the two is good, but quantitatively the synthetics predict a 100% increase in amplitude with offset while the data show an increase of more than 200%.

Eliminating the discrepancy between observed and synthetic data is therefore a focus of AVO-related research, and touches on five main topics - processing, synthetic modeling, petrophysics, interpretation and inversion.

Researchers seek a true amplitude processing scheme to produce AVO data traces that can be compared quantitatively to computer-perfect synthetics. Conventional processing for structural imaging does not preserve amplitudes. Researchers are revisiting basic processing steps such as deconvolution, velocity analysis and migration with a view to AVO applications.

Current research in synthetic modeling addresses a wide range of topics. Synthetics are only as good as what goes into them. How should logs sampled every 6 inches (15 cm) be "averaged," or blocked, to produce layered earth models? Different blocking techniques produce different synthetics. What is the effect of layer thickness on an AVO synthetic? The right combination of layer thickness and seismic wavelength gives rise to reverberations in the layer that alter reflected amplitude. Can seismic energy be modeled as simple rays, or is it better to use seismic wave theory? In the examples presented above, ray theory was enough. But when angles become large and velocity variations complex, more computer-intensive wave theory is necessary. How does velocity anisotropy affect AVO? As angle of incidence increases, differences between horizontal and vertical velocities cannot be ignored in earth models.

In general, petrophysics is the link between earth models and any seismic interpretation, but it is particularly important in AVO interpretation. Changes in porosity, mineralogy, cementation, stress, compaction or other properties that modify the velocity or density of the rock can give rise to AVO signatures that mask fluid effects. Changes in fluid saturation, on the other hand, may exhibit no change in AVO signature. For example, in shallow or unconsolidated sands, or overpressured zones, the AVO response is about the same for all saturations. Drilling will confirm the presence of gas, but it might be just "fizz water." Laboratory and field measurements on reservoir rocks, and especially nonreservoir rocks, under in-situ conditions are crucial to the construction of a reliable earth model. Improved understanding of rock properties at core, log and seismic scales will lead to more unambiguous AVO interpretation.

Standard AVO interpretation fits reflection amplitudes to straight line approximations of Zoeppritz prediction curves. More refined interpretations quantify goodness-of-fit or other statistical analyses of the fit. Work is under way to abandon the straight line approximation and fit the real curve. 

A great deal of research is devoted to inversion, the attempt to derive a likely earth model starting with real data - the inverse of synthetic modeling. To date, results indicate that knowledge of the Vp/Vs ratio is required for stable inversion. Sometimes Vp can be estimated from seismic stacking velocities, but Vs cannot. Full inversion of AVO data for material properties continues to intrigue researchers, but it has yet to be proven feasible. 

What is the future in AVO? One hot topic is three-dimensional (3D) AVO. Many operators have already successfully interpreted 3D seismic data sets for AVO by assembling two-dimensional (2D) AVO sections in series. Few have tried real 3D AVO, that is, considering source-receiver paths in different azimuths. This requires knowing velocity anisotropy in the horizontal plane. 

Time-lapse AVO is another topic that shows promise. As a reservoir is produced, fluid contacts will move. Seismic surveys shot at different times can be analyzed for fluid changes using AVO techniques. Information about drained and undrained volumes can affect development and production plans.

Wednesday, December 20, 2017

Synthetic AVOs from Logs

In much the same way that a blindfolded expert can identify a wine and its vintage, or an X-ray diffraction lab technician can identify mineral components in a rock sample, the key to using AVO for fluid identification is comparison of real data with a standard - in this case a synthetic seismogram. This is an artificial seismic trace manufactured by assuming that some pulse travels through an earth model - rock layers of given thickness, density and velocity - and returns to be recorded. The earth model that produced the synthetic can be modified, sometimes repeatedly, until the synthetic matches the measured data, indicating the earth model is a reasonable approximation of the earth.

The densities and velocities of fluid-saturated rocks necessary for the creation of synthetic traces preferably come from logs or cores. Missing data can be estimated using theoretical or empirical equations. The synthetic traces show the expected AVO effect for each fluid type.
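A minimal normal-incidence version of this workflow can be sketched in a few lines: a layered model of density and P velocity gives an impedance log, impedance contrasts give reflection coefficients, and convolving those with a wavelet gives the synthetic trace. The three-layer values and spike positions below are hypothetical.

```python
import numpy as np

def ricker(f, dt, n):
    # Zero-phase Ricker wavelet of peak frequency f Hz, n samples at dt s.
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Hypothetical three-layer model: shale over gas sand over shale.
rho = np.array([2.30, 2.10, 2.35])        # density, g/cm3
vp  = np.array([2800.0, 2300.0, 2900.0])  # P velocity, m/s

z  = rho * vp                             # acoustic impedance
rc = (z[1:] - z[:-1]) / (z[1:] + z[:-1])  # normal-incidence reflectivity

reflectivity = np.zeros(200)
reflectivity[[80, 120]] = rc              # layer tops at hypothetical samples
trace = np.convolve(reflectivity, ricker(30.0, 0.004, 61), mode="same")
# the gas-sand top produces a negative wiggle, its base a positive one
```

A full AVO synthetic repeats this for each offset with angle-dependent reflection coefficients, which is where shear velocity enters.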

Take, for example, the AVO effect of gas in sandstone predicted from logs in a gas field operated by Texas-based Royal Oil & Gas. Here, acoustic velocities were measured with the DSI Dipole Shear Sonic Imager tool. The seismic event of interest is the circled blue reflection corresponding to the interface between an overlying shale and the gas sand. The trace recorded at zero offset - 0 degrees from vertical, directly above the reflecting point - begins with a small negative amplitude. The amplitude becomes more negative as offset increases. The AVO response to oil is the same. But when hydrocarbons are replaced with water, the AVO response changes. Now polarity becomes positive (amplitude deflection to the right), and amplitude decreases with offset.

The AVO effect at any interface can be quantified with the Zoeppritz formulas, and plotted as a curve. At the top of the gas sand in the Royal Oil & Gas well, and for most gas sands, Zoeppritz calculations predict an increasingly negative amplitude with offset. In this case, the predicted negative amplitude increases 100%. Also shown are the Zoeppritz-predicted AVO effects for the gas-water contact deeper in the sand, and for a nonfluid, lithologic contact higher in the section.

These curves of amplitude versus angle of incidence can be used to make quantitative comparisons between synthetic predictions and amplitudes from real data, once the data have been processed for true amplitudes. This is made easier by plotting amplitude versus angle of incidence squared, which converts Zoeppritz curves to straight lines. AVO behavior can then be succinctly described by the line's gradient, G, and normal incidence intercept, P.

For typical reservoir rocks, the reflection at an interface between a water-bearing layer and a hydrocarbon-bearing layer is such that a negative polarity reflection becomes more negative - intercept and gradient both negative - or a positive polarity reflection becomes more positive - intercept and gradient both positive.

The simplest indicator of hydrocarbons is therefore the product of gradient and intercept. A positive product most likely indicates oil or gas. A product trace for the Royal Oil & Gas example clearly reveals gas. G traces, P traces or product traces can be plotted next to each other to produce sections, similar to stacked sections, for AVO interpretations.
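Extracting G and P is a straight-line fit. The sketch below uses the common small-angle form R(theta) ≈ P + G sin²(theta), so amplitudes are regressed against sin² of the incidence angle; the picked amplitudes are hypothetical values for a gas-sand reflection (small negative intercept, negative gradient).

```python
import numpy as np

# Hypothetical picked amplitudes of one reflection at six incidence angles.
theta = np.radians([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
amps  = np.array([-0.102, -0.108, -0.118, -0.131, -0.148, -0.168])

# Least-squares fit of R(theta) = P + G * sin^2(theta):
x = np.sin(theta) ** 2
G, P = np.polyfit(x, amps, 1)   # slope = gradient G, intercept = P

product = G * P                 # positive product suggests hydrocarbons
```

Repeating the fit for every gather along a line and plotting the product trace by trace gives the product section described above.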


Friday, December 15, 2017

Hydrocarbon Detection With AVO

Imagine a geophysical technique with the volumetric coverage of surface seismic that could delineate zones of gas, oil and water. In many ways, that summarizes the potential of interpreting seismic reflection amplitude variation with offset, or AVO.

In the late 1920s, the seismic reflection technique  became a key tool for the oil industry, revealing shapes of subsurface structures and indicating drilling targets. This has developed into a multibillion dollar business that is still primarily concerned with structural interpretation. But advances in data acquisition, processing and interpretation now make it possible to use seismic traces to reveal more than just reflector shape and position. Changes in the character of seismic pulses returning from a reflector can be interpreted to ascertain the depositional history of a basin, the rock type in a layer, and even the nature of the pore fluid. This last refinement, pore fluid identification, is the ultimate goal of AVO analysis.

Early practical evidence that fluids could be seen by seismic waves came from "bright spots" -streaks of unexpectedly high amplitude on seismic sections - often found to  signify gas.

Bright spots were recognized in the early 1970s as potential hydrocarbon indicators, but drillers soon learned that hydrocarbons are not the only generators of bright spots. High amplitudes from tight or hard rocks look the same as high amplitudes from hydrocarbons, once seismic traces have been processed conventionally.  Only AVO analysis, which requires special handling of the data, can distinguish lithology changes from fluid changes.

An analogy for the physics of AVO is the skipping of a stone across a pond. Everyone knows that if a stone is dropped or thrown vertically into water, it sinks instantly. But skimmed nearly horizontally, it bounces off the surface of the water. The amplitude of the bounce, which was zero at vertical incidence, increases with the angle of incidence.

Now replace the water with rubber and repeat the process. This time the vertical bounce is high, and the high-angle bounce is low. The amplitude of the bounce decreases with angle of incidence, a dramatically different behaviour from the water case.

Analogous concepts applied to seismics form the basis for inferring formation properties - density and compressional and shear velocities - from seismic reflection amplitude variation with angle of incidence. And because formation density and velocity depend on the fluid saturating the formation, reflection amplitude variation also permits identification of pore fluid.

Conventional treatment of seismic data, however, masks this fluid information. The problem lies with the way seismic traces are manipulated in order to enhance reflection visibility. In a seismic survey, as changes are made in the horizontal distance between source and receiver, called offset, the angle at which a seismic wave reflects at an interface also changes. Seismic traces - recordings of transmitted and reflected sound - are sorted into pairs of source-receiver combinations that have different offsets but share a common reflection point midway between each source-receiver pair. This collection of traces is referred to as a common midpoint (CMP) gather. In conventional seismic processing, in which the goal is to create a seismic section for structural or stratigraphic interpretation, traces in a gather are stacked - summed to produce a single average trace.

Stacking enhances signal at the expense of noise, making reflections visible, and compresses data volume. But it destroys information about amplitude variation with offset. Consider two reflections in the section: one has amplitude increasing with offset, as in the case of the stone bouncing off the water, and the other has amplitude decreasing with offset, similar to the stone bouncing off rubber. Once the reflection traces are stacked, they may have identical amplitudes - they may even be bright spots - while their AVO signatures are completely different. AVO analysis can usually distinguish fluid contrasts from lithology contrasts, but it requires carefully processed gathers that have not been stacked.
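The information loss is easy to demonstrate numerically. The two hypothetical reflections below have opposite AVO behavior, one growing with offset (stone on water) and one decaying (stone on rubber), yet their stacked averages come out identical.

```python
import numpy as np

# Hypothetical amplitudes of two reflections picked at nine offsets.
offsets = np.linspace(0.0, 2000.0, 9)
rising  = 0.10 + 0.00005 * offsets   # amplitude increases with offset
falling = 0.20 - 0.00005 * offsets   # amplitude decreases with offset

stack_rising  = rising.mean()        # stacking = averaging over offsets
stack_falling = falling.mean()
# both stacks equal 0.15: the opposite AVO signatures have vanished
```

On a stacked section these two reflections would be indistinguishable, which is why AVO analysis must work on the unstacked gathers.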

A little theory

The general expressions for the reflection of compressional and shear waves at a boundary as a function of the densities and velocities of the layers in contact at the boundary are credited to Karl Zoeppritz. Zoeppritz found that amplitudes increase, decrease, or remain constant with changing angle of incidence, depending on the contrast in density, compressional velocity, Vp, and shear velocity, Vs, across the boundary.

Conventional seismic surveys deal exclusively with the reflection of compressional waves. When a compressional seismic wave arrives vertically at a horizontal interface, the amplitude of the reflected wave is proportional to the amplitude of the incoming wave, according to the normal incidence reflection coefficient. When the seismic wave arrives obliquely, the situation is more complicated. The compressional reflection coefficient is now a tortuous function of the angle of incidence, the densities, and Vp and Vs of the two layers in contact. 
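The vertical-incidence case is simple enough to write down: the reflection coefficient reduces to the acoustic impedance contrast, R0 = (Z2 - Z1)/(Z2 + Z1) with Z = rho x Vp. The layer values below are hypothetical shale-over-gas-sand numbers chosen to give the small negative coefficient typical of a gas-sand top.

```python
# Normal-incidence compressional reflection coefficient from the
# acoustic impedance contrast across a boundary.
def normal_incidence_rc(rho1, vp1, rho2, vp2):
    z1, z2 = rho1 * vp1, rho2 * vp2   # acoustic impedance Z = rho * Vp
    return (z2 - z1) / (z2 + z1)

# Hypothetical layers: shale (2.40 g/cm3, 2900 m/s) over
# gas sand (2.05 g/cm3, 2400 m/s) -> impedance drop, negative R0
r0 = normal_incidence_rc(2.40, 2900.0, 2.05, 2400.0)
```

At oblique incidence this single ratio is replaced by the full angle-dependent Zoeppritz expression involving both layers' densities, Vp and Vs.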

Thursday, December 14, 2017

Sand Control

Sand production erodes hardware, blocks tubulars, creates downhole cavities, and must be separated and disposed of on surface. Completion methods that allow sand-prone reservoirs to be exploited often severely reduce production efficiency. The challenge is to complete wells to keep formation sand in place without unduly restricting productivity.

 Unconsolidated sandstone reservoirs with permeability of 0.5 to 8 darcies are most susceptible to sand production, which may start during first flow or later when reservoir pressure has fallen or water breaks through. Sand production strikes with varying degrees of severity, not all of which require action. The rate of sand production may decline with time at constant production conditions and is frequently associated with cleanup after stimulation. 

 Sometimes, even continuous sand production is tolerated. But this option may lead to a well becoming seriously damaged, production being killed or surface equipment being disabled. 

Factors controlling the onset of mechanical rock failure include inherent rock strength, naturally existing earth stresses and additional stress caused by drilling or production. In totally unconsolidated formations, sand production may be triggered during the first flow of formation fluid due to drag from the fluid or gas turbulence. This detaches sand grains and carries them into the perforations. The effect grows with higher fluid viscosity and flow rate, and with high pressure differentials during drawdown.

In better cemented rocks, sanding may be sparked by incidents in the well's productive life, for example, fluctuations in production rate, onset of water production, changes in gas/liquid ratio, reduced reservoir pressure or subsidence.

Fluctuations in the production rate affect perforation cavity stability and in some cases hamper the creation and maintenance of sand arches. An arch is a hemispherical cap of interlocking sand grains. 

Other causes of sanding include water influx, which commonly causes sand production by reducing capillary pressure between sand grains. After water breakthrough, sand particles are dislodged by flow friction. Additionally, perforating may reduce permeability around the surface of a perforation cavity and weaken the formation. Weakened zones may then become susceptible to failure at sudden changes in flow rate.

Predicting Sanding Potential

The completion engineer needs to know the conditions under which a well will produce sand. This is not always a straightforward task. At its simplest, sand prediction involves observing the performance of nearby offset wells.  

In exploratory wells, a sand flow test is often used to assess formation stability. In a sand flow test, sand production is detected and measured on surface during a drillstem test (DST). Quantitative information may be acquired by gradually increasing flow rate until sand is produced, the anticipated flow capacity of the completion is reached or the maximum drawdown is achieved. A correlation may then be established between sand production, well data, and field and operational parameters.

Accurately predicting sand production potential requires detailed knowledge of the formation's mechanical strength, the in-situ earth stresses and the way the rock will fail.  Laboratory measurements on recovered cores may be used to gather rock strength data. Field techniques like microfracturing allow measurement of some far-field earth stresses. 

The earth's in-situ stresses are due to many factors, including the weight of the overburden, tectonic forces and pore pressure. While the vertical stresses may be estimated using bulk density logs, horizontal stresses are more problematic. Accurate estimates of horizontal stresses are integrated with logs and, using a geologic model, a continuous profile of earth stresses is created. Various geologic models have been developed to cope with the different environments encountered. Reservoir pore pressure information is also needed, and this may be estimated using wireline formation testing tools or DSTs.

Once it has been established that at planned production rates sand is likely to be produced, the next step is to choose a completion strategy to limit sanding. A first option is to treat the well with "tender loving care," minimizing shocks to the reservoir by changing drawdown and production rate slowly and in small increments. Production rate may be reduced to ensure that drawdown is below the point at which the formation grains become detached.

Wednesday, December 13, 2017

The Sand Control Completion

Sanding is a problem in weak or unconsolidated sandstones. The objective of a sand control completion is to eliminate sanding while maintaining a production rate that is economic, minimizes reservoir damage and thus maximizes recovery. Near the wellbore, sand movement can reduce permeability locally. Produced sand can erode downhole and surface equipment, and its removal can be costly. In sufficient quantities, sand can plug the completion or surface facilities.

An objective of perforating in these highly productive and often unconsolidated sands is to reduce the near-wellbore pressure gradient during production. There are two schools of thought on the best way to do this. The established method is to perforate in a way that takes advantage of the protection afforded by subsequent gravel packing. Theoretical studies show that perforation geometry can sometimes be optimized to obviate gravel packing.

For gravel packing, many large-diameter perforations are preferred to a few small holes. This is because larger holes provide a larger area open to flow and therefore less pressure drop on production. To achieve this, perforators producing large-diameter holes and high shot density are used. A uniform shot distribution further reduces formation stress in addition to preserving casing strength.

To create large, clean perforation tunnels, these wells are typically shot underbalanced with TCP using high shot density guns. The ideal underbalance will sufficiently clean perforation tunnels without breaking down the formation. Sand control could perhaps be provided by maintaining production rates low enough to prevent collapse of the perforation tunnel's stable arch of interlocking grains, like a keystone arch over a doorway. But such a low production rate is generally uneconomic, and arches are unstable when flow conditions change. Instead, the arch is usually stabilized by filling the perforation with gravel.


Tuesday, December 12, 2017

The Stimulated Completion

Stimulated completions fall into two categories: acidizing and hydraulic fracturing. Occasionally, the two are combined in an acid-frac, which improves productivity by using acid to etch the surfaces of hydraulically induced fractures, preventing full closure.

Success of stimulation depends largely on how well the perforation allows delivery of treatment fluids and frac pressures into the reservoir. Because these fluids and pressure-induced fractures are intended to move beyond the perforation, shot phasing, density and hole diameter are of higher priority than depth of penetration. Underbalance perforating is often used because cleaner perforation tunnels give fluids more direct paths to the reservoir. In some cases, such as TCP with high shot density guns, underbalance can be increased to the point where stimulation is not required to improve productivity. However, stimulated reservoirs are usually of low permeability, greatly limiting the surge available to clean the perforations. Further increases in underbalance may achieve no improvement in cleaning.

Uniformity of perforation diameter is essential to accurately determine the cumulative area of the casing entrance holes. Knowing this area and the pumping pressure allows calculation of the flow rate into the formation, needed to monitor progress of the stimulation.
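The relation between entrance-hole area, pressure drop and rate can be sketched with a simple orifice equation, q = Cd * A * sqrt(2*dp/rho), applied per perforation. The discharge coefficient, hole count and pressures below are illustrative assumptions, not measured values:

```python
import math

# Sketch: relating pump rate to perforation entrance-hole area using a
# simple orifice relation. SI units throughout; the discharge
# coefficient Cd (~0.6-0.9) is an assumed value.

def injection_rate(n_perfs, hole_diameter_m, dp_pa, fluid_density_kgm3, cd=0.85):
    area = math.pi * (hole_diameter_m / 2.0) ** 2      # area of one hole
    v = math.sqrt(2.0 * dp_pa / fluid_density_kgm3)    # orifice velocity
    return n_perfs * cd * area * v                     # total rate, m3/s

# 20 perforations, 10-mm holes, 1 MPa across the perforations, water
q = injection_rate(20, 0.010, 1e6, 1000.0)
print(f"Estimated rate: {q*1000:.1f} L/s")
```

Run in reverse, the same relation lets the engineer infer perforation-friction pressure drop from a measured pump rate, which is why hole-diameter uniformity matters.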

 A number of studies have investigated the relationship between perforation phasing and the development of hydraulic fractures. In general, hydraulic fractures propagate normal to the minimum stress in the portion of the reservoir undisturbed by the presence of the wellbore. The general conclusion is that for an ideal fracture job, perforations are aligned with the maximum stress direction, so fractures extending from the perforations will lie in the plane that has the least resistance to opening.

Monday, December 11, 2017

The Natural Completion

The natural completion is often defined as one in which little or no stimulation is required for production. This approach is usually chosen for reservoirs that are less prone to damage, have good transmissibility, and are mechanically stable.

Of primary importance in selecting the perforating gun are its depth of penetration and effective shot density. Depth is important because the deeper the perforation, the greater the effective wellbore radius; also flow is less likely to be influenced by formation damaged during drilling.

Shot density also ranks high because more holes mean more places for hydrocarbon to enter the wellbore and a greater likelihood that perforations will intersect productive intervals of an anisotropic reservoir. After shot density and depth of penetration, most important is phasing because, when properly chosen, it provides hydrocarbon with the most direct path to the wellbore. Under typical flow conditions, perforation diameter does not adversely affect flow once it exceeds 0.25 in. (6 mm), which today is provided by nearly all guns used in natural completions.

A key consideration in perforation design of natural completions is the selection of overbalance versus underbalance perforating. Overbalance means the pressure of wellbore fluids exceeds reservoir pressure at the time of perforating. Under this condition, wellbore fluids immediately invade the perforation. For this reason, clean fluids without solids are preferred to prevent plugging of perforations. Cleanup can occur only when production begins.

Increasingly, wells that have sufficient reservoir pressure to flow to surface unassisted are completed under underbalance conditions. Underbalance is the trend because of wider recognition that it provides cleaner perforations - therefore better production - and because of greater availability of gun systems that allow it. Underbalance perforating can provide large gains in reservoir productivity. The question is, how much underbalance is appropriate? Excessive underbalance risks mechanical damage to the completion or test string by collapsed casing or a packer that becomes damaged, stuck or unseated. It can also encourage migration of fines within the reservoir, reducing its permeability. Insufficient underbalance, however, doesn't effectively clean the perforations. Production may therefore be hindered, mainly by lack of removal of the crushed zone and, secondarily, by lack of removal of debris. The crushed zone is the damaged rock in and around the perforation tunnel; debris is mainly the liner material of the spent shaped charge, plus fragments of cement and rock.

The optimal underbalance, which removes both debris and the crushed zone and does not damage the formation, accomplishes virtually all cleanup during the portion of initial production that is dominated by the surge of reservoir fluids into the perforations. Cleanup after this point is negligible because hydrocarbon follows the already cleaned paths of least resistance. During production, the pressure drop across damaged areas is insufficient for further cleanup. Recent experiments have shown that if a suboptimal underbalance is used, some cleanup will take place during production, but productivity never reaches that achieved with optimal underbalance.

When well testing is planned, underbalance perforating has become the standard, particularly when a drillstem test (DST) is included. Underbalance perforating is ideally suited because a DST includes hardware that allows establishing underbalance and running high shot density guns. This setup provides excellent well control and often saves time because the perforating guns are run below the test string. Pressure measurements can be recorded either downhole or in real time at surface, and are available for decision-making during the test. The MSRT MultiSensor Recorder/Transmitter and LINC Latched Inductive Coupling equipment allow real-time measurement and surface readout of downhole pressure. The main advantage of this system is the added mechanical and safety reliability of measuring pressure below the DST shut-in valve. In addition, memorized data can be read out at surface when LINC equipment is run, eliminating the need for the cable in the test string while the well is flowing.

From an operations viewpoint, underbalance perforating by wireline-conveyed guns causes a surge that lifts cable and guns. The high flow rate or liquid slugs associated with this surge can blow the guns and cable up the well. A common limit on underbalance when perforating via wireline is 700 psi, although this is often higher in tight reservoirs, which are not capable of delivering a substantial surge.

The choice of underbalance may be based on data collected since the early 1980s from laboratory and field studies and from increasing use of underbalance completions (primarily tubing-conveyed perforating). More recently, computer programs have been developed. The IMPACT integrated mechanical properties analysis & characterization of near-wellbore heterogeneity interpretation program computes a value of safe underbalance based on the mechanical properties of the formation estimated from sonic and density logs. Local experience also helps guide the selection of optimal underbalance. 

Overbalance perforating still has a role, however. Often significant are its speed for short intervals and the availability of larger, high shot density guns compared to those for through-tubing underbalance perforating. The selection of overbalance versus underbalance rests on weighing economic versus production variables.


Friday, December 8, 2017

Perforation Strategy

Perforations form conduits into the reservoir that not only allow hydrocarbon recovery, but influence it. Each of the three main types of completions - natural, stimulated and sand control - has different perforating requirements. In the natural completion (in which perforating is followed directly by production) many deep shots are most effective. In stimulated completions - hydraulic fracturing and matrix acidizing - a small angle between shots is critical to effectively create hydraulic fractures and link perforations with new pathways in the reservoir. And in gravel packing, many large-diameter perforations effectively filled with gravel are used to keep the typically unconsolidated formation from producing sand and creating damage that would result in large pressure drops during production.

To meet the broad requirements of perforating, there are many perforating guns and gun conveyance systems. Optimizing perforating requires selection of the hardware best suited to the job. A good place to start, therefore, is with the basics of perforating hardware.

The Language of Perforating

There was a time when describing the perforation operation defined the perforator: running through-tubing guns, shooting casing guns or tubing-conveyed perforating (TCP).  

The two broad categories of guns are exposed and hollow carrier guns. These can be used in two types of perforating operations: through-tubing, in which guns are run through a production or test string into larger diameter casing; and through-casing, in which guns are larger diameter and run directly into casing.

Exposed guns are run on wireline and have individual shaped charges sealed in capsules and mounted on a strip, in a tube or along wires. The detonator and detonating cord are exposed to borehole fluids. These guns are run exclusively through tubing and leave debris after firing.

Perforation-Reservoir Interactions 

Flow efficiency of a perforated completion and stimulation success are determined mainly by how well the perforation program takes advantage of the reservoir properties. The program includes determination of two main factors:

  • The proper differential between reservoir and wellbore pressure (the usual preference is for underbalance, meaning wellbore pressure is less than reservoir pressure at the time of perforating).
  • Gun selection, which determines penetration tunnel length, shot phasing, shot density and perforation entrance hole diameter. The relative importance of the different components of shot geometry varies with the completion type.

The main reservoir property that affects flow efficiency is permeability anisotropy from whatever cause - in sandstone, typically from alignment of grains related to their deposition; in carbonates, typically from fractures or stylolites. Shale laminations, natural fractures and wellbore damage, which can cause permeability anisotropy, are considered separately because they are so common. In most formations, vertical permeability is lower than horizontal. In all these cases, productivity is improved by use of guns with high shot densities.

Natural fractures are common in many reservoirs and may provide high effective permeability even when matrix permeability is low. However, productivity of perforated completions in fractured reservoirs requires good hydraulic communication between the perforations and the fracture network. To maximize the chances of intersecting a fracture, penetration length is the highest priority, with phase angle second. Shot density is less important because fractures form planes, and increasing density does not increase contact with a fracture system. In fractured formations, a popular gun configuration uses 60-degree phasing with 5 spf. A Schlumberger version of this gun has a large charge that penetrates 30 in. (76 cm) into the standard API test target.

An important geometric consideration of a perforation is how deeply it penetrates - whether it reaches beyond the zone damaged during drilling or connects with existing fractures. The penetration of various shaped charges is documented in surface tests and in tests under stress with API targets.

Penetration in surface tests is different from that under stress in the well. The unconfined compressive strength of test targets is a minimum of 3300 psi, representing only low-strength reservoir rock (reservoir rock strength ranges from 0 to 25,000 psi). To estimate the depth of penetration into a rock of arbitrary strength under a given stress, data measured at unstressed surface conditions have to be transformed. Because rock penetration data exist for only a few combinations of charges, rock strengths and stresses, a semiempirical approach is used that combines experimental data with penetration theory.

Schlumberger calculates penetration change caused by formation stress using experimental data for three generic charge designs after first calculating the change due to formation strength at zero stress. These data provide transforms implemented in the SPAN Schlumberger Perforating Analysis program.

The SPAN program consists of two modules: penetration length calculation and productivity calculation. In the penetration module, perforation length and diameter estimates are calculated under downhole conditions for any combination of gun, charge and casing size.

Another influence on flow efficiency is formation damage, usually considered in the context of skin, an index of flow efficiency related to properties of the reservoir and completion. Skin comprises a variety of influences: flow convergence, wellbore damage, perforation damage, partial penetration (perforation of less than the total height of the reservoir) and the angle between the perforation and bedding plane. The goal is to design perforations that minimize skin and therefore maximize flow efficiency.
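The effect of skin on productivity can be illustrated with the steady-state radial flow result, in which flow efficiency is ln(re/rw) divided by ln(re/rw) plus skin. A minimal sketch, with assumed drainage and wellbore radii:

```python
import math

# Sketch: how skin degrades flow efficiency in steady-state radial flow.
# Flow efficiency = ln(re/rw) / (ln(re/rw) + s); radii are assumed.

def flow_efficiency(re_m, rw_m, skin):
    base = math.log(re_m / rw_m)
    return base / (base + skin)

rw, re = 0.1, 300.0   # wellbore and drainage radii, m (illustrative)
for s in (0, 5, 20):
    print(f"skin = {s:2d}: flow efficiency = {flow_efficiency(re, rw, s):.2f}")
```

With these radii, a skin of 5 already cuts flow efficiency to roughly 60%, and a skin of 20 to under 30% - which is why removing near-wellbore damage pays off so quickly.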

Formation damage is caused by invasion of mud filtrate and cement fluid loss into the formation, creating a zone of lower effective permeability around the wellbore. Extending the perforation beyond the damaged zone may reduce this skin significantly, enhancing productivity. But even for perforations that do penetrate farther, the wellbore damage zone reduces the effective tunnel length.

During perforating, a "crushed zone" of reduced permeability is created around the perforation. In laboratory experiments, the thickness and permeability damage of the crushed zone are influenced by all variables to varying degrees: the type of shaped charge, formation type and stress, underbalance and cleanup conditions. Pucknell and Behrmann found that permeability near the perforation is reduced because microfracturing replaces larger pores with smaller ones. The current rule of thumb is to assume a crushed zone 1/2 inch (13 mm) thick with permeability reduced by 80% to 90%. 


Wednesday, December 6, 2017

Wellbore Damage Chapter 2

The interplay of oil and water in porous rock provides two remaining types of damage occurring only in the formation - wettability change and water block. In their native state, most rocks are water-wet, which is good news for oil production. The water clings to the mineral surfaces, leaving the pore space available for hydrocarbon production. Oil-base mud can reverse the situation, rendering the rock surface oil-wet, pushing the water phase into the pores and impeding production. A solution is to inject mutual solvent to remove the oil-wetting phase and then water-wetting surfactants to reestablish water-wet conditions.

Finally, water block occurs when water-base fluid flushes a hydrocarbon zone so completely that the relative permeability to oil is reduced to zero - this can occur without a wettability change. The solution is again mutual solvents and surfactants, this time to reduce interfacial tension between the fluids, and to give the oil some degree of relative permeability and a chance to move out.


Assessing the nature of the damage is difficult because direct evidence is frequently lacking. The engineer must use all available information: the well history, laboratory test data, and experience gained in previous operations in the reservoir. The initial goal, of course, is selecting the treatment fluid. Later, the exact pumping schedule - volumes, rates, number of diverter stages - must be worked out.

Since carbonate acidizing with HCl circumvents damage, the main challenge of fluid selection lies almost entirely with sandstone acidizing, where damage must be removed. Laboratory testing on cores and the oil can positively ensure that a given HF-HCl mud acid system will perform as desired - it is particularly recommended when working in a new field. These tests first examine the mineralogy of the rock to help pick the treating fluid. Then, compatibility tests, conducted between the treating fluid and the oil, make sure that mixing them produces no emulsion or sludge. Finally, an acid response curve is obtained by injecting the treating fluid into a cleaned core plug, under reservoir conditions of temperature and pressure, and monitoring the resulting change in permeability. The acid response curve indicates how the treating fluid affects the rock matrix - the design engineer strives for a healthy permeability increase.

Most treatment fluid selection for sandstone acidizing builds on recommendations established by McLeod in the early 1980s. The choice is between different strengths of the HCl-HF combination and depends on formation permeability, and clay and silt content. For example, higher strengths are used for high-permeability rock with low silt and clay content - high-strength acid in low-permeability rock can create precipitation and fines problems. Strengths are reduced as temperature increases because the rate of reaction then increases.
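This kind of guideline lends itself to a simple decision table. The sketch below captures the spirit of a McLeod-style selection - stronger mud acid for clean, permeable sand, weaker for tight or silty rock, reduced strength at high temperature - but the cutoffs and acid strengths are illustrative only, not a treatment recommendation:

```python
# Sketch of McLeod-style mud acid selection for sandstone. The
# permeability cutoffs, silt/clay thresholds and acid strengths below
# are illustrative placeholders, not published guideline values.

def select_mud_acid(perm_md, silt_clay_pct, temp_c):
    if perm_md > 100 and silt_clay_pct < 10:
        acid = "12% HCl - 3% HF"        # high perm, clean sand
    elif perm_md > 20:
        acid = "7.5% HCl - 1.5% HF"     # moderate perm
    else:
        acid = "6% HCl - 1% HF"         # low perm: avoid precipitates
    if temp_c > 90:
        acid += " (reduced strength for high temperature)"
    return acid

print(select_mud_acid(perm_md=150, silt_clay_pct=5, temp_c=80))
```

A real selection would also account for mineralogy (carbonate and zeolite content, clay type) established by the laboratory tests described above.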

Whatever their level of sophistication, acidizing models must deal with four processes simultaneously:
  • Tracking of fluid stages as they are pumped down the tubing, taking into account differing hydrostatic and friction losses.
  • Movement of fluids through the porous formation.
  • Dissolution of damage and/or matrix by acid.
  • Accumulation and effect of diverters.

All four phenomena are interdependent. Diverter placement depends on the injection regime; the injection regime depends on formation permeability; formation permeability depends on acid dissolution.

 Execution and Evaluation

Sophisticated planning goes only part way to ensuring the success of a matrix acidizing operation. Just as important is job execution and monitoring. In a study of 650 matrix acidizing jobs conducted worldwide for AGIP, stimulation expert Giovanni Paccaloni estimated that 12% were outright failures, and that 73% of these failures were due to poor field practice. Just 27% of the failures were caused by incorrect choice of fluids and additives. Success and failure were variously defined depending on the well. Matrix acidizing a previously dry exploration well was judged a success if the operation established enough production to permit a well test and possible evaluation of the reservoir. The success of a production well was more closely aligned with achieving a specific skin improvement.

Reasons for poor field operation centered on the technique of bullheading, in which acid is pumped into the well, pushing dirt from the tubing and whatever fluids are below the packer, often mud, directly into the formation. Bullheading can be avoided by using coiled tubing to place acid at the exact depth required, bypassing dirt and fluids already in the well. 

What helped AGIP identify and correct the failures, though, was reliable real-time monitoring of each job, particularly the tracking of skin. If skin improves with time, the job is presumably going roughly as planned and is worth continuing. If skin stops improving or gets worse, then it may be time to halt operations. The problem initially was the poor quality of field measurements, traditionally simple pressure charts. Then in 1983, digital field recording of wellhead pressures was introduced. Today, fluid density, injection flow rates, wellhead and annulus pressures are recorded and analyzed at the wellsite.

Three methods have been proposed to monitor skin. In 1969, McLeod and Coulter suggested analyzing the transients created before and after treatment fluid injection. The analysis was performed after job execution and was therefore not intended as a real-time technique.

Most recently, Laurent Prouvost proposed a method that takes into account the transients and can be computed in real time using the Dowell Schlumberger MATTIME system. The method takes the measured injection flow rate and, using transient theory, computes what the injection bottomhole pressure would be if skin were fixed and constant - it is generally chosen to be zero. This is continuously compared with the actual bottomhole pressure. As the two pressures converge, it can be assumed that the well is cleaning up. Finally, the difference in pressures is used to calculate skin.
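The core of this comparison can be sketched with steady-state radial flow, converting the difference between measured and zero-skin bottomhole pressures into an apparent skin. All values below are illustrative, and the real method uses full transient theory rather than this simplification:

```python
import math

# Sketch of the Prouvost-style idea: compare measured injection
# bottomhole pressure with the pressure computed for a zero-skin well,
# and convert the difference to an apparent skin. Steady-state radial
# flow is assumed here for simplicity; values are illustrative.

def skin_from_pressures(p_measured_pa, p_zero_skin_pa, q_m3s, mu_pas, k_m2, h_m):
    dp_skin = p_measured_pa - p_zero_skin_pa           # extra drop due to skin
    return 2.0 * math.pi * k_m2 * h_m * dp_skin / (q_m3s * mu_pas)

k = 50e-15      # ~50 md permeability, m2
h = 10.0        # net height, m
q = 0.002       # injection rate, m3/s
mu = 1e-3       # fluid viscosity, Pa.s
s = skin_from_pressures(25e6, 22e6, q, mu, k, h)
print(f"Apparent skin: {s:.1f}")
```

As the acid removes damage, the measured pressure falls toward the zero-skin curve and the computed skin trends toward zero - the signal the field engineer watches during pumping.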

The key to real-time analysis is accurately knowing the bottomhole pressure. This can be estimated from the wellhead pressure or, if coiled tubing is used, from surface annulus pressure. The most reliable method, however, is to measure pressure downhole. This can now be achieved using a sensor package fixed to the bottom of the coiled tubing.

Evaluation should not stop once the operation is complete. The proof of the pudding is in the eating, and operators expect to recoup acidizing costs within ten to twenty days. From the ensuing production data, NODAL analysis can reveal the well's new skin. This can be compared with new predictions obtained by simulating the actual job - that is, using flow rates and pressures measured while pumping the treatment fluids - rather than the planned job. Understanding discrepancies between design and execution is essential for optimizing future jobs in the field.


Monday, December 4, 2017

Wellbore Damage chapter 1

Scales, organic deposits and bacteria are three types of damage that can cause havoc anywhere, from the tubing to the gravel pack, to the formation pore space. Scales are mineral deposits that in the lower pressure and temperature of a producing well precipitate out of the formation water, forming a crust on formation rock or tubing.

With age, they become harder to remove. The treatment fluid depends on the mineral type, which may be a carbonate deposit, sulfate, chloride, an iron-based mineral, silicate or hydroxide. The key is knowing which type of scale is blocking flow. 

Reduced pressure and temperature also cause heavy organic molecules to precipitate out of oil and block production. The main culprits are asphaltenes and paraffinic waxes. Both are dissolved by aromatic solvents. Far more troublesome are sludges that sometimes occur when inorganic acid reacts with certain heavy crudes. There is no known way of removing this type of damage, so care must be taken to avoid it through the use of antisludging agents.

Bacteria are most commonly a problem in injection wells, and they can exist in an amazing variety of conditions, with and without oxygen, typically doubling their population every 20 minutes or so. The result is a combination of slimes and assorted amorphous mess that blocks production. An additional reason for cleansing the well of these organisms is to kill the so-called sulfate-reducing bacteria that live off sulfate ions in water either in the well or the formation. Sulfate-reducing bacteria produce hydrogen sulfide that readily corrodes tubulars. Bacterial damage can be cleaned with sodium hypochlorite, and it is as important to clean surface equipment, where injection water originates, as it is to clean the well and formation.

Two further types of damage can contribute to blocked flow in the gravel pack and formation - silts and clays, and emulsions. Silts and clays, the target of most mud acid jobs and 90% of all matrix treatments, can originate from the mud during drilling and perforating or from the formation when dislodged during production, in which case they are termed fines. When a mud acid system is designed, it is useful to know the silt and clay composition, whatever its origin, since a wrongly composed acid can result in precipitates that block flow even more. Emulsions can develop when water and oil mix, for example when water-base mud invades an oil-bearing formation. Emulsions are highly viscous and are usually removed using mutual solvents.