Sunday, October 14, 2018

Correlating Seismic and Well Data

Correlation is performed in several stages. The first is establishing geologic tops on each well using the detailed correlation module. With individual well data displayed for up to four wells simultaneously, the interpreter can correlate horizons from one well to the next, registering consistent geologic tops in every well across the field. All well data have the potential to aid in this process, with core information, petrophysical log interpretations, wireline testing results and production logs all able to contribute to identifying significant geologic horizons.

The next step signals the beginning of the merging of seismic and well data. In the well tie module, the 3D seismic trace at a given well is displayed versus two-way time alongside all pertinent well data, which are already converted to time using borehole seismic or check-shot data. The main purpose of this combined display is to tie events recognized on the seismic trace (seismic horizons) to the recently established geologic tops derived from the well data. These ties, or links, between the two data types are crucial at several subsequent stages during the construction of the reservoir model. In addition, seismic markers found at this stage can be transferred to a seismic interpretation workstation for horizon tracking.

The first use of the tie, or link, between seismic and well data is in the velocity mapping module that enables the 3D seismic record versus time to be converted to a record versus depth. This crucial step subsequently allows the seismic data to guide the mapping of geologic horizons between wells. 

A velocity map for each layer is first assessed from the stacking velocities used in the 3D seismic processing.
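The article does not name the method, but the standard way to turn stacking (RMS) velocities into layer-by-layer interval velocities is the Dix equation. A minimal sketch, assuming two-way times in seconds and velocities in m/s:

```python
import math

def dix_interval_velocity(t, v_rms):
    """Interval velocity for each layer between successive picks,
    from RMS (stacking) velocities via the Dix equation.
    t: two-way times to layer boundaries (s); v_rms: RMS velocities (m/s)."""
    v_int = []
    for i in range(1, len(t)):
        num = t[i] * v_rms[i] ** 2 - t[i - 1] * v_rms[i - 1] ** 2
        v_int.append(math.sqrt(num / (t[i] - t[i - 1])))
    return v_int

# Two boundaries: RMS velocity increasing with depth
print(dix_interval_velocity([1.0, 2.0], [2000.0, 2500.0]))
```

Each interval velocity comes from the difference of time-weighted squared RMS velocities, so errors in the stacking velocities are amplified in thin intervals, one reason the resulting velocity maps are assessed rather than taken at face value.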

Thursday, October 11, 2018

Integrated Reservoir Interpretation

Every field is unique, and not just in its geology. Size, geographical location, production history, available data, the field's role in overall company strategy, the nature of its hydrocarbon: all these factors determine how reservoir engineers attempt to maximize production potential. Perhaps the only commonality is that decisions are ultimately based on the best interpretation of data. For that task, there is a variability of approaches to match the fields being interpreted.

In an oil company with separate geological, geophysical and reservoir engineering departments, the interpretation of data tends to be sequential. Each discipline contributes and then hands over to the next discipline. At the end of the line, the reservoir engineer attempts to reconcile the cumulative understanding of the reservoir with its actual behavior. Isolated from the geologist and geophysicist who have already made their contributions, the reservoir engineer can play with parameters such as porosity, saturation and permeability, but is usually barred, because of practical difficulties, from adjusting the underlying reservoir geometry.

This scenario is giving way to the integrated asset team, in which all the relevant disciplines work together, ideally in close enough harmony that each individual's expertise can benefit from insight provided by others in the team. There is plenty of motivation for seeking this route at any stage of a field's development. Reservoirs are so complex, and the art and science of characterizing them still so convoluted, that the uncertainties in exploitation, from exploration to maturity, are generally higher than most would care to admit.

In theory, uncertainty during the life of a field goes as follows: During exploration, uncertainty is highest. It diminishes as appraisal wells are drilled and key financial decisions have to be made regarding expensive production facilities; for offshore fields, these typically account for around 40% of the capital outlay during the life of the field. As the field is developed, uncertainty on how to most efficiently exploit it diminishes further. By the time the field is dying, reservoir engineers understand their field perfectly.

A realistic scenario may be more like this: During exploration, uncertainty is high. But during appraisal, the need for crucial decisions may encourage tighter bounds on the reservoir's potential than are justifiable. Later, as the field is developed, production fails to match expectations, and more data, for example 3D seismic data, have to be acquired to plug holes in the reservoir understanding. Uncertainties begin to increase rather than diminish. They may even remain high as parts of the field become unproducible due to water breakthrough and as reservoir engineers still struggle to fathom the field's intricacies.

Asset teams go a long way toward maximizing understanding of the reservoir and placing a realistic uncertainty on reservoir behavior. They are the best bet for making the most sense of the available data. What they may lack, however, are the right tools. Today, interpretation is mainly performed on workstations, with the raw and interpreted data paraded in their full multidimensionality on the monitor. Occasionally, hard-copy output is still the preferred medium, for example, logs taped to walls for correlating large numbers of wells.

There are workstation packages for 3D seismic interpretation, for mapping, for viewing different parts of the reservoir in three dimensions, for petrophysical interpretation in wells, for performing geostatistical modeling in unsampled areas of the reservoir, for creating a grid for simulation, for simulating reservoir behavior, and more. But for the reservoir manager, these fragmented offerings lack cohesion. In a perceived absence of an integrated reservoir management package, many oil companies pick different packages for each specific application and then connect them serially.

Any number of combinations is possible. The choice depends on oil company preferences, the history of the field and the problem being addressed. Modeling a mature elephant field in the Middle East with hundreds of wells and poor seismic data may require a different selection of tools than a newly discovered field having high-quality 3D seismic coverage and a handful of appraisal wells. Reservoir management problems vary from replanning an injection strategy for mature fields, to selecting horizontal wells for optimum recovery, to simply estimating the reserves in a new discovery about to be exploited.

Whatever the scenario, the tactic of stringing together diverse packages creates several problems. First is data compatibility. Since the industry has yet to firm up a definitive geoscience data model, each package is likely to accept and output data in slightly different ways. This forces a certain amount of data translation as the interpretation moves forward; indeed, a small industry has emerged to perform such translation. Second, the data management system supporting this fragmented activity must somehow keep track of the interpretation as it evolves. Ideally, the reservoir manager needs to know the history of the project, who made what changes and, if necessary, how to backtrack.

Postprocessing Seismic Data

The interpretation path obviously depends on the data available. For two of the three fields considered here, there were excellent 3D seismic data. And in all three fields, there was at least one well with a borehole seismic survey. The first goal in working with seismic data is to ensure that the borehole seismic and the surface seismic at the borehole trajectory look as similar as possible. If that is achieved, then the surface seismic can be tightly linked to events at the borehole and subsequently used to correlate structure and evaluate properties between wells. If no borehole seismic data are available, an alternative is to use synthetic seismograms, computed from acoustic and density logs.

Differences in seismic data arise because of difficulties in achieving a zero-phase response, a preferred format for displaying seismic results in which each peak on the trace corresponds exactly to an acoustic impedance contrast and, by inference, a geologic interface. Processing seismic data to obtain a zero-phase response depends on accurately assessing the signature of the acoustic source. This is straightforward in borehole seismics because the measured signal can be split into its downgoing and upgoing components, and the former yields the source signature. In surface seismics, the downgoing field is unmeasured and statistical techniques must be used to assess the signature, leading to less reliable results. In Conoco Indonesia, Inc.'s field, the surface seismic and borehole seismic data initially matched poorly. With the residual processing module, the mismatch is resolved by comparing the frequency spectra of the two data sets and designing a filter to pull the surface seismic data into line with the borehole seismic data. In this case, the postmatch alignment is excellent.

However, if the alignment resulting from this treatment remains poor, it may prove necessary to vary the design of the filter versus two-way time. This is achieved by constructing filters for several specific time intervals along the well trajectory and then interpolating between them to obtain a representative filter at any two-way time.
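As a rough illustration of the idea (a stabilized spectral-division filter is an assumption here; the actual residual processing module is certainly more elaborate), a single matching filter can be designed from the two spectra and applied like this:

```python
import numpy as np

def matching_filter(surface, borehole, eps=1e-3):
    """Stabilized spectral ratio B/S: a frequency-domain filter that,
    applied to the surface trace, pulls its spectrum toward the
    borehole trace. eps prevents division by near-zero spectra."""
    S = np.fft.rfft(surface)
    B = np.fft.rfft(borehole)
    return B * np.conj(S) / (np.abs(S) ** 2 + eps)

def apply_filter(F, trace):
    return np.fft.irfft(F * np.fft.rfft(trace), n=len(trace))

# Surface trace: a copy of the borehole trace with the wrong amplitude
bh = np.sin(np.linspace(0.0, 20.0, 256))
sf = 0.5 * bh
matched = apply_filter(matching_filter(sf, bh), sf)
print(bool(np.max(np.abs(matched - bh)) < 0.05))  # → True
```

Time-varying matching, as described above, would repeat this design over several time windows along the well and interpolate the filters between them.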

The next step is to perform a seismic inversion on the matched seismic data, using the inversion module. This process converts the matched seismic data to acoustic impedance, defined as the product of rock density and acoustic velocity. Acoustic impedance can be used to classify lithology and fluid type. Mapped across a section in two dimensions or throughout space in three dimensions, acoustic impedance provides a valuable stratigraphic correlation tool. For Conoco Indonesia, Inc., inversion provided valuable insight into the lateral extent of the reservoir.

The inversion computation requires the full spectrum of acoustic frequencies. Very low frequencies are missing from the seismic record, so these are estimated interactively from acoustic well logs. Between wells, this low-frequency information is interpolated, and for a 3D inversion, the information must be mapped everywhere in reservoir space.
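The relationship between a reflectivity series and acoustic impedance that underlies any such inversion can be written recursively. A toy round trip (not the inversion module itself, which also merges in the log-derived low frequencies) looks like:

```python
def invert_to_impedance(reflectivity, ai0):
    """Recursive (trace-integration) inversion: rebuild acoustic
    impedance layer by layer from a reflectivity series, starting
    from a known impedance ai0, e.g. taken from well logs."""
    ai = [ai0]
    for r in reflectivity:
        # r = (AI_below - AI_above) / (AI_below + AI_above)
        # rearranges to AI_below = AI_above * (1 + r) / (1 - r)
        ai.append(ai[-1] * (1.0 + r) / (1.0 - r))
    return ai

# Round trip: impedances -> reflectivity -> impedances
true_ai = [4.0e6, 5.0e6, 4.5e6, 6.0e6]  # density * velocity
refl = [(b - a) / (b + a) for a, b in zip(true_ai, true_ai[1:])]
print(invert_to_impedance(refl, true_ai[0]))
```

The starting value ai0 is exactly the kind of low-frequency information the seismic band cannot supply, which is why it must come from logs and be interpolated between wells.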


Wednesday, October 3, 2018

Beating the Exploration Schedule With Integrated Data Interpretation

Oil companies have made great strides in improving the success rate of exploration and production, mainly by using seismic data to guide well placement. But data gathering and interpretation do not stop there. Once a well is spudded, the company commits considerable resources to gather more data such as mud logs, cores, measurements-while-drilling (MWD) information and wireline logs. This creates a huge volume of data, often at different scales and qualities, that must be efficiently handled to ensure maximum return on the exploration investment.

Communications hardware already allows large volumes of information to be moved rapidly from one computer to another, even from remote locations. And data management and universal data standards are gradually being introduced throughout the industry to facilitate data access.

To take full advantage of this new environment, however, geoscientists need their interpretation applications integrated into a common system that allows data sharing.

Until recently, no attempt was made to standardize the data format used by interpretation software packages, which meant that they could not communicate with each other or use a common database. More time was spent on converting and loading data than on interpretation. Applications often ran in series rather than in parallel, introducing further delays. This resulted in drilling or completion decisions being made using incomplete answers while full interpretations took weeks or months.


Friday, August 10, 2018

Geophysical Interpretation: From Bits and Bytes to the Big Picture

Well logs measure reservoir properties at intervals of a few inches, providing a high density of information mostly in the vertical direction. But the volume of reservoir sampled by logs represents only one part in billions. Seismic data, on the other hand, cover the overwhelming majority of reservoir volume, but at lower vertical resolution. A processed three-dimensional (3D) seismic survey may contain a billion data points sampling a couple of trillion m3, and some surveys are 10 times bigger. The geophysical interpreter must handle this massive amount of information quickly and produce a clear 3D picture of the reservoir that can guide reservoir management decisions.

In the overall seismic scheme, interpretation builds upon the preceding work of acquisition and processing. Fast new ways to simultaneously visualize and interpret in three dimensions are changing how interpreters interact with geophysical data. Seismic interpretation packages band together a collection of tools designed to simplify seismic interpretation and smooth the road from input to output. GeoQuest's seismic interpretation tools (the Charisma and IESX systems) offer a variety of levels of user-friendliness and sophistication. These packages complete the process in roughly four steps: data loading, interpretation, time-to-depth conversion and map output. This article takes a look at how they help the geophysical interpreter harness a seismic workstation filled with a billion data points, and make it fun.

Getting Data in the Right Place

By the time 3D data arrive at the interpretation workstation, they have already undergone numerous quality control checks, and are ready to be loaded. The objective in data loading is to ensure that as much of the available data as possible is loaded onto the computer, and that these data points are correctly positioned. Data loading continues to be simplified by software advances.

Fitting all the data onto the computer has been difficult because disk space has been expensive. To work around the problem, most data loading routines convert seismic traces from SEG-Y format to a compressed workstation format. This compression can be perilous, because it reduces the dynamic range of the trace data. SEG-Y data are usually represented in 32-bit floating-point format, which allows a range of about +/- 10^37. Data in 16-bit format have a range of +/- 32,768. Converting data from 32-bit to 8-bit reduces computer storage requirements by a factor of four, but also reduces dynamic range. Reducing dynamic range may negate much of the care and money that went into acquisition and processing of the seismic data. Although the dynamic range of compressed data is usually more than the human eye can perceive, computer-driven interpretation can be made to take advantage of 32-bit data. Some specialists recommend that data never be compressed, and since disk space is becoming less expensive, that will eventually become a more widespread option.

When compression is necessary, workstations can help the interpreter do it intelligently through scaling. Scaling ensures that data amplitudes are properly sized so that the most important information is preserved when trace values are converted from SEG-Y format to compressed format. In the Charisma system, scaling is user-controlled and different scale factors can be tested; this allows flexibility, but usually requires practice. In the IESX system, scaling is done automatically, trace by trace. The scaling factor is stored in the header of each trace and reapplied to each trace as it is read from the database. This results in a reconstructed 32-bit seismic section, regardless of the storage format.
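A sketch of such per-trace scaled compression (the function names are hypothetical and the real storage formats are more involved, but the principle is the same):

```python
import numpy as np

def compress_trace(trace):
    """Scale a 32-bit float trace into 8-bit integers; keep the scale
    factor, as IESX does per trace, so amplitudes can be restored."""
    scale = np.max(np.abs(trace)) / 127.0
    return np.round(trace / scale).astype(np.int8), scale

def restore_trace(packed, scale):
    """Reapply the stored scale factor on read-back."""
    return packed.astype(np.float32) * scale

trace = np.array([0.002, -1.5, 0.75, 0.0001], dtype=np.float32)
packed, scale = compress_trace(trace)
print(restore_trace(packed, scale))
```

The large amplitudes survive the round trip, but the 0.002 sample falls below the 8-bit quantization step and comes back as zero, exactly the dynamic-range loss the paragraph above warns about.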

Loading seismic data in the right place in the computer involves assigning a geographic location to each trace. For 3D data this is simpler than for 2D: the inputs are the spatial origin and orientation of the volume, the order and spacing of the shot lines, and the trace spacing. From these few numbers, geographic coordinates for each of thousands or millions of traces can be computed.
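For a regular 3D grid, that computation is plain vector arithmetic. A sketch assuming a grid origin, an inline azimuth measured clockwise from north, and constant line and trace spacings:

```python
import math

def trace_coordinates(origin, azimuth_deg, line_spacing, trace_spacing,
                      n_lines, n_traces):
    """Geographic (x, y) for every trace in a 3D survey from the few
    numbers the loader needs: origin, inline azimuth (degrees clockwise
    from north) and the line/trace spacings (m)."""
    az = math.radians(azimuth_deg)
    inline = (math.sin(az), math.cos(az))   # unit vector along shot lines
    xline = (math.cos(az), -math.sin(az))   # perpendicular, line to line
    return {
        (i, j): (origin[0] + j * trace_spacing * inline[0] + i * line_spacing * xline[0],
                 origin[1] + j * trace_spacing * inline[1] + i * line_spacing * xline[1])
        for i in range(n_lines) for j in range(n_traces)
    }

# Lines shot due east, 50-m line spacing, 25-m trace spacing
coords = trace_coordinates((500000.0, 4200000.0), 90.0, 50.0, 25.0, 2, 3)
print(coords[(1, 2)])
```

Two millions of traces reduce to six input numbers; 2D lines and OSPs, by contrast, need their locations read trace by trace from navigation files, which is why they complicate loading.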

If there are older 2D or 3D data, or offset seismic profiles (OSPs), to be interpreted with the currently loaded 3D survey, data loading becomes more complicated. Trace locations for each 2D line or OSP must be accessed from separate navigation files or from the trace headers themselves. Data of different vintages, amplitudes and processing chains must also be reconciled. This is not a trivial task, but it is greatly eased with today's workstations.

Additional data that can be loaded include well locations, well deviation surveys, log data, formation tops, stacking velocities from seismic processing, time-depth data from well seismic surveys and cultural or geographic data such as lease boundaries or coastlines. 

In 3D surveys, the seismic lines shot during the survey are called inline sections, or rows. Vertical slices perpendicular to these, called crossline sections or columns, can be generated from the inline data. In 3D land surveys, the acquisition geometry can be more complicated than in marine surveys, but usually the inline direction is taken to be along receiver lines. In both cases, horizontal slices cut at a constant time are called time slices.

The way seismic data are stored by different systems affects the time required to generate new sections and displays or perform other poststack processing. In the Charisma and IES systems, inline sections, crossline sections and time slices are stored separately, so a single data value may be stored up to three times. In the IESX system, every inline trace is stored only once, decreasing data storage volume. In such a volume there is no need to generate crosslines, because arbitrary vertical sections may be cut in any orientation in real time. Horizontal seismic data are stored in a separate file.
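With the cube held once in memory as a 3D array, the three section types described above are simple index operations; a small NumPy illustration:

```python
import numpy as np

# A toy volume indexed [inline, crossline, time sample]; storing the
# cube once lets any section be cut on the fly, as in the IESX scheme
n_il, n_xl, n_t = 4, 5, 6
cube = np.arange(n_il * n_xl * n_t, dtype=np.float32).reshape(n_il, n_xl, n_t)

inline_section = cube[2, :, :]      # one shot line (row)
crossline_section = cube[:, 3, :]   # perpendicular section (column)
time_slice = cube[:, :, 1]          # constant two-way time

print(inline_section.shape, crossline_section.shape, time_slice.shape)
# → (5, 6) (4, 6) (4, 5)
```

The trade-off the paragraph describes is storage versus compute: duplicating the volume per orientation buys fast sequential reads, while single storage relies on fast indexing to cut sections on demand.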

Until recently, 3D data loading routines were not user friendly, often requiring a computer specialist. But new applications are beginning to make this step more straightforward, allowing interpreters to load their data alone or with support over the telephone. However, most companies still employ dedicated data loaders, or use contract workers.

Tracking Continuities and Discontinuities

Now we come to the real interpretation part of the job: identifying the reservoir interval and marking, either manually or automatically, important layer interfaces above, within and below it. These interfaces, called horizons, are reflections that signify boundaries between two materials of different acoustic properties. Interpretation also includes identifying faults, salt domes and erosional surfaces that cut horizons.

Some interpreters first pick horizons as far as possible horizontally on a set of vertical sections, then outline faults. Other interpreters pick faults first, then pick horizons up to their intersections with faults. The choice depends on personal preference and experience. Horizons shallower than the reservoir should be interpreted because they affect horizons below. Interpretation of horizons outside the reservoir interval is important if they correspond to regional markers that can be picked from logs. Interpreting several horizons that bracket the target zone may also enhance time-to-depth conversion and give clues to geologic history.

Knowing which horizons correspond to the reservoir comes from previous experience in the area, such as earlier 2D seismic lines. This is usually accomplished by tying 3D data to an existing 2D line or well. Tying a seismic line to a well is done by comparing an expected seismic trace at the well with real seismic data. This is achieved with synthetic seismograms. To create a synthetic, the sonic and density logs are converted to time, often by using a check-shot survey. Next, the sonic and density logs are combined to give an acoustic impedance log, the product of velocity and density. Then, through an operation called convolution, a pulse trace that mimics the seismic source is used to change the acoustic impedance log into a synthetic seismic trace.
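That chain of steps, logs to impedance to reflectivity to synthetic trace, can be sketched in a few lines. The Ricker pulse here is a common stand-in for the source wavelet, an assumption rather than what any particular package uses:

```python
import numpy as np

def synthetic_seismogram(velocity, density, wavelet):
    """Synthetic trace from time-converted sonic (velocity) and density
    logs: impedance -> reflectivity -> convolution with a source pulse."""
    ai = velocity * density                             # acoustic impedance
    refl = (ai[1:] - ai[:-1]) / (ai[1:] + ai[:-1])      # reflectivity series
    return np.convolve(refl, wavelet, mode="same")

def ricker(n=21, f=25.0, dt=0.004):
    """A Ricker pulse, a common stand-in for the seismic source."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

vel = np.array([2000.0] * 50 + [3000.0] * 50)   # m/s, one step at sample 50
rho = np.array([2200.0] * 50 + [2400.0] * 50)   # kg/m3
trace = synthetic_seismogram(vel, rho, ricker())
print(int(np.argmax(trace)))   # → 49: the peak sits at the impedance step
```

The single impedance increase produces a single positive peak, which is the waveform character an interpreter would match against the real trace at the well.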

Now it's time to compare the synthetic with the seismic data at the well. Geologic boundaries, such as the top of the reservoir, are identified in the original logs. The boundaries are then correlated with the time-converted logs, the acoustic impedance log and then the synthetic seismogram. Waveform characteristics of the synthetic are compared with the real seismic trace to determine the seismic representation of, and travel time to, the geologic boundaries at the well location. However, at seismic wavelengths (50 to 300 ft), what appears to be one layer in the seismic section will normally be several layers in the logs. A main use, then, of tracking horizons in seismic data is not to distinguish thin layers, but to provide information about the continuity and geometry of reflectors to guide mapping of layer properties between wells.

To track a horizon, trace characteristics are followed horizontally across the whole seismic survey. Common characteristics used to track an event are the polarity or change in polarity of the trace. At any time, a trace will be of either negative or positive polarity, or a zero crossing. A positive polarity reflection, or peak, indicates an increase in acoustic impedance, while a negative polarity reflection or trough, indicates a decrease in acoustic impedance. A zero crossing is a point of no amplitude, usually between a negative and positive portion of a seismic trace. The amplitude of the peaks and troughs is usually color coded. A wide range of color schemes allows interpreters to accent features to be tracked.

A horizon may be tracked in a variety of ways. Points on the horizon may be manually picked by clicking with the mouse on a visual display of a vertical section. If the seismic signal is sufficiently continuous, the horizon may be tracked automatically using a tool called an autotracker. Autotracking requires the interpreter to specify the signal characteristics of the horizon to be tracked. These include polarity, a range of amplitude and a maximum time window in which to look for such a signal. Given a few seed points, or handpicked clues, autotrackers can pick a horizon along a single seismic line or through the entire data volume. In faulted areas, autotrackers can usually be used if seed points are picked in every fault block. Horizons picked with autotrackers must be quality checked manually and may require editing by an interpreter. Still, the time savings are huge compared to manually picking thousands of lines.
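A minimal autotracker along one line might look like the following sketch, which carries a pick from trace to trace by searching a small time window around the previous pick for the strongest peak, exactly the polarity, amplitude-range and window parameters described above (commercial autotrackers are far more sophisticated):

```python
import numpy as np

def autotrack(section, seed_trace, seed_time, window=3, min_amp=0.0):
    """Sketch of seed-based horizon autotracking on one line: from a seed
    pick, choose the strongest peak (positive local maximum above min_amp)
    within +/- window samples of the previous pick, trace by trace."""
    n_traces, n_samples = section.shape
    picks = {seed_trace: seed_time}
    t = seed_time
    for tr in range(seed_trace + 1, n_traces):
        lo = max(1, t - window)
        hi = min(n_samples - 1, t + window + 1)
        best, best_amp = None, min_amp
        for s in range(lo, hi):
            a = section[tr, s]
            if a > best_amp and a >= section[tr, s - 1] and a >= section[tr, s + 1]:
                best, best_amp = s, a
        if best is None:
            break               # event lost: a new seed point is needed
        picks[tr] = best
        t = best
    return picks

# A dipping reflector: one peak per trace, deepening one sample per trace
sec = np.zeros((5, 20), dtype=np.float32)
for tr in range(5):
    sec[tr, 8 + tr] = 1.0
print(autotrack(sec, 0, 8))   # → {0: 8, 1: 9, 2: 10, 3: 11, 4: 12}
```

The break on a lost event mirrors practice: in faulted areas the tracker stops at the discontinuity, which is why a seed is needed in every fault block.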

If the horizon is difficult to follow, the data can be manipulated using processing applications available within most interpretation systems. The Charisma processing toolbox, for example, includes a variety of filters and other options to produce data that are easier to interpret, without expensive reprocessing. Dip filters suppress noise outside a specified dip range, and highpass filters can reveal discontinuities. Other processes include deconvolution to extract an ideal impulse response from real data, time shifts to align traces, polarity reversals and phase rotations to match data with different processing histories, scaling to boost amplitudes of deep reflections, and time-varying filters to compensate for wave attenuation.

Some horizons defy reprocessing efforts and remain too complex to track with conventional autotrackers. Three examples are: (1) reflections that change polarity along the horizon in response to a lateral change in lithology or fluid content; (2) a local minimum that is positive or a local maximum that is negative; and (3) horizons that are laterally discontinuous. SurfaceSlice volume interpretation helps track these tricky horizons by displaying what might be thought of as "thick" time slices. The SurfaceSlice application was developed at Exxon Production Research and has been incorporated into GeoQuest's IESX system.

The SurfaceSlice method can be thought of as scanning the 3D cube to create a new seismic volume that contains only samples that meet some criteria set by the interpreter, such as local troughs within a given amplitude range. Thick slices through the volume are displayed in a chosen color scheme. The slices contain only data on the types of horizons of interest. SurfaceSlice displays resemble a series of contour maps, and are therefore convenient for geologists to interpret. Slice thickness is interactively controlled by the interpreter, and is usually chosen to be less than the wavelength of the reflection in order to stay on the chosen horizon. Multiple windows show a series of slices at increasing times in which the horizon can be rapidly tracked in areal swaths rather than line by line.
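Conceptually, the scan keeps only the samples meeting the interpreter's criteria and blanks everything else; a simplified NumPy version for local troughs within an amplitude range (a sketch of the idea, not the SurfaceSlice implementation):

```python
import numpy as np

def trough_volume(cube, amp_lo, amp_hi):
    """SurfaceSlice-style scan sketch: keep only samples that are local
    troughs (vertical minima) with amplitude in [amp_lo, amp_hi];
    everything else becomes NaN (blank)."""
    out = np.full(cube.shape, np.nan, dtype=np.float32)
    interior = cube[:, :, 1:-1]
    is_trough = (interior < cube[:, :, :-2]) & (interior < cube[:, :, 2:])
    in_range = (interior >= amp_lo) & (interior <= amp_hi)
    keep = is_trough & in_range
    out[:, :, 1:-1][keep] = interior[keep]
    return out

cube = np.zeros((2, 2, 7), dtype=np.float32)
cube[:, :, 3] = -5.0                 # one trough at sample 3, everywhere
v = trough_volume(cube, -10.0, -1.0)
print(int(np.count_nonzero(~np.isnan(v))))   # → 4: one kept sample per trace
```

Displaying a "thick" slice of this masked volume shows the surviving samples as map-like patches, which is why the result reads like a series of contour maps.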

Once picked, whether manually, by autotracking or by SurfaceSlice analysis, the horizon serves multiple purposes. Shallow horizons can be flattened to give a rendition of the underlying volume at the time of their deposition. A horizon, really a set of time values draped on a grid of trace locations, may be linked to a formation marker identified in well logs. If the marker has been picked in several wells, this serves as a consistency check on the seismic interpretation. This link may be used later for time-depth conversion or for extending formation properties away from wells.

Faults and other discontinuities may be picked manually with the mouse in two ways. As in 2D interpretation, classic fault interpretation is done on vertical sections, either inline, crossline or other sections retrieved at any desired azimuth. A fault picked on one section can be projected onto nearby sections to give the interpreter an idea where to look for the next fault pick. Thrust faults and high-angle structures such as salt domes require special handling, because a given horizontal location may have multiple vertical values. A new way of picking faults, made possible by 3D workstations, allows the interpreter to identify faults from discontinuities in time slices.

Another interpretation technique that takes advantage of the 3D nature of data storage is called attribute analysis. Every seismic trace has characteristics, or attributes, that can be quantified, mapped and analyzed at the level of the horizon. And though mapping a horizon is based more or less on the continuity of the seismic reflection, attributes can vary in many ways along the horizon. Traditional trace attributes include the amplitude of the reflection, its polarity, phase and frequency. These trace attributes were introduced years ago to highlight continuities and discontinuities in 2D seismic sections. Now, with the addition of high-speed 3D workstations, interpreters have the freedom to explore new types of attributes. Attributes such as dip and azimuth of horizons can instantly reveal discontinuities and faults that could take weeks to interpret manually. Interpreters are also using attributes to apply sequence stratigraphy to 3D data.
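Dip and azimuth, for instance, fall straight out of the gradients of a horizon time grid. A sketch assuming horizon times in milliseconds and a 25-m trace spacing:

```python
import numpy as np

def dip_azimuth(horizon_time, dx=25.0, dy=25.0):
    """Dip-magnitude and dip-azimuth attributes from a horizon time grid
    (rows along y, columns along x). Abrupt jumps in either attribute
    flag faults far faster than section-by-section inspection."""
    dt_dy, dt_dx = np.gradient(horizon_time, dy, dx)   # ms per m
    dip = np.hypot(dt_dx, dt_dy)
    azimuth = np.degrees(np.arctan2(dt_dx, dt_dy)) % 360.0  # from north
    return dip, azimuth

# A planar horizon dipping purely in the x direction, 2 ms per m
x = np.arange(6) * 25.0
horizon = np.tile(2.0 * x, (5, 1))
dip, az = dip_azimuth(horizon)
print(round(float(dip[2, 3]), 3), round(float(az[2, 3]), 1))  # → 2.0 90.0
```

On a faulted horizon the dip map lights up along the fault trace, where the time grid jumps, which is the instant fault reveal the paragraph describes.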

The reservoir takes shape

An advantage of 3D workstations is their speed compared to a pencil-and-paper job; autotrackers lift some of the workload from interpreters, letting them do more in less time. Other advantages, such as time slices, SurfaceSlice displays and attribute maps, are techniques made possible because the data reside in 3D on a workstation. But the seismic sections are still 2D representations of 3D information, and interpreters still perform quantitative interpretation in 2D.

This is changing as more interpreters use the full 3D-visualization capabilities of new workstations. The ability to see the data volume, to zoom and change perspective, gives interpreters new insight into the features they interpret on horizons. Proper illumination makes surfaces easier to understand. Changing the light source to a grazing elevation can highlight subtle features such as faults and fractures, for the same reason that the best aerial photos of the earth's surface are shot in early morning or late afternoon to maximize shadows. More advanced workstations allow interpreters to illuminate horizons with lights from different locations and change the reflective properties of surfaces. Interpreters can spend less time figuring out what the structure is, and more time understanding how it can affect development decisions. A rainbow-colored contour map, once a marvel of the seismic screen, pales next to a 3D rendering of the same surface.

Structures that appear obscure or disconnected when examined in 2D seismic views may become clear or continuous in 3D. Or, just as importantly, features that appear connected in one perspective may be disjointed in another. Seismic properties between two deviated wells, either existing or proposed, can be examined by extracting the seismic image on the twisted plane between them. This gives reservoir planners a tool for verifying reservoir connectivity, whether for exploration purposes or for planning improved recovery campaigns. Well logs, interpreted horizons, faults and other structures can be viewed and moved, alone or along with the seismic data.

Today, the most powerful 3D visualization products provide real-time interaction with the 3D image for lighting, shading , rotation and transparency. However, interaction with the image for creating and editing interpretation has typically been limited.

Time-to-Depth Conversion

Once horizons and structures are interpreted in time, the next step is to convert the interpretation to depth. The relationship between time and depth is velocity, so a velocity model is needed. Different workstation systems exhibit varying degrees of sophistication in their creation of velocity models for time-to-depth conversion. Most systems offer simple geometrical conversions based on velocity models that may vary vertically and horizontally. These convert points from time to depth by moving them in straight vertical lines. The Charisma DepthMap package includes geophysical modeling in the form of seismic ray tracing and permits lateral translation of points to perform time-to-depth conversion with increased reliability.

If more than one horizon is to be converted to depth, either an average velocity to each horizon must be estimated, or the average velocity to the shallowest horizon plus the interval velocities between each pair of horizons down to the target horizon.

In the absence of logs or well seismic surveys, seismic stacking velocities can substitute for average vertical velocities. Stacking velocities are derived from seismic data during processing and used to combine seismic traces to produce data that are easier to interpret. They contain large components of horizontal velocity and are usually available at 500-m to 1-km spacing across the survey area. These data are interpolated to the same sample interval as the seismic time horizon grid. Then the velocity grid is multiplied by the time grid to give a depth grid. The key limitation of stacking velocities is their lack of accuracy, especially in regions of complex velocity or complex structure.
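The grid multiplication is one line of array arithmetic; a sketch assuming the horizon grid holds two-way times in milliseconds and the velocity grid holds average vertical velocities in m/s (hence the division by two):

```python
import numpy as np

def depth_grid(twt_grid, vavg_grid):
    """Vertical-stretch time-to-depth conversion: average velocity times
    one-way time. twt_grid: two-way time (ms); vavg_grid: m/s."""
    return vavg_grid * (twt_grid / 1000.0) / 2.0

twt = np.array([[2000.0, 2100.0],
                [2050.0, 2200.0]])   # two-way time horizon grid, ms
vavg = np.array([[2500.0, 2500.0],
                 [2600.0, 2600.0]])  # interpolated velocities, m/s
print(depth_grid(twt, vavg))
```

A 2000-ms pick at 2500 m/s maps to 2500 m. The simplicity is also the weakness: every point moves straight down, so the result inherits the inaccuracy of the stacking velocities and ignores ray bending in complex structure.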

Time-depth data from a check-shot survey give an accurate vertical velocity model, but only at the check-shot location. In the absence of other data, this velocity can be used uniformly across the field to convert the seismic times to depth. Stacking velocities can be calibrated at the well using check-shot surveys.

A synthetic seismogram built from sonic and density logs can provide a comparison trace for time-to-depth conversion. Disadvantages of this technique are the limited extent of logs (most logs do not provide information all the way to the surface) and the discrepancy between velocities measured at sonic frequencies and those at seismic frequencies. Synthetics are most useful when calibrated with a check-shot survey, which improves the time-to-depth conversion.

Velocity models and images from VSPs are the most powerful data for converting surface seismic times to depth. VSPs sample velocities at more depths than check shots, and unlike synthetic seismograms created from sonic logs, VSPs have a frequency content similar to that of surface seismic waves. And above all, VSPs provide images that can be matched directly to surface seismic sections.

Putting it all on the map

Once data about reservoir structures are stored, 2D and 3D map images can be generated for reservoir characterization. Surfaces may be mapped in time or, if there is a velocity model, in depth. Basic mapping tools for this reside within most seismic interpretation packages, and there are also separate, stand-alone mapping packages that accept seismic interpretations for map generation.


Tuesday, July 31, 2018

Questioning the Way We Drill

Since June 1993, a team of engineers from drilling contractor Sedco Forex has been reviewing a range of issues, from the organization of drilling activities to optimum rig design. By looking at the whole picture, the team aims to develop an efficient drilling rig with a minimal environmental impact that may be operated by fewer, and more skilled, people.

The cast normally required to drill and complete a well involves a host of different companies. Each subcontractor comes to the rig site with dedicated equipment and personnel, requiring mobilization, demobilization and accommodation for all those involved. Therefore, the key to streamlining a rig is to reorganize operations, change traditional practices and job descriptions, and reallocate responsibilities. After this, packages and tasks may be restructured and amalgamated.

In the past, oil companies tended to control the hiring and managing of contractors and subcontractors. However, recent reappraisals of their core activities by a number of oil companies have led to a change in the way some wells are drilled. For example, in what are called integrated contracts, the role of organizing the subcontractors may be devolved by the oil company to one of the contractors.

Integrated contracts cut down on the need for the operator to supervise subcontracts, and improve synergy between the participating companies. However, in their reappraisal of the drilling process, the Sedco Forex drilling engineers have taken a more radical approach. For example, all mud functions may be integrated so that mud mixing and cleaning, and cuttings disposal, become the responsibilities of a single organization - as opposed to today's arrangement of splitting the work between the mud engineer, the rig crew and several specialists (who provide extra cuttings-cleaning equipment, for example).

But with the aim of creating a fully integrated rig team, the proposals cut even more deeply through the divisions created by traditional service company demarcations. 

The team has assembled a set of reorganization proposals that may be adapted to specific client programs.
  • Merging cement and mud activities - By handing over responsibility for mud and cement to a single fluids engineer, personnel and equipment may be shared.
  • Improving the synergy between mud engineering and mud logging - Having the mud logging engineer - or geologger - share an office/laboratory with the fluids engineer encourages cooperation and ensures consistency in fluids and formation information.
  • Integrating the measurement-while-drilling (MWD) and wireline logging functions - The same personnel may run all directional, density, resistivity and porosity logs whether logging while drilling or on wireline. However, more sophisticated wireline logs would probably involve a specialist logging engineer. The driller and the drilling crew may assist in running the wireline tools using a logging unit that is an integral part of the rig.
  • Combining the driller's and directional driller's jobs - Giving the driller responsibility for directional drilling means that there is constant directional monitoring throughout drilling.
  • Implementing a single maintenance team - Having all the equipment maintained by a dedicated maintenance crew results in improved consistency and better planning.
  • Cross-training all rig personnel - Increasing the skills of all members of the work force will improve synergy between activities. Not only should the driller be trained to carry out directional drilling duties, but the assistant driller and floor crew should also be able to assist in logging, cementing and mud logging operations.

If all these changes are enacted, the crew needed to drill a well may be cut by approximately 30%. Support facilities such as accommodation and catering may also be downsized accordingly.

"What kind of wells are needed?"

Traditionally, wells have generally been completed using a casing/liner with a diameter of 6 in. [15 cm] or greater. However, an increasing number of slimhole wells are being drilled. Although there is no real standard for defining slimhole wells, they are typically completed with 3 1/2-in. casing in 4 3/4-in. diameter open hole.


Friday, June 8, 2018

Case Study Conformance Control

The Wertz field was a model implementation of a CO2 tertiary flood and, as a result, field performance had been copiously documented. Not only were individual producers and injectors monitored daily, but flow rates of the three phases present - oil, water and CO2 - were also measured. These measurements were made in special substations, one for every dozen wells or so, each with elaborate, automatic apparatuses for sampling the flow into or out of each well and measuring its breakdown into the three phases.

The Wertz producing formation is a 470-ft thick aeolian sandstone at an average depth of 6200 ft, with 240 ft of net pay having 10% porosity and 13-md permeability. The formation is believed to have some fractures and is oil-wet. Sixty-five wells over 1600 acres are used for production, and many more than that have been drilled for injection - alternating water and CO2 injection, commonly referred to as water-alternating-gas (WAG) injection. By mid-1991, the field's fate literally hung in the balance. Total production had dropped precipitously to 4000 BOPD from 12,000 BOPD in 1988, a steeper than expected decline during tertiary flooding.
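As a rough check on how steep that drop was - assuming exponential decline over about 3.5 years, which is our reading of the dates quoted:

```python
import math

# Annualized exponential decline implied by the drop from 12,000 BOPD in
# 1988 to 4,000 BOPD by mid-1991 (about 3.5 years -- an assumption).
q_start, q_end, years = 12000.0, 4000.0, 3.5
decline = math.log(q_start / q_end) / years   # nominal decline rate, 1/yr
```

That works out to roughly 30% per year, which helps explain why the field's fate hung in the balance.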

After trying several other techniques to halt the decline, Amoco turned to conformance control, eventually completing 12 treatments using Marathon's polymer gel technology. Ten treatments were in injectors and two in producers. Some treatments were aimed at blocking matrix porosity, and some aimed to place gel in reservoir fractures. We'll highlight one example of each, illustrating with injector treatments since these were the more successful. In some cases, the treatments extended the life of a pattern by two years. Overall, Amoco estimates that for a total cost of $936,000, the treatments have yielded an increase in producible reserves of 735,000 barrels.
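From those figures, the implied unit cost of the incremental reserves is easy to check - about $1.27 per barrel, in the same low range as Marathon's Big Horn results discussed below:

```python
# Unit cost of the incremental reserves from the 12 Wertz treatments,
# using the totals quoted in the text.
total_cost_usd = 936_000
added_reserves_bbl = 735_000
cost_per_bbl = total_cost_usd / added_reserves_bbl
```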

A crucial preliminary step in all these treatments was candidate selection - the compilation and review of data to determine a well's suitability for treatment. Although any field information could be relevant, five data types were deemed particularly important. They were:
  • Pattern reserves: If the pattern reserve data indicated that secondary and tertiary flooding had pushed out most of the oil, there was no reason to try further production enhancement with conformance control.
  • Historical fluid-injection conformance: If an injection well historically showed a poor injection profile, the corresponding pattern was obviously a candidate for conformance improvement. In the Wertz field, Amoco used radioactive tracer surveys to log injection profiles.
  • Three-phase offset production data: If producing wells in a pattern showed cyclic water and CO2 production that correlated with cycles in the nearby injection well, then it was likely this communication was through an unusually high-permeability channel. The pattern therefore required conformance control.
  • Breakthrough time during the cyclic correlation: essentially the time for water or CO2 to travel between injector and neighboring producer. This helped estimate the size of treatments designed to fill the fracture space between the wells.
  • Well history information: specifically the history of all previous attempts to improve conformance in the well, and why they did or did not work. This information prevented unnecessary workover expense.
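The five criteria above suggest a simple screening rule. The sketch below is our illustration, not Amoco's actual workflow; the function name and inputs are hypothetical:

```python
def is_candidate(remaining_reserves_bbl, poor_injection_profile,
                 cyclic_offset_production, prior_treatment_failed_for_cause):
    """A pattern qualifies only if reserves remain, no prior treatment
    failed for a known reason, and at least one conformance indicator
    (poor injection profile or cyclic offset production) is present."""
    if remaining_reserves_bbl <= 0:
        return False  # flooding has already pushed out most of the oil
    if prior_treatment_failed_for_cause:
        return False  # avoid repeating a known-bad workover
    return poor_injection_profile or cyclic_offset_production
```

Breakthrough time then enters not as a yes/no screen but as a sizing input for the treatment volume.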

Wednesday, May 23, 2018

Pushing Out the Oil with Conformance Control

The growing problem of water production and stricter environmental enforcement of water disposal are forcing oil companies to reconsider conformance control - the manipulation of a reservoir's external fluid drive to push out more oil and less water. The technical challenges range from polymer chemistry to detailed knowledge of reservoir behavior.

By late 1984, after several years' research, Marathon Oil Company laboratories in Littleton, Colorado, USA, established a new polymer-gel system to block high-permeability channels within a reservoir and improve oil recovery. Previous attempts using less sophisticated chemistry had failed because the chemicals had become unstable at reservoir conditions and also were partially toxic. During the next three years, Marathon performed 29 treatments with the new system in nine of its fields in Wyoming's Big Horn basin. Fourteen treatments were in carbonate formations, and 15 were in sandstones.

The greatest success occurred when injection wells were treated. The Big Horn reservoirs are known to be naturally fractured, and the injected polymer-gel system most likely filled much of the fracture system between injector and neighboring producer. This would force subsequent water drive to enter the matrix rock or fractures untouched by the treatment and push out oil. In many cases, a declining production in the neighboring producer was dramatically reversed, staying that way for several years.

Overall, the 29 treatments yielded 3.7 million barrels more oil than if the treatments had never been done, at a total cost of just $0.34 per barrel. Considering that the price of oil at the time ranged from $24 to $30, Marathon had gotten itself some very inexpensive production and a clear signal that the age of conformance control had begun.

What is Conformance Control?

In the context of a reservoir produced with some kind of external fluid drive, conformance describes the extent to which the drive uniformly sweeps the hydrocarbon toward the producing wells. A perfectly conforming drive provides a uniform sweep across the entire reservoir; an imperfectly conforming drive leaves unswept pockets of hydrocarbon. Conformance control describes any technique that brings the drive closer to the perfectly conforming condition - in other words, any technique that somehow encourages the drive mechanism to mobilize rather than avoid those hard-to-move pockets of unswept oil and gas.

In the pantheon of techniques to improve oil recovery, conformance control is relatively unambitious, its goal being simply to improve macroscopic sweep efficiency. Most enhanced oil recovery (EOR) techniques, for example, also strive to improve microscopic displacement efficiency using a variety of surfactants and other chemicals to prize away hydrocarbon stuck to the rock surface. Conformance control is also less expensive than most EOR techniques because the treatments are better targeted and logistically far smaller.

Another factor also favors conformance control. By redistributing a waterdrive so it sweeps the reservoir evenly, water cut is often dramatically reduced. For many mature reservoirs, treatment and disposal of produced water dominate production costs, so less water is good. Environmental regulations also push oil companies to reduce water production. In the North Sea, residual oil in produced water dumped into the ocean is restricted to 40 ppm, an upper limit under increasing pressure from the European Community.

Conformance control during waterflooding covers any technique designed to reduce water production and redistribute waterdrive, either near the wellbore or deep in the reservoir. Near the wellbore, these techniques include unsophisticated expedients such as setting a bridge plug to isolate part of a well, dumping sand or cement in a well to shut off the bottom perforations, and cement squeezing to correct channeling and fill near-well fractures. Deep in the reservoir, water diversion needs chemical treatment.

Initially, straight injection of polymer was tried but proved uneconomical because of the large volumes required to alter reservoir behavior and because polymers tend to get washed out. The current trend is gels, which if correctly placed can do the job more efficiently with much smaller volumes. In the future, potentially less expensive foams, including foamed gel, may be tried. Ultimately, reducing water production may require a new well. The choice of technique or combination of techniques depends crucially on the reservoir and its production history.

Take, for example, the case of two producing zones separated by an impermeable shale, in which the bottom zone has watered out. The first solution is to cement in the bottom zone. Suppose, though, that the shale barrier does not extend to the producing well. Then success with the cement plug becomes short-lived and water soon starts coning toward the top interval. The only recourse now is to inject a permeability blocker - some kind of gelling system - deep into the lower zone. The trick is not letting the gelling system invade the upper zone. This can be achieved by pumping through coiled tubing to the top of the watered-out zone while simultaneously pumping an inert fluid - water or diesel fuel - through the annulus into the upper zone to prevent upward migration of the gelling system.

Deep gelling systems are also the answer for a high-permeability but watered-out formation sandwiched between two lower permeability formations. A casing patch or cement squeeze may halt water production momentarily, but long-term shutoff requires a deeper block. The fractured reservoir is a variant of this scenario. If natural fractures are interconnected, they can provide a ready conduit for water breakthrough, leaving oil in the matrix trapped and unproducible. The solution is to inject a gelling system that fills the fractures and, once gelled, forces injection water into the matrix to drive the oil out.

BP Exploration and ARCO are currently testing a system comprising PHPA and an aluminum-based cross-linker that, it is hoped, will reach deep into the matrix reservoir of the Kuparuk field in northern Alaska. The cross-linker is another metal-carboxylate complex, aluminum citrate. But unlike chromium acetate, this links the PHPA in two distinct temperature-controlled stages. In the first stage, which occurs rapidly in cold water, each aluminum citrate molecule bonds to just one polymer carboxylate site. In the second stage, which occurs only above 50°C [122°F], the aluminum citrate complex can attach to a second carboxylate group, thereby cross-linking two polymer molecules and helping to build a gel network. Because the cross-link itself contains carboxylate groups, and these have an affinity for water molecules, the formed gel may flow in a beaker yet provide an adequate permeability block in porous rock.

BP and ARCO's strategy is to pump the system into the reservoir through injection wells, where the cooler temperature of the injection water will promote only the first-stage reaction, resulting in a pumpable fluid of low viscosity. Then, as the fluid permeates deep into high-permeability sections of the reservoir and experiences higher temperatures, the second stage will kick in and enough of a gel will form to divert the waterdrive to less permeable zones.
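The temperature-gated, two-stage behavior described above can be caricatured in a few lines. This toy model is our illustration, not BP and ARCO's actual kinetics; the 50°C threshold comes from the text, and everything else is assumed:

```python
STAGE2_THRESHOLD_C = 50.0  # cross-linking completes only above ~50 degC

def gel_state(temperature_c):
    """First stage in cold injection water stays pumpable; the second,
    gel-forming stage is gated by reservoir temperature."""
    if temperature_c < STAGE2_THRESHOLD_C:
        return "stage 1: pumpable low-viscosity fluid"
    return "stage 2: cross-linked gel network"
```

The design exploits the fact that injection water is cold near the well and heats up as it moves deep into the reservoir, so gelation happens where it is wanted.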

An alternative gelling system that guarantees injectability into matrix rock uses simple inorganic chemicals that have flowing properties nearly identical to those of water. Inorganic gels were discovered in the 1920s and are used to this day for plugging lost circulation, zone squeezing and consolidating weak formations. Their failing for conformance control has been a very rapid gelation time, but recent innovations using aluminum rather than silicon have resolved this problem.

Besides their inherent ability to permeate deep into matrix rock, inorganic gels have another advantage over their polymer-based cousins. If the treatment fluid is incorrectly placed, causing a deterioration in reservoir performance, inorganic gel can be removed with acid. Of course, the acid has to be able to reach the gel to remove it. Polymer gels, on the other hand, cannot be dismantled easily and are therefore usually in place for the duration.

If deep penetration into matrix is one key factor in the conformance control debate, another concern is contamination of the gelling system through contact with ions in the formation water. As noted, the DGS system may be adversely affected by divalent anions. PHPA, on the other hand, both before and after gelling, may be affected by divalent cations such as Ca2+, which are relatively ubiquitous in formation waters.