Wednesday, April 25, 2018

Corrosion in the Oil Industry

Most metals exist in nature as stable ores of oxides, carbonates or sulfides. Refining them to make them useful requires energy. Corrosion is simply nature's way of reversing an unnatural process back to a lower energy state. Preventing corrosion is vital at every step in the production of oil and gas. Corrosion costs US industries alone an estimated $170 billion a year. The oil industry, with its complex and demanding production techniques, and the environmental threat should components fail, takes an above-average share of these costs.



Corrosion - the deterioration of a metal or its properties - attacks every component at every stage in the life of every oil and gas field. From casing strings to production platforms, from drilling through to abandonment, corrosion is an adversary worthy of all the high technology and research we can throw at it.

Oxygen, which plays such an important role in corrosion, is not normally present in producing formations. It is only at the drilling stage that oxygen-contaminated fluids are first introduced. Drilling muds, left untreated, will corrode not only well casing, but also drilling equipment, pipelines and mud-handling equipment. Water and carbon dioxide - produced or injected for secondary recovery - can cause severe corrosion of completion strings. Acid - used to reduce formation damage around the well or to remove scale - readily attacks metal. Completions and surface pipelines can be eroded away by high production velocities or blasted by formation sand. Hydrogen sulfide [H2S] poses other problems. Handling all these corrosion situations, with the added complications of the high temperatures, pressures and stresses involved in drilling or production, requires the expertise of a corrosion engineer, an increasingly key figure in the industry.

Because it is almost impossible to prevent corrosion entirely, it is becoming more apparent that controlling the corrosion rate may be the most economical solution. Corrosion engineers are therefore increasingly involved in estimating the cost of their corrosion-prevention solutions and the useful life of equipment.

Production wells were completed using 7-in. L-80 grade carbon steel tubing - an H2S-resistant steel - allowing flow rates in excess of 50 MMscf/D at over 150°C. High flow rates, H2S and carbon dioxide all contributed to the corrosion of the tubing.

Mud corrosion - drilling mud also plays a key role in corrosion prevention. In addition to its well-known functions, mud must remain noncorrosive. Oil-base muds are usually noncorrosive and, before the introduction of polymer muds, water-base muds were run at a relatively high pH of 10 or greater. Polymer muds, however, run at lower pH - more acidic and hence more corrosive - so when they were introduced, corrosion from mud became more apparent, bringing a greater awareness of corrosion problems.

Completion design also plays an important role in preventing internal corrosion. Reducing sand production by gravel packing prevents the sand blasting that causes erosion corrosion.

Stimulation programs may inadvertently promote internal corrosion. Depending on lithology, highly corrosive hydrochloric acid (HCl), sometimes with additions of hydrofluoric (HF) acid, is used to improve near-wellbore permeability. These acids can also remove scale buildup on the inside of casing and tubing, allowing direct attack on bare metal.

Wednesday, April 11, 2018

3D Seismic Survey Design

There's more to designing a seismic survey than just choosing sources and receivers and shooting away. To get the best signal at the lowest cost, geophysicists are tapping an arsenal of technology, from integration of borehole data to survey simulation in 3D.

The ideal 3D survey serves multiple purposes. Initially, the data may be used to enhance a structural interpretation based on two-dimensional (2D) data, yielding new drilling locations. Later in the life of a field, seismic data may be revisited to answer questions about fine-scale reservoir architecture or fluid contacts, or may be compared with a later monitor survey to infer fluid-front movement. All these stages of interpretation rely on satisfactory processing, which in turn relies on adequate seismic signal to process. The greatest processing in the world cannot fix flawed signal acquisition.


Elements of a Good Signal

What makes a good seismic signal? Processing specialists list three vital requirements - good signal-to-noise ratio (S/N), high resolving power and adequate spatial coverage of the target. These basic elements form the foundation of survey design.

High S/N means the seismic trace has high amplitudes at times that correspond to reflections, and little or no amplitude at other times. During acquisition, high S/N is achieved by maximizing signal with a seismic source of sufficient power and directivity, and by minimizing noise. Noise can either be generated by the source - shot-generated or coherent noise, sometimes orders of magnitude stronger than deep seismic reflections - or be random. Limitations in the dynamic range of acquisition equipment require that shot-generated noise be minimized with proper source and receiver geometry. Proper geometry avoids spatial aliasing of the signal, attenuates noise and obtains signals that can benefit from subsequent processing. Aliasing is the ambiguity that arises when a signal is sampled less than twice per cycle. Noise and signal cannot be distinguished when their sampling is aliased.
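
To make the sampling rule concrete, here is a minimal Python sketch of temporal aliasing; the 4-ms sample interval and test frequencies are illustrative assumptions, not values from any particular survey.

    # A signal sampled fewer than twice per cycle aliases to a lower
    # apparent frequency.  Sample interval and frequencies are assumed.
    dt = 0.004                   # sample interval, s (4 ms)
    f_nyq = 1.0 / (2.0 * dt)     # Nyquist frequency = 125 Hz

    def apparent_frequency(f, f_nyq):
        """Frequency at which a sinusoid of frequency f appears after sampling."""
        f = f % (2.0 * f_nyq)                        # fold into one sampling period
        return f if f <= f_nyq else 2.0 * f_nyq - f

    for f in (60.0, 125.0, 180.0):                   # Hz
        print(f"{f:6.1f} Hz is recorded as {apparent_frequency(f, f_nyq):6.1f} Hz")
    # 180 Hz exceeds the 125-Hz Nyquist limit and aliases to 70 Hz.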

A common type of coherent noise that can be aliased comes from low-frequency waves trapped near the surface, called surface waves. On land, these are known as ground roll, and they create major problems for processors. They pass the receivers at a much slower velocity than the signal, and so need closer receiver spacing to be properly sampled. Planners always try to design surveys so that surface waves do not contaminate the signal. But if this is not possible, the surface waves must be adequately sampled spatially so they can be removed.
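
The spatial version of the same rule sets the receiver spacing. A small sketch, with assumed (but typical) ground-roll and reflection values, shows why the slow surface wave, not the deep signal, dictates the spacing:

    # Maximum spacing that samples a wave at least twice per apparent
    # wavelength: dx_max = v / (2 * f_max).  Values below are assumed.
    def max_spacing(velocity_m_s, f_max_hz):
        return velocity_m_s / (2.0 * f_max_hz)

    ground_roll = max_spacing(500.0, 30.0)      # slow surface wave: ~8.3 m
    reflection = max_spacing(2500.0, 60.0)      # deep reflection: ~20.8 m
    print(f"ground roll needs receivers every {ground_roll:.1f} m or less")
    print(f"reflections alone would allow {reflection:.1f} m")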

During processing, S/N is enhanced through filters that suppress noise. Coherent noise is reduced by removing temporal and spatial frequencies different from those of the desired signal, if known. Both coherent and random noise are suppressed by stacking - summing traces from a set of source-receiver pairs associated with reflections at a common midpoint, or CMP. The source-receiver spacing is called offset. To be stacked, every CMP set needs a wide and evenly sampled range of offsets to define the reflection traveltime curve, known as the normal moveout curve. Flattening that curve, called normal moveout correction, makes reflections from different offsets arrive at the time of the zero-offset reflection. They are then summed to produce a stack trace.
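
A minimal sketch of the normal moveout correction just described; the zero-offset time, stacking velocity and offsets are assumed example values.

    import math

    # Reflection arrival time at offset x: t(x) = sqrt(t0**2 + (x/v)**2).
    # NMO correction subtracts the moveout so all offsets line up at t0.
    t0 = 1.000      # zero-offset two-way time, s (assumed)
    v = 2500.0      # stacking velocity, m/s (assumed)

    for x in (0.0, 500.0, 1000.0, 2000.0):      # offsets, m
        t = math.sqrt(t0**2 + (x / v)**2)
        print(f"offset {x:6.0f} m: arrives {t:.3f} s, correction {t - t0:.3f} s")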



A 3D CMP trace is formed by stacking traces from source-receiver pairs whose midpoints share a more or less common position in a rectangular horizontal area defined during planning, called a bin. 

The number of traces stacked is called fold - in 24-fold data, every stack trace represents the average of 24 traces.



Many survey designers use rules of thumb and previous experience from 2D data to choose an optimal fold for certain targets or certain conditions. A fringe - called the fold taper or halo - around the edge of the survey will have partial fold, and thus lower S/N, because the first and last shots do not reach as many receivers as shots in the central part of the survey. Getting full fold over the whole target means expanding the survey area beyond the dimensions of the target, sometimes by 100% or more.
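
How fold and the fold taper arise can be seen in a toy midpoint-binning count; the line geometry below is invented purely to illustrate the bookkeeping, not a recommended design.

    from collections import Counter

    # Fold = number of source-receiver midpoints falling in each bin.
    sources = range(0, 500, 50)         # shot every 50 m (assumed)
    receivers = range(0, 1000, 25)      # receiver every 25 m (assumed)
    bin_size = 12.5                     # bin length, m

    fold = Counter()
    for s in sources:
        for r in receivers:
            fold[int(0.5 * (s + r) // bin_size)] += 1

    print("maximum fold:", max(fold.values()))
    # Bins near the ends of the line show partial fold - the taper.
    print("fold in the first bins:", [fold[b] for b in sorted(fold)[:6]])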

Many experts believe that 3D surveys do not require the level of fold of 2D surveys. This is because 3D processing correctly positions energy coming from outside the plane containing the source and receiver, which in the 2D case would be noise. The density of data in a 3D survey also permits the use of noise reduction processing, which performs better on 3D data than on 2D.

Filtering and stacking go a long way toward reducing noise, but one kind of noise that often remains is caused by multiple reflections, "multiples" for short. Multiples are particularly problematic where there is a high contrast in seismic properties near the surface. Typical multiples are reverberations within a low-velocity zone, such as between the sea surface and sea bottom, or between the earth's surface and the bottom of a layer of unconsolidated rock. Multiples can appear as later arrivals on a seismic section, and are easy to confuse with deep reflections. And because they can have the same characteristics as the desired signal - same frequency content and similar velocities - they are often difficult to suppress through filtering and stacking. Sometimes they can be removed through other processing techniques, called demultiple processing.

The second characteristic of a good seismic signal is high resolution, or resolving power - the ability to detect reflectors and quantify the strength of the reflection. This is achieved by recording a high bandwidth, or wide range of frequencies. The greater the bandwidth, the greater the resolving power of the seismic wave. A common objective of seismic surveys is to distinguish the top and bottom of the target. The target thickness determines the minimum wavelength required in the survey, generally considered to be four times the thickness. That wavelength is used to calculate the maximum required frequency in the bandwidth - average seismic velocity to the target divided by minimum wavelength equals maximum frequency.
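
As a worked example of that rule, with an assumed thickness and velocity:

    # Minimum wavelength = 4 * target thickness;
    # maximum required frequency = average velocity / minimum wavelength.
    thickness = 25.0     # target thickness, m (assumed)
    v_avg = 3000.0       # average velocity to target, m/s (assumed)

    lambda_min = 4.0 * thickness     # 100 m
    f_max = v_avg / lambda_min       # 30 Hz
    print(f"wavelengths down to {lambda_min:.0f} m -> frequencies up to {f_max:.0f} Hz")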

The minimum frequency is related to the depth of the target. Lower frequencies can travel deeper. Some seismic sources are designed to emit energy in particular frequency bands, and receivers normally operate over a wider band. Ideally, sources that operate in the optimum frequency band are selected during survey design. More often, however, surveys are shot with whatever equipment is proposed by the lowest bidder.

Another variable influencing resolution is source and receiver depth - on land, the depth of the hole containing the explosive source (receivers are usually on the surface), and at sea, how far below the surface the sources and receivers are towed.

The source-receiver geometry may produce short-path multiples between the sources, receivers, and the earth or sea surface. If the path of the multiple is short enough, the multiple - sometimes called a ghost - will closely trail the direct signal, affecting the signal's frequency content. The two-way traveltime of the ghost is associated with a frequency, called the ghost notch, at which signals cancel out. This leaves the seismic record virtually devoid of signal amplitude at the notch frequency. The shorter the distance between the source or receiver and the reflector generating the multiple, the higher the notch frequency. It is important to choose source and receiver depths that place the notch outside the desired bandwidth.
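
A short sketch of the depth-notch trade-off for a marine source or receiver, assuming vertical ray paths, a perfectly polarity-flipping sea surface and 1500-m/s water velocity:

    # Ghost delay is tau = 2*d/v; with the polarity flip, signal cancels at
    # f = n/tau, so the first nonzero notch sits at v/(2*d).
    v_water = 1500.0     # m/s

    def first_notch_hz(depth_m):
        return v_water / (2.0 * depth_m)

    for d in (3.0, 6.0, 12.0):     # tow depths, m (illustrative)
        print(f"depth {d:4.1f} m: first notch at {first_notch_hz(d):5.1f} Hz")
    # Deeper tow -> lower notch frequency; pick a depth that keeps the
    # notch outside the desired bandwidth.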

On land, short-path multiples can reflect off near-surface layers, making deeper sources preferable. In marine surveys, waves add noise and instability, necessitating deeper placement of both sources and receivers.

The third requirement for good seismic data is adequate subsurface coverage. The lateral distance between CMPs at the target is the bin length. Recording reflections from a dipping layer involves more distant sources and receivers than recording reflections from a flat layer, requiring expansion of the survey area - called the migration aperture - to obtain full fold over the target.
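
A rough geometric estimate of that extra distance, assuming a planar reflector and straight rays (a simplification of the full aperture criterion):

    import math

    # Updip extension needed to capture a dipping reflector at depth z:
    # aperture ~ z * tan(dip).  Depth and dips are assumed values.
    depth = 2000.0     # target depth, m

    for dip_deg in (10.0, 20.0, 30.0):
        aperture = depth * math.tan(math.radians(dip_deg))
        print(f"dip {dip_deg:4.1f} deg: extend coverage ~{aperture:5.0f} m beyond target")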

Transition zones - shallow-water areas - have their own problems, and require specialized equipment and creative planning. Transition zones are complex, involving shorelines, river mouths, coral reefs and swamps. They present a sensitive environment and are influenced by ship traffic, commercial fishing and bottom obstructions. Survey planners have to contend with varying water depths, high environmental noise, complex geology, wind, surf and multiple receiver types - often a combination of hydrophones and geophones.


Evaluation of existing data - 2D seismic lines and results from seismic source tests - warned of potential problem areas. Source tests compared single-source dynamite shots to source patterns, and tested several source depths. The tests indicated the presence of ghost notches at certain depths, leading to a reduction in signal energy within the desired frequency band of 10 to 60 Hz. The source tests also indicated source patterns were ineffective in controlling ground roll in this prospect area. Deployment of the source at 9 m gave a good S/N ratio at 25 to 60 Hz, but produced very high levels of ground roll. Deployment of the source below 40 m gave a good S/N ratio from 10 to 60 Hz and low levels of ground roll. However, such deep holes might be unacceptably time-consuming and costly.


Evaluation of existing 2D lines revealed the frequency content that could be expected from seismic data in the area. It also indicated areas where special care had to be taken to ensure a successful survey. For example, high-velocity beds at the seafloor threatened to cause strong multiples, reducing the energy transmitted to deeper layers and leading to strong reverberations in the water layer.


Evaluation of existing borehole data offered valuable insight into the transmission properties of the earth layers above the target and the geophysical parameters that could be obtained at the target. Comparison of formation tops inferred from acoustic impedance logs with reflection depths on the two VSPs allowed geophysicists to differentiate real reflections from multiples.

Wednesday, April 4, 2018

Saturation Monitoring With the RST Reservoir Saturation Tool

The RST Reservoir Saturation Tool combines the logging capabilities of traditional methods for evaluating saturation in a tool slim enough to pass through tubing. Now saturation measurements can be made without killing the well to pull tubing, and regardless of formation water salinity.

Determining hydrocarbon and water saturations behind casing plays a major role in reservoir management. Saturation measurements over time are useful for tracking reservoir depletion, planning workover and enhanced recovery strategies, and diagnosing production problems such as water influx and injection water breakthrough.

Traditional methods of evaluating saturation - thermal decay time logging and carbon/oxygen (C/O) logging - are limited to high-salinity and nontubing wells, respectively. The RST Reservoir Saturation Tool overcomes these limitations by combining both methods in a tool slim enough to fit through tubing. The RST tool eliminates the need for killing the well and pulling tubing. This saves money, avoids reinvasion of perforated intervals, and allows the well to be observed under operating conditions. Moreover, it provides a log of the borehole oil fraction, or oil holdup, even in horizontal wells.

Understanding the operation and versatility of the RST tool requires an overview of existing saturation measurements and their physics.



The Saturation Blues


In a newly drilled well, openhole resistivity logs are used to determine water and hydrocarbon saturations. But once the hole is cased, saturation monitoring has to rely on tools such as the TDT Dual-Burst Thermal Decay Time tool or, for C/O logging, the GST Induced Gamma Ray Spectrometry Tool, which can "see" through casing.


The Dual-Burst TDT tool looks at the rate of thermal neutron absorption, described by the capture cross section (sigma) of the formation, to infer water saturation. A high absorption rate indicates saline water, which contains chlorine, a very efficient, abundant thermal-neutron absorber. A low absorption rate indicates fresh water or hydrocarbon.


The TDT technique provides good saturation measurements when formation water salinity is high, constant and known. But oil production from an increasing number of reservoirs is now maintained by water injection. This reduces or alters formation water salinity, posing a problem for the TDT tool. 

In low-salinity water (less than 35,000 parts per million), the tool cannot accurately differentiate between oil and water, which have similar neutron capture cross sections.

When the salinity of the formation water is too low or unknown, C/O logging can be used. C/O logging measures gamma rays emitted from inelastic neutron scattering to determine relative concentrations of carbon and oxygen in the formation. A high C/O ratio indicates oil-bearing formations; a low ratio indicates water- or gas-bearing formations.

The major drawback of C/O logging tools has been their large diameters. Producing wells must be killed and production tubing removed to accommodate tools with diameters of nearly 4 in. [10 cm]. In addition, the tools have slow logging speeds and are more sensitive to borehole fluid than formation fluid, which affects the precision of the saturation measurement.

As Easy as RST


The RST tool directly addresses these shortcomings and can perform either C/O or TDT logging. It comes in two diameters - 1 11/16 in. (RST-A) and 2 1/2 in. (RST-B) - and can be combined with other production logging tools. 

Both versions have two gamma ray detectors. In the RST-A tool, both detectors are on the tool axis, separated by neutron and gamma ray shielding. In the RST-B tool, the detectors are offset from the tool axis and shielded to enhance the near detector's borehole sensitivity and the far detector's formation sensitivity. This allows the formation oil saturation and borehole oil holdup to be derived from the same RST-B C/O measurement. 

Locating Bypassed Oil


In early 1992, ARCO drilled and perforated a sidetrack well in an area of Prudhoe Bay undergoing waterflooding. Less than six months later, production was 90% water with less than 200 BOPD, as expected. The original perforations extended from X415 to X440 ft. C/O logging measurements were made in the shut-in well with three different tools - the RST tool and two sondes from other service companies.

The RST results confirmed depletion over the perforated interval (Tracks 2 and 3). Effects of the miscible gas flood sweep are apparent throughout the reservoir. The ratio of total inelastic count rates from the near and far detectors qualitatively indicates the presence of gas in the reservoir. In addition, differences between the openhole fluid analysis and the RST fluid analysis were assumed to be gas.


One potential bypassed zone, A, was identified from X280 to X290 ft. A second zone, B, based on the openhole logs and a C/O log from another service company, was proposed from X220 to X230 ft. The RST log shows zone B to contain more gas and water than zone A.


After assessing the openhole logs and the three C/O logs, ARCO decided to perforate zone B. The initial production was 1000 BOPD with a 75% water cut. Production declined to 200 BOPD with more than 95% water cut in a matter of weeks. The decline prompted ARCO to perforate zone A, commingling production with that from the earlier perforations. Production increased to an average of 600 BOPD and the water cut decreased to 90%. Subsequent production logs confirm that zone A is producing oil and gas and zone B is producing all of the water with some oil.


Modes of Operation


Flexibility is a key advantage of the RST tool. It operates in three modes that can be changed in real time while logging:


  • inelastic-capture mode
  • capture-sigma mode
  • sigma mode

Inelastic-capture mode: The inelastic-capture mode offers C/O measurements for determining saturations when the formation water salinity is unknown, varying or too low for TDT logging. In addition to C/O logging, thermal neutron capture gamma ray spectra are recorded after the neutron burst. Elemental yields from these spectra provide lithology, porosity and apparent water salinity information.

Thursday, March 29, 2018

Seismic Surveillance for Monitoring Reservoir Changes

Time-lapse images and acoustic listening devices are the latest tools of the trade for tracking reservoir fluid movement. These techniques, performed with both active and passive seismic surveys, offer new ways of getting more from the reservoir. For years operators have developed sophisticated models of how fluid moves in reservoirs. However, checking and calibrating these models against field measurements often meets with limited success: sample points may be far apart or data hard to interpret. Now seismic surveillance offers high-quality data that might dramatically improve reservoir management.

Seismic surveillance is a way to "watch" and "listen" to movement of reservoir fluids far from the borehole. Knowing how fluid distribution changes over time is important for more effective management decisions. For example, tracking fluid contacts during production can confirm or invalidate flow models and thereby allow the producer to change recovery schedules. Mapping steamflood fronts during enhanced oil recovery (EOR) can point to bypassed zones that may become targets of remedial operations. Mapping hydraulic fractures reveals the local stress field, which can govern permeability anisotropy - vital input for well placement, stimulation treatment and waste disposal. Seismic monitoring sheds light on each of these problems by taking advantage of some traditional and some not-so-traditional techniques.

The methods for watching and listening to movement of reservoir fluids depend on the rate at which the movement takes place. For changes over months or years, as in the case of a moving gas-oil contact (GOC), the method of choice is time-lapse seismic, sometimes called four-dimensional (4D), differential or repeated seismic. Images from traditional seismic surveys taken before and after production are compared, and the difference is attributed to moved fluids.


For changes taking place over microseconds to minutes - as when fractures are induced or when fluid flows through natural fractures - the technique is to use borehole sensors to localize the cracking noise produced by fluid movement. These sensors record signals from events similar to tiny earthquakes, and much of the data analysis is borrowed from earthquake technology.

As reservoirs are exploited, pore fluid undergoes changes in temperature, pressure and composition. For example, enhanced recovery processes such as steam injection increase temperature. Production of any fluid typically lowers fluid pressure. Gas injection and waterflooding mainly change reservoir composition. These fluid changes affect the bulk density and seismic velocity of reservoir layers. Changes in velocity and density combine to affect the amplitude and travel times of reflected waves. Most surface seismic monitoring is based on amplitude changes rather than travel time. The key is to have amplitude changes big enough to be seen between the baseline survey and subsequent surveys.

The expected change in seismic amplitude can be estimated from laboratory data. Most of the amplitude change comes from fluid effects on rock velocity rather than on rock density. Laboratory experiments on fluid-filled rock show that the greatest changes in velocity arise from two different phenomena - introduction of gas into a liquid-filled rock or an increase in temperature of a hydrocarbon-filled rock. Both cause a decrease in seismic velocity. Even a small amount of gas decreases velocity dramatically by making the fluid compressible. At higher temperatures, hydrocarbons become less viscous, reducing overall rigidity. Both effects are prominent at low pressures, such as in shallow, unconsolidated sands.
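
A back-of-the-envelope sketch of why gas shows up so strongly: the normal-incidence reflection coefficient R = (Z2 - Z1)/(Z2 + Z1) changes when fluid lowers the sand's velocity. All densities and velocities below are assumed, loosely typical of a shallow unconsolidated sand.

    # Impedance Z = density (kg/m3) * velocity (m/s); values are assumed.
    def refl(z1, z2):
        return (z2 - z1) / (z2 + z1)

    z_shale = 2400.0 * 2800.0        # overlying shale
    z_oil_sand = 2200.0 * 2500.0     # oil-filled sand
    z_gas_sand = 2100.0 * 2100.0     # same sand with a little gas

    print(f"reflection off oil sand: {refl(z_shale, z_oil_sand):+.3f}")
    print(f"reflection off gas sand: {refl(z_shale, z_gas_sand):+.3f}")
    # The difference between surveys is what time-lapse maps as fluid movement.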

Monitoring EOR processes such as steamflooding and in-situ combustion using 4D seismic was first performed in the 1980s. The progression of temperature and gas fronts away from injection wells was mapped using seismic amplitude difference sections - sections created by subtracting the baseline survey image from the monitor survey image. Cores taken after treatment provided independent verification of the extent of flooding - the steam had caused additional hardening and cementation.

These studies established the basic technique used today. The baseline and subsequent surveys must be acquired and processed in the same way. However, repeating a survey with the same sources, receivers and positioning is nearly impossible. Additional processing, therefore, is used to correct for these and other undesirable differences between the two surveys caused by weather, access or surface structures. To interpret amplitude differences in terms of fluid movements requires a calibration, obtained by matching reflection amplitudes outside the reservoir. Interpreting amplitude changes in terms of reservoir fluid changes is then a visual, qualitative step.

A more quantitative way of using 4D seismic to monitor fluid movement is to include synthetic seismic sections and reservoir fluid flow simulations in the calibration and interpretation. Norsk Hydro is using this technique to track a gas-oil contact during gas injection. Two 3D surveys were acquired 16 months apart in the Oseberg field, Norwegian North Sea. Special care was taken to preserve true amplitudes during processing. The base survey served several purposes: it provided structure for the initial reservoir model, input for further drilling and development decisions, and acoustic properties of lithologies and fluids.

Injecting gas into the Oseberg field pushes the GOC deeper and farther from the injection well. This displacement agrees with that shown on surface seismic and with the simulated reservoir fluid flow for the period between the seismic surveys.

Borehole seismic images from the field support the interpretation. Time-lapse vertical seismic profiles (VSPs) from the same well also show downward movement of the GOC.
The sequence of seismic data interpretation, modeling and comparison with reservoir fluid flow models forms a loop, taking advantage of feedback at many stages. Using these steps, Norsk Hydro geophysicists have been able to validate the existing reservoir model and develop a methodology for future monitoring.



A monitoring technique that is just emerging is comparison of amplitude variation with offset (AVO) in repeated surveys. AVO analysis has been shown to be a powerful fluid discriminator. Scientists at Elf Aquitaine have made some progress in testing this technique for monitoring waterfront movement in the Frigg field, Norway. The Frigg has producing wells at the crest of the structure, but little other than seismic data to describe the rest of the reservoir. Accurate estimates of gas reserves are crucial so that delivery contracts can be honored.

Real-Time Monitoring

Real-time monitoring is a different kind of seismic surveillance that goes by three names: microseismic monitoring, acoustic emission monitoring, or passive seismics. Pressure jumps caused by fluid injection, depletion or temperature change are rapidly transmitted to surrounding rocks, causing stress changes. Stresses are released by movement at fractures or zones of weakness through microseismic events - tiny earthquakes. Typically, microseismic events have very small magnitudes, from -6 to -2 on the Richter scale, a logarithmic measure of energy released in a seismic event. Most microseismic events have a million times less energy than the smallest earthquake that can be felt by a human at the earth's surface, which is magnitude +2 to +3. However, some reservoirs have a history of felt seismicity induced by human intervention.
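
The "million times less energy" figure can be checked with the standard Gutenberg-Richter energy-magnitude relation, log10 E = 1.5M + 4.8 (E in joules) - a textbook relation, not one quoted here.

    # Each magnitude unit is a factor of 10**1.5 (~32) in energy.
    def energy_joules(magnitude):
        return 10.0 ** (1.5 * magnitude + 4.8)

    felt = energy_joules(2.0)        # smallest felt earthquake, ~M +2
    micro = energy_joules(-2.0)      # strong microseismic event, ~M -2
    print(f"energy ratio: {felt / micro:.0e}")     # 1e+06 - a million times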

The transient nature of pressure changes requires a different approach to monitoring. Sensors are deployed in boreholes, adapting technology first developed to monitor earthquakes and later applied to fractures created by hydraulic stimulation of hot, dry crystalline rocks for extraction of geothermal energy. In the case of hydraulic fracturing for wellbore stimulation and waste injection, passive seismics can provide information about the orientation and now the height and length of induced fractures, depicting fracture containment. With orientation information, well locations can be optimized to take advantage of permeability anisotropy associated with fractures, and waste containment can be assured for regulatory purposes.

How does microseismic monitoring work? Although the mechanism of hydraulic fracturing is usually understood to be tensile failure, many microseismic experts believe that recorded events are caused by shear failure along fractures. Energy radiates as a compressional (P) wave traveling at the P-wave speed, followed by a shear (S) wave traveling at the slower S-wave speed. A conveniently located triaxial borehole receiver records signals that may be analyzed to locate the source of the emissions.

Location is determined by distance and direction from the receiver. Distance to the event can be obtained by knowing the velocities of P waves and S waves and the lag between their arrivals. Direction is known from the polarization, or particle motion, of the P wave, which is along the path connecting the event and the receiver.
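
A minimal sketch of the distance part of that calculation; the velocities and S-P lag are assumed example values.

    # Distance from one receiver using the S-P arrival lag:
    # d = t_lag / (1/Vs - 1/Vp).  Direction comes from P-wave polarization.
    vp = 4000.0      # P-wave velocity, m/s (assumed)
    vs = 2300.0      # S-wave velocity, m/s (assumed)
    t_lag = 0.046    # S arrival minus P arrival, s (assumed)

    distance = t_lag / (1.0 / vs - 1.0 / vp)
    print(f"event is ~{distance:.0f} m from the receiver")   # ~249 m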

Most microseismic monitoring experiments are conducted in dedicated observation wells near an injection well and away from noisy pumps.

Monday, March 26, 2018

The MDT Tool

In the MDT tool, unwanted fluid is expelled from the tool using the pumpout module. During sampling, the engineer can monitor the resistivity and temperature of fluid in the flowline while pumping it directly into the borehole or into a dump chamber. When fluid quality is judged to be representative of the reservoir, the pump is stopped and pure formation fluid can be diverted to the sample chamber or, if a sample is not required - often the case when formation water or gas is indicated - another zone can be tested.

To prevent gas from coming out of solution during sampling, pressure is maintained above the bubblepoint using throttle valves in the sample chamber controlled by surface software. Maintaining pressure above the bubblepoint reduces drawdown, which helps prevent crumbling in soft formations. Excessive drawdown can result in seal failure and hence mud contamination of the sample. Drawdown can also be limited by using a water cushion and choke with the multisample module.

Formations in which seal failures are likely - highly laminated or otherwise heterogeneous formations - or formations that have low permeability can be tested and sampled using the dual-packer module. Instead of a probe and packer to provide a seal, two inflatable packers are used to isolate an interval of about 3 ft of formation, forming a mini drillstem test. The pumpout module is used to inflate the packers and also to expel mud from between the packers before sampling.

Friday, March 23, 2018

Downhole Optical Analysis of Formation Fluids

In the past, wireline formation samplers have not been able to see the fluid they were sampling. Downhole optical analysis of fluid before sampling removes the blindfold to reveal oil, water or gas. The sample chamber needs to be opened only when the desired fluid is present. 

Bringing formation fluid samples to the surface for examination was a novel wireline advance when it was introduced in the early 1950s. Run in open hole or cased hole, the Formation Tester (FT) took a sample of formation fluid where analysis of earlier runs of resistivity and porosity logs showed promising zones. The FT consisted of a sealing packer and probe system that could be set against the formation. Once the probe was set and opened, formation fluid drained into a sample chamber. The entire sampling operation, from set to retract, was monitored using a pressure gauge. The sample chamber was closed only when pressure stopped increasing - implying the chamber was full and at formation pressure.

The FT's probe and packer could be set only once per trip in the hole. This created a couple of problems. If the formation had low permeability, the sample chamber could take hours to fill, delaying rig operations and increasing the risk of the tool becoming stuck. Sampling in low-permeability formations was therefore often aborted. But sampling also had to be aborted if the seal between packer and borehole wall failed, indicated by a sudden increase in sampling pressure to hydrostatic. The only remedy was to pull out of the hole, redress the tool and try again. The next generation of testers addressed these difficulties.




The RFT Repeat Formation Tester tool, introduced in the second half of the 1970s, allowed an unlimited number of settings, or pretests, before sampling was attempted. Pretest chambers were used to indicate permeability and to check for seal failures. During a pretest, two small-volume chambers opened, producing pressure drawdowns. Knowing the amount of drawdown for each chamber gave two estimates of permeability. Once the pretest chambers were filled, formation permeability could also be calculated from the subsequent buildup to formation pressure. A sudden increase to hydrostatic pressure during a pretest showed seal failure. Testing the formation first allowed sampling to be carried out in zones where seal failures did not occur and where permeabilities were high enough to allow one of the two sample chambers to be filled in a reasonable amount of time.
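
A sketch of the drawdown permeability estimate; the spherical-flow form k = C * q * mu / dp is standard for probe-type testers, but the probe constant and the readings below are assumptions for illustration only.

    # k in millidarcies, with q in cm3/s, mu in cp, dp in psi.
    C = 5660.0       # probe constant (assumed)

    def drawdown_perm_md(q_cc_s, mu_cp, dp_psi):
        return C * q_cc_s * mu_cp / dp_psi

    # The two pretest chambers give two independent estimates:
    print(f"chamber 1: {drawdown_perm_md(0.17, 0.5, 310.0):.2f} md")
    print(f"chamber 2: {drawdown_perm_md(0.43, 0.5, 680.0):.2f} md")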

However, RFT samples suffered important limitations: the sample too often contained a large percentage of mud filtrate, and the flowing pressure sometimes dropped below the bubblepoint, changing the sample characteristics. Even when the sample was formation fluid, it could have been water or gas and of no interest to the oil company. The latest generation formation tester, the MDT Modular Formation Dynamics Tester tool, overcomes these problems.

Thursday, March 22, 2018

Exploration Technology in an Era of Change



"We've got to be careful how we define economic. Advances in technology make what is uneconomic today economic tommorow. Take deep water development. Five years ago we would have said a water depth of 4000 ft [1220 m] in the Gulf of Mexico is a no-no. Now, no problem. So when we talk about economic elephants, we are often talking about waiting for development technology to catch up to make those elephant economic. And the technology is catching up rapidly."


 "I'd like to offer a dissenting opinion and address the onshore prospects that a smaller company can deal with. I believe that if there are large fields left in the US, they are probably low-resistivity pay and stratigraphic traps that are virtually invisible to conventional technology. A lot of bright people have looked for oil and gas in the US, using mostly 1975-vintage technology. Very few explorationist have been equipped with a scanning electron microscope,modern seismic surveys, and expert petrography and log analysis. Very few know how to apply hydrodynamics or surface geochemistry. This is one of the great opportunities still left for "value-added production" - perhaps not on the scale of finds in Indonesia or Africa, but important nonetheless." 

" We think of multidisciplinary integration as a core competency - we are only as good as our ability to develop options based on our multidisciplinary evaluation of data. Accessing technology, with a capital "T", we do mainly by looking to the outside world. 
We think about exploration Technology in broad terms. We line up our technology under three banners : (1) techniques that reduce finding and development costs, (2) those that shorten the time between discovery and production, and (3) those that improve fluid recovery. For us, 3D seismic plays a role in all three categories and increasingly is routinely integrated with other data. "

" I think the major change for us was the power of integrating geochemical with geological, reservoir geophysical and other kinds of data on the workstation and the linkage of many workstations and data bases. The interpreter or interpreting team has access to a variety of information and modeling software, including balancing geologic sections, basin modeling. This approach requires more teamwork and further integration of staff specialist in the exploration and development process."

"There is another technical challenge that I alluded to earlier: finding all those now-invisible stratigraphic traps. Sequence stratigraphy is a key to some problems. For gas wells, an understanding of overpressure is essential. "

"We bypassed several reservoirs in Nigeria and the US Gulf Coast because they were not recognized on the logs. We discovered them by doing reservoir geochemistry on cores. The methods don't work so well on cuttings, so we are trying to collect more sidewall cores in problematic areas. The geochemistry itself is cheap- about $150 per sample."

Wednesday, March 21, 2018

Paleomagnetics for Logging

Nobody knows exactly why the earth's magnetic field switches polarity, but the fact that it does - and for variable lengths of time - underpins a new logging method enabling well-to-well correlations and the potential for absolute age determination in basin cores. When rocks are formed, those that are magnetically susceptible record the direction and magnitude of the earth's field at the time. Measurement of this remanent magnetism, the primary objective of paleomagnetic research, has now been applied to the borehole.



The earth's magnetic field is believed to be generated by some form of self-exciting dynamo. This happens within the earth's iron-rich liquid outer core as it spins on its axis. Fluid motion in the liquid part of the core, and activity in the central solid part, give rise to local perturbations of the earth's field and also lead to variations in pole positions lasting from 1 year to 10^5 years. Even more intriguing is that complete reversals of the magnetic field occur - north pole becomes south pole and vice versa. These geomagnetic polarity reversals take about 5000 years to complete, and each polarity interval lasts from 10^4 to 10^8 years. Nobody knows the cause, but the reversal process does not involve the magnetic pole simply wandering from north to south. The magnetic field strength appears to fade close to zero and then gradually increase in the opposite direction until a complete reversal is achieved.

How do we know all this? Records of the earth's magnetic field strength and direction date back only a few hundred years and do not show reversals. The first proof of a geomagnetic reversal was provided in 1906 by a French physicist, Bernard Brunhes, when he discovered volcanic rocks at Pontfarein in the French Massif Central that were magnetized almost exactly opposite to the present-day geomagnetic field. This led to the belief that rocks could retain magnetization from previous magnetic fields, a phenomenon called natural remanent magnetism (NRM).

If no remagnetization occurs - remagnetization is a possibility if rocks are reheated, exposed to later magnetic fields or chemically altered - NRM is an imprint of the geomagnetic field existing at the freezing time of lavas or at the deposition time of sedimentary rocks. And, unlike most geological events, the direction of the NRM imprint is the same worldwide. Traditional dating techniques, such as isotopic or biologic methods, and accurate NRM measurements allow comparison of geochronological time scales with polarity reversals.

Because of the random time distribution of polarity reversals, a sequence of four or five is unique, almost like a bar code. A borehole reading of this magnetic reversal sequence (MRS) promises a direct correlation with the geomagnetic polarity time scale (GPTS). Because the MRS is measured against depth and the GPTS against time, correlation between them implies a sedimentation rate.

During the formation of a basin, the sedimentation rate varies, but the variation is not random and it is strictly independent of changes in magnetic polarity. This means that the sedimentation rate must not exceed a limit compatible with the lithology and must not change drastically at each reversal. Hence, the rate can not only be determined, but can also be used to check the quality of a match between one MRS and another, or between an MRS and the GPTS.
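
A minimal sketch of that check: match reversal depths from the log to GPTS ages and compute the interval sedimentation rates. The depths are invented; the ages are well-known chron boundaries, used here only as illustration.

    # Sedimentation rate per interval = depth difference / age difference.
    reversal_depth_m = [1510.0, 1675.0, 1890.0, 2040.0]   # from the log (invented)
    reversal_age_ma = [0.78, 2.58, 3.60, 5.23]            # matched GPTS ages, Ma

    for i in range(len(reversal_depth_m) - 1):
        dz = reversal_depth_m[i + 1] - reversal_depth_m[i]
        dt = reversal_age_ma[i + 1] - reversal_age_ma[i]
        print(f"interval {i + 1}: {dz / dt:6.1f} m per million years")
    # A rate that jumps wildly between intervals flags a bad match or an
    # unconformity.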

Correlations that indicate a fluctuating sedimentation rate may either be incorrect or may indicate unconformities where part of the geological record in the MRS is missing.

Magnetic reversal sequences can also be used to provide well-to-well correlation. In a hypothetical example representing a series of coastal onlap sequences, the main limits of the sedimentary bodies have been determined. Accurate time correlations are now possible and clearly show zones where sedimentation is continuous and those where unconformities occur. Combining both sets of data provides a complete sedimentary description of the basin. The relationship between sedimentary bodies is shown to be more complex than originally assumed.

Basics of Paleomagnetism
Natural remanent magnetism is mostly carried by ferromagnetic minerals, such as iron oxides (hematite, magnetite, goethite) and iron sulfides (pyrrhotite, but not pyrite), that have high magnetic susceptibility, meaning they are easily magnetized in the presence of a magnetic field. Unlike paramagnetic minerals such as clays, which have a small positive susceptibility, or diamagnetic minerals such as limestone or sandstone, which have a slightly negative susceptibility, ferromagnetic minerals retain some magnetism after the magnetic field is removed.

Wednesday, March 14, 2018

Measurements at the Bit: MWD Tools

Measurements-while-drilling technology has moved down the drillstring to enlist the bit itself as a sensor. 

Conventional drilling of high-angle and horizontal wells is like piloting an airplane from the tail rather than the cockpit. Information required to land the well in the target formation is derived from sensors 50 ft or more behind the bit, or at the surface. Because these measurements - about well trajectory, drilling efficiency and formation properties - are remote from the bit, crucial drilling decisions are delayed and data may require more complex interpretation. In particular, course corrections are delayed by the lag in measurements needed to make steering decisions, resulting in less drainhole in the pay zone. Also, maximum drilling efficiency requires information about mechanical power delivered to the bit, which is inferred from surface measurements, degrading its accuracy. And resistivity measurements from logging-while-drilling (LWD) sensors in drill collars are limited to formation resistivities less than 200 ohm-m.

Despite these limitations, horizontal and high-angle drilling have proved successful, especially in simple geologic settings - uncomplicated layer-cake structure. Nearly all these wells start vertically, with a conventional rotary bottomhole assembly (BHA). The drillstring and bit are rotated from the surface either by a rotary table on the derrick floor or a motor in the traveling block, called a topdrive. Drilling this way is called rotary mode. To kick off from vertical, the rotary assembly is replaced with a steerable motor - usually a positive displacement motor, driven by mud flow, in a housing bent 1° to 3°. When mud is flowing, the motor rotates the bit, but not the drillstring. This type of drilling is called sliding mode, because the drillstring slides along after the bit, which advances in the direction of the housing below the bend.



The direction in which the bit is pointing, called toolface, is measured and sent to surface by measurement-while-drilling (MWD) equipment for real-time control of bit orientation. Measurements include azimuth, which is the compass bearing of the bit, and inclination, which is the angle of the bit with respect to vertical. Large changes in direction are made by lifting off bottom and reorienting the bent sub by rotating from surface. Small changes are made by varying weight on bit, which changes the reactive torque of the motor and hence toolface orientation.



Once sufficient inclination has been built, straight or tangent sections can be drilled in several ways. One is with a conventional rotary, or "locked", assembly, which is rigid enough to allow fast, straight drilling. Small adjustments in inclination can be made by varying weight on bit or rotary speed. Most horizontal sections, and some tangent sections, are drilled with a steerable motor while rotating the drillstring from surface. In this mode, the steerable motor behaves like a rotary BHA, maintaining both azimuth and inclination.

However, the presence of the steerable motor allows the driller to make course corrections without tripping the drillstring out of the hole. 

Generally, the driller tries to make as much hole as possible using a rotary assembly or a steerable motor in rotary mode. Rotation of the drillstring reduces the risk of getting stuck and allows faster drilling than in sliding mode.



Overcoming Limitations in Horizontal Drilling

Today, the ability to drill horizontally is undisputed. Yet, the efficiency of drilling and steering horizontally is limited by the distance between the bit and measurements. In drilling, for example, one way to define efficiency is the ratio of time spent making hole to the total rig time, including operations such as trips or hole conditioning. In the horizontal section, steering efficiency can be defined as the ratio of the length of the horizontal section in the pay zone to the total length of the horizontal section. How does lag between measurements and the bit limit these efficiencies?


In drilling with a downhole motor in rotary mode, a key limitation on efficiency is how much weight the driller can safely apply to the steerable motor. As the driller increases weight, the motor produces more torque, and power is torque times RPM. The more power, the faster the rate of penetration - up to a point. Excess weight may stall and eventually damage the motor, requiring an expensive trip for motor replacement. The goal is to apply as much power as possible, but within the operational limit of the motor. Power is estimated conventionally from surface measurements of mud flow and mud pressure. Motor RPM is roughly proportional to mud flow. Torque is roughly proportional to the increase in mud pressure when the bit is on bottom, compared to off bottom.
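
A sketch of that surface estimate; the proportionality constants are invented stand-ins for the motor's specification curves.

    import math

    K_RPM = 0.25         # rev per liter of mud flow (assumed)
    K_TORQUE = 15.0      # N.m per psi of differential pressure (assumed)

    def motor_power_kw(flow_l_min, dp_psi):
        rpm = K_RPM * flow_l_min        # motor speed from mud flow
        torque = K_TORQUE * dp_psi      # torque from on-bottom pressure rise
        return 2.0 * math.pi * torque * rpm / 60.0 / 1000.0

    print(f"{motor_power_kw(2000.0, 400.0):.0f} kW at 2000 L/min, 400 psi differential")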

Perhaps the greatest limitation in conventional horizontal drilling is in steering efficiency. Wells are conventionally steered "geometrically" - along a path that has been predetermined based on nearby well data and geologic assumptions. Steering is based only on bit direction and inclination data. Gamma ray and resistivity measurements, if present, are made far from the bit and used only retrospectively. This technique is fine as long as the target is thick, structurally simple and well known. But it is less effective when the target is thin, complex or insufficiently known for planning the well trajectory. And increasingly, with advances in three-dimensional seismics, operators are locating more intricate reservoirs and drilling more complex wells. Challenges today include thin beds and complexly folded or faulted reservoirs.

In these settings, sensors in drill collars allow replacement of basic geometric steering with more efficient geologic steering, or "geosteering" - navigation of the bit using real-time information about rock and fluid properties. A North Sea example shows how LWD sensors performed the dual purpose of geosteering and formation evaluation. Using mostly resistivity measurements, the driller geosteered a drainhole along the top of the oil/water contact to avoid gas production. Resistivity modeling from offset wells showed this contact should have a resistivity of about 0.6 ohm-m. When the value dropped, indicating water, the well path was turned up slightly; when resistivity increased, the well path was dropped slightly.
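
The steering logic amounts to a threshold test around the modeled contact. In this sketch the 0.6 ohm-m value comes from the text; the tolerance band and function are invented for illustration.

    CONTACT_OHM_M = 0.6      # modeled oil/water contact resistivity (from text)

    def steering_decision(resistivity_ohm_m, band=0.1):
        if resistivity_ohm_m < CONTACT_OHM_M - band:
            return "turn up slightly (reading water)"
        if resistivity_ohm_m > CONTACT_OHM_M + band:
            return "drop slightly (moving up into oil)"
        return "hold course"

    for r in (0.4, 0.62, 0.9):
        print(f"{r:4.2f} ohm-m -> {steering_decision(r)}")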

In addition to reduced efficiency in drilling and geosteering, a third limitation of conventional horizontal drilling is in formation evaluation while drilling. Logging-while-drilling sensors reach the formation long before wireline measurements, and so generally view it before wellbore degradation, but some invasion has still occurred. Rapid invasion, called spurt, may mask true resistivity in some formations. Also, LWD resistivity measurements by the CDR Compensated Dual Resistivity tool are limited to environments favoring induction-type settings - resistive mud (fresh or oil-base mud) and conductive rock.

The solution to these problems - limited efficiency in drilling and geosteering, and limited capabilities of real-time formation evaluation - is relocation of drilling and logging measurements to the bit itself. The system includes two new logging devices: the Geosteering tool, an instrumented steerable downhole motor, and the RAB Resistivity-At-the-Bit tool, an instrumented stabilizer. Measurements include gamma ray, several types of resistivity including a measurement at the bit, and drilling data such as inclination, bit shocks and motor RPM.

The technical leap that allows measurements to be made at the bit and below the steerable motor is a wireless telemetry system. This telemetry link sends data from sensors near the bit to the MWD tool up to 200 ft behind the bit, a path that bypasses the intervening drilling tools, such as the steerable motor. The PowerPulse MWD system encodes and then sends data to surface in real time using mud-pulse telemetry at up to 10 bits per second. At surface, data recording, interpretation and tool control are performed by the Wellsite Information System. Control data can be sent from the surface back downhole by varying mud pump flow.

The Geosteering tool enables the driller and geologist to make real-time corrections at the bit, detect hydrocarbons at the bit and steer the borehole for increased reservoir exposure. Both tools measure gamma ray, resistivity using the bit as an electrode, and "azimuthal" resistivity - focused at a narrow angle along the borehole wall.

Resistivity at the bit is measured by attaching the Geosteering or RAB tool directly to the bit and driving an alternating electric current down the collar, out through the bit and into the formation. The current returns to the drillpipe and drill collars above the transmitter. In water-base mud, returning current is conducted from the bit through the mud, into the formation and back to the BHA. In oil-base mud, which is an insulator, current returns through the inevitable but intermittent contact of the collars and stabilizers with the borehole wall, leading to a qualitative indication of resistivity. Formation resistivity is obtained by measuring the amount of current flowing into the formation from the bit and normalizing it to the transmitter voltage.
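
In essence the measurement reduces to Ohm's law with a geometric factor; the tool constant and readings here are assumed for illustration.

    K_TOOL = 2.5     # geometric factor, 1/m (assumed)

    def apparent_resistivity(v_volts, i_amps):
        # Normalize the current driven into the formation by the
        # transmitter voltage, scaled by the tool's geometric factor.
        return K_TOOL * v_volts / i_amps     # ohm-m

    print(f"{apparent_resistivity(4.0, 1.6):.2f} ohm-m")    # 6.25 ohm-m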

Azimuthal resistivity is measured from one or more button electrodes and, like the azimuthal gamma ray measurement, can be used to steer the bit. Both tools can be oriented in multiple directions to find the location of a lithologic or pore-fluid boundary relative to the borehole - up, down, left or right - and thereby steer the bit.

Surface Control for Measurements at the Bit

Because the Geosteering tool is an instrumented steerable motor, it enables the driller to steer the bit on a geometric or geologic path through the pay zone. The driller's window into the bit is the Wellsite Information System, which includes a display for checking and revising the structural and stratigraphic model, and updating the drilling trajectory. This screen is intended mainly for real-time management of horizontal drilling.

Resolution of both the Geosteering tool and RAB resistivity measurements is sufficient for hydrocarbon detection and lithologic correlation. The multiple depths of investigation and high resolution of the focused RAB measurements also provide formation evaluation-quality information. Applications include prompt location of coring and casing points, and monitoring of invasion by logging after drilling.