Wednesday, April 25, 2018

Corrosion in the Oil Industry

Most metals exist in nature as stable ores of oxides, carbonates or sulfides. Refining them, to make them useful, requires energy. Corrosion is simply nature's way of reversing an unnatural process back to a lower energy state. Preventing corrosion is vital at every step in the production of oil and gas. Corrosion costs US industries alone an estimated $170 billion a year. The oil industry, with its complex and demanding production techniques, and the environmental threat should components fail, takes an above-average share of these costs.



Corrosion - the deterioration of a metal or its properties - attacks every component at every stage in the life of every oil and gas field. From casing strings to production platforms, from drilling through to abandonment, corrosion is an adversary worthy of all the high technology and research we can throw at it.

Oxygen, which plays such an important role in corrosion, is not normally present in producing formations. It is only at the drilling stage that oxygen-contaminated fluids are first introduced. Drilling muds, left untreated, will corrode not only well casing, but also drilling equipment, pipelines and mud-handling equipment. Water and carbon dioxide - produced or injected for secondary recovery - can cause severe corrosion of completion strings. Acid - used to reduce formation damage around the well or to remove scale - readily attacks metal. Completions and surface pipelines can be eroded away by high production velocities or blasted by formation sand. Hydrogen sulfide [H2S] poses other problems. Handling all these corrosion situations, with the added complications of the high temperatures, pressures and stresses involved in drilling or production, requires the expertise of a corrosion engineer, an increasingly key figure in the industry.

Because it is almost impossible to prevent corrosion, it is becoming more apparent that controlling the corrosion rate may be the most economical solution. Corrosion engineers are therefore increasingly involved in estimating the cost of their corrosion-prevention measures and the useful life of equipment.

Production wells were completed using 7-in. L-80 grade carbon steel tubing - an H2S-resistant steel - allowing flow rates in excess of 50 MMscf/D at over 150°C. High flow rates, H2S and carbon dioxide all contributed to the corrosion of the tubing.

Mud corrosion - drilling mud also plays a key role in corrosion prevention. In addition to its well-known functions, mud must also remain noncorrosive. Greater awareness of corrosion problems came with the lower pH of polymer muds: low pH means more acidic and hence more corrosive. Oil-base muds are usually noncorrosive and, before the introduction of polymer muds, water-base muds were used with relatively high pH of 10 or greater. So when polymer muds were introduced, corrosion from mud became more apparent.
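
As a quick illustration of why pH matters: the scale is logarithmic, so each unit drop in pH is a tenfold rise in hydrogen ion concentration. A minimal sketch (the pH values are typical of the mud types mentioned, not from any particular program):

    # pH is the negative base-10 logarithm of hydrogen ion concentration,
    # so [H+] = 10**-pH (mol/L).
    def hydrogen_ion_concentration(ph):
        return 10.0 ** -ph

    # A traditional high-pH water-base mud versus a lower-pH polymer mud
    # (illustrative values):
    for label, ph in [("water-base mud, pH 10", 10.0), ("polymer mud, pH 8", 8.0)]:
        print(f"{label}: [H+] = {hydrogen_ion_concentration(ph):.0e} mol/L")
    # pH 10 -> 1e-10 mol/L; pH 8 -> 1e-08 mol/L: one hundred times more
    # acidic, hence the greater corrosivity noticed with polymer muds.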

Completion design also plays an important role in preventing internal corrosion. Reducing sand production by gravel packing prevents sand blasting that causes erosion corrosion.

Stimulation programs may, inadvertently, promote internal corrosion. Depending on lithology, highly corrosive hydrochloric acid (HCl), with additions of hydrofluoric (HF) acid, is used to improve near-wellbore permeability. These acids can also remove scale buildup on the inside of casing and tubing, allowing direct attack on bare metal.

Wednesday, April 11, 2018

3D Seismic Survey Design

There's more to designing a seismic survey than just choosing sources and receivers and shooting away. To get the best signal at the lowest cost, geophysicists are tapping an arsenal of technology, from integration of borehole data to survey simulation in 3D.

The ideal 3D survey serves multiple purposes. Initially, the data may be used to enhance a structural interpretation based on two-dimensional (2D) data, yielding new drilling locations. Later in the life of a field, seismic data may be revisited to answer questions about fine-scale reservoir architecture or fluid contacts, or may be compared with a later monitor survey to infer fluid-front movement. All these stages of interpretation rely on satisfactory processing, which in turn relies on adequate seismic signal to process. The greatest processing in the world cannot fix flawed signal acquisition.


Elements of a Good Signal

What makes a good seismic signal? Processing specialists list three vital requirements: good signal-to-noise ratio (S/N), high resolving power and adequate spatial coverage of the target. These basic elements form the foundation of survey design.

High S/N means the seismic trace has high amplitudes at times that correspond to reflections, and little or no amplitude at other times. During acquisition, high S/N is achieved by maximizing signal with a seismic source of sufficient power and directivity, and by minimizing noise. Noise can either be generated by the source - shot-generated or coherent noise, sometimes orders of magnitude stronger than deep seismic reflections - or be random. Limitations in the dynamic range of acquisition equipment require that shot-generated noise be minimized with proper source and receiver geometry. Proper geometry avoids spatial aliasing of the signal, attenuates noise and obtains signals that can benefit from subsequent processing. Aliasing is the ambiguity that arises when a signal is sampled less than twice per cycle. Noise and signal cannot be distinguished when their sampling is aliased.
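
The twice-per-cycle requirement is the Nyquist criterion. A minimal numerical sketch of how an undersampled frequency masquerades as a lower one (the frequencies and sample intervals are illustrative):

    # A signal sampled at interval dt can only represent frequencies up to
    # the Nyquist frequency 0.5/dt; anything higher folds back into the band.
    def apparent_frequency(f, dt):
        nyquist = 0.5 / dt
        folded = f % (2.0 * nyquist)
        return folded if folded <= nyquist else 2.0 * nyquist - folded

    print(apparent_frequency(60.0, 0.004))   # 4-ms sampling: 60 Hz recorded as 60 Hz
    print(apparent_frequency(60.0, 0.0125))  # 12.5-ms sampling: 60 Hz aliases to 20 Hz
    # Once aliased, the 60-Hz energy is indistinguishable from a genuine 20-Hz event.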




A common type of coherent noise that can be aliased comes from low-frequency waves trapped near the surface, called surface waves. On land, these are known as ground roll, and create major problems for processors. They pass the receivers at a much slower velocity than the signal, and so need closer receiver spacing to be properly sampled. Planners always try to design surveys so that surface waves do not contaminate the signal. But if this is not possible, the surface waves must be adequately sampled spatially so they can be removed.
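
The sampling requirement translates directly into receiver spacing. A sketch with representative numbers (the velocities and frequency are illustrative):

    # Unaliased sampling needs at least two samples per apparent wavelength:
    # dx_max = v / (2 * f_max).
    f_max = 60.0    # Hz, highest frequency to preserve
    for name, v in [("ground roll", 400.0), ("reflection signal", 2500.0)]:  # m/s
        dx_max = v / (2.0 * f_max)
        print(f"{name}: receiver spacing must be <= {dx_max:.1f} m")
    # ground roll: <= 3.3 m; reflection signal: <= 20.8 m. The slow surface
    # wave, not the signal, dictates the finer spacing when it must be
    # recorded unaliased for later removal.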

During processing, S/N is enhanced through filters that suppress noise. Coherent noise is reduced by removing temporal and spatial frequencies different from those of the desired signal, if known. Both coherent and random noise are suppressed by stacking - summing traces from a set of source-receiver pairs associated with reflections at a common midpoint, or CMP. The source-receiver spacing is called offset. To be stacked, every CMP set needs a wide and evenly sampled range of offsets to define the reflection travel time curve, known as the normal moveout curve. Flattening that curve, called normal moveout correction, makes reflections from different offsets arrive at the time of the zero-offset reflection. They are then summed to produce a stack trace.
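
For a flat reflector and constant velocity, the moveout curve is the hyperbola t(x) = sqrt(t0^2 + x^2/v^2). A minimal sketch of the correction (all values illustrative):

    import math

    t0 = 1.000     # s, two-way time of the zero-offset reflection
    v = 2000.0     # m/s, stacking velocity
    for x in [0.0, 500.0, 1000.0, 1500.0, 2000.0]:   # m, offsets in one CMP set
        t_x = math.sqrt(t0 ** 2 + (x / v) ** 2)      # reflection time at offset x
        print(f"offset {x:6.0f} m: t = {t_x:.3f} s, NMO shift = {(t_x - t0) * 1000:5.1f} ms")
    # Shifting each trace up by its NMO time flattens the hyperbola so the
    # reflection aligns at t0 on every offset, ready to be summed into a stack trace.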



A 3D CMP trace is formed by stacking traces from source-receiver pairs whose midpoints share a more or less common position in a rectangular horizontal area defined during planning, called a bin. 

The number of traces stacked is called fold - in 24-fold data every stack trace represents the average of 24 traces.
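
Fold is simply a count of midpoints per bin. A toy 2D-line sketch (the geometry values are made up for illustration):

    from collections import Counter

    shots = range(0, 1000, 50)        # shot every 50 m along the line
    receivers = range(0, 1200, 25)    # receiver every 25 m
    bin_length = 12.5                 # m, half the receiver interval

    fold = Counter()
    for s in shots:
        for r in receivers:
            midpoint = 0.5 * (s + r)              # CMP position for this pair
            fold[int(midpoint // bin_length)] += 1

    print(max(fold.values()))   # full fold reached in the middle of the line
    print(fold[0])              # far lower fold in the first bin at the edge -
                                # the fold taper discussed next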



Many survey designers use rules of thumb and previous experience from 2D data to choose an optimal fold for certain targets or certain conditions. A fringe - called the fold taper or halo - around the edge of the survey will have partial fold, thus lower S/N, because several of the first and last shots do not reach as many receivers as in the central part of the survey. Getting full fold over the whole target means expanding the survey area beyond the dimensions of the target, sometimes by 100% or more.

Many experts believe that 3D surveys do not require the level of fold of 2D surveys. This is because 3D processing correctly positions energy coming from outside the plane containing the source and receiver, which in the 2D case would be noise. The density of data in a 3D survey also permits the use of noise reduction processing, which performs better on 3D data than on 2D.




Filtering and stacking go a long way toward reducing noise, but one kind of noise that often remains is caused by multiple reflections, "multiples" for short. Multiples are particularly problematic where there is a high contrast in seismic properties near the surface. Typical multiples are reverberations within a low-velocity zone, such as between the sea surface and sea bottom, or between the earth's surface and the bottom of a layer of unconsolidated rock. Multiples can appear as later arrivals on a seismic section, and are easy to confuse with deep reflections. And because they can have the same characteristics as the desired signal - same frequency content and similar velocities - they are often difficult to suppress through filtering and stacking. Sometimes they can be removed through other processing techniques, called demultiple processing.
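
Water-layer reverberations arrive at predictable intervals: each extra bounce adds the two-way time through the layer. A small sketch (the water depth and primary arrival time are illustrative):

    water_depth = 150.0   # m
    v_water = 1500.0      # m/s, sound speed in sea water
    period = 2.0 * water_depth / v_water   # 0.2 s per round trip in the water layer

    primary = 1.800   # s, arrival of a genuine deep reflection
    ringing = [round(primary + n * period, 3) for n in (1, 2, 3)]
    print(ringing)   # [2.0, 2.2, 2.4] s - late arrivals easily mistaken for deeper reflectors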


The second characteristic of a good seismic signal is high resolution, or resolving power - the ability to detect reflectors and quantify the strength of the reflection. This is achieved by recording a high bandwidth, or wide range of frequencies. The greater the bandwidth, the greater the resolving power of the seismic wave. A common objective of seismic surveys is to distinguish the top and bottom of the target. The target thickness determines the minimum wavelength required in the survey, generally considered to be four times the thickness. That wavelength is used to calculate the maximum required frequency in the bandwidth - average seismic velocity to the target divided by minimum wavelength equals maximum frequency.
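
Worked through with illustrative numbers:

    target_thickness = 25.0   # m, bed whose top and bottom must be distinguished
    v_avg = 3000.0            # m/s, average velocity down to the target

    min_wavelength = 4.0 * target_thickness   # rule of thumb from the text: 100 m
    f_max = v_avg / min_wavelength            # required maximum frequency
    print(f"wavelengths down to {min_wavelength:.0f} m, so frequencies up to {f_max:.0f} Hz")
    # A 25-m target under these assumptions needs the bandwidth to reach 30 Hz.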

The minimum frequency is related to the depth of the target. Lower frequencies can travel deeper. Some seismic sources are designed to emit energy in particular frequency bands, and receivers normally operate over a wider band. Ideally, sources that operate in the optimum frequency band are selected during survey design. More often, however, surveys are shot with whatever equipment is proposed by the lowest bidder.

Another variable influencing resolution is source and receiver depth - on land, the depth of the hole containing the explosive source (receivers are usually on the surface), and at sea, how far below the surface the sources and receivers are towed.

The source-receiver geometry may produce short-path multiples between the sources, receivers, and the earth or sea surface. If the path of the multiple is short enough, the multiple - sometimes called a ghost - will closely trail the direct signal, affecting the signal's frequency content. The two-way travel time of the ghost is associated with a frequency, called the ghost notch, at which signals cancel out. This leaves the seismic record virtually devoid of signal amplitude at the notch frequency. The shorter the distance between the source or receiver and the reflector generating the multiple, the higher the notch frequency. It is important to choose a source and receiver depth that places the notch outside the desired bandwidth.
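
For vertical travel the first nonzero notch sits at f = v / (2d), where d is the source or receiver depth and v the near-surface velocity. A sketch for a marine tow (the depths are illustrative):

    v_water = 1500.0   # m/s
    for depth in (5.0, 10.0, 25.0):              # m, tow depth
        ghost_delay = 2.0 * depth / v_water      # two-way time to the surface
        first_notch = 1.0 / ghost_delay          # Hz, first cancellation frequency
        print(f"tow depth {depth:4.1f} m -> first ghost notch at {first_notch:5.1f} Hz")
    # 5 m -> 150 Hz, 10 m -> 75 Hz, 25 m -> 30 Hz: deeper tows pull the notch
    # down into the signal band, so depth is chosen to keep it above the
    # frequencies the survey needs.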

On land, short-path multiples can reflect off near-surface layers, making deeper sources preferable. In marine surveys, waves add noise and instability, necessitating deeper placement of both sources and receivers.

The third requirement for good seismic data is adequate subsurface coverage. The lateral distance between CMPs at the target is the bin length. Recording reflections from a dipping layer involves more distant sources and receivers than recording them from a flat layer, requiring expansion of the survey area - called the migration aperture - to obtain full fold over the target.
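
A rough sizing of that extra distance, assuming straight rays and a constant dip (the numbers are illustrative):

    import math

    target_depth = 3000.0   # m
    dip = 20.0              # degrees, structural dip of the reflector

    # Reflections from the dipping interface emerge updip by roughly
    # depth * tan(dip), so the survey must extend at least that far beyond the target.
    aperture = target_depth * math.tan(math.radians(dip))
    print(f"migration aperture ~ {aperture:.0f} m")   # ~1092 m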

Transition zones - shallow-water areas - have their own problems, and require specialized equipment and creative planning. Transition zones are complex, involving shorelines, river mouths, coral reefs and swamps. They present a sensitive environment and are influenced by ship traffic, commercial fishing and bottom obstructions. Survey planners have to contend with varying water depths, high environmental noise, complex geology, wind, surf and multiple receiver types - often a combination of hydrophones and geophones.


Evaluation of existing data - 2D seismic lines and results from seismic source tests - warned of potential problem areas. Source tests compared single-source dynamite shots to source patterns, and tested several source depths. The tests indicated the presence of ghost notches at certain depths, leading to a reduction in signal energy within the desired frequency band of 10 to 60 Hz. The source tests also indicated that source patterns were ineffective in controlling ground roll in this prospect area. Deployment of the source at 9 m gave a good S/N ratio at 25 to 60 Hz, but produced very high levels of ground roll. Deployment of the source below 40 m gave a good S/N ratio from 10 to 60 Hz and low levels of ground roll. However, such deep holes might be unacceptably time-consuming and costly.


Evaluation of existing 2D lines revealed the frequency content that could be expected from seismic data in the area. It also indicated areas where special care had to be taken to ensure a successful survey. For example, high-velocity beds at the seafloor promised to cause strong multiples, reducing the energy transmitted to deeper layers and leading to strong reverberations in the water layer.


Evaluation of existing borehole data offered valuable insight into the transmission properties of the earth layers above the target and the geophysical parameters that could be obtained at the target. Comparison of formation tops inferred from acoustic impedance logs with reflection depths on the two VSPs allowed geophysicists to differentiate real reflections from multiples.

Wednesday, April 4, 2018

Saturation Monitoring With the RST Reservoir Saturation Tool

The RST Reservoir Saturation Tool combines the logging capabilities of traditional methods for evaluating saturation in a tool slim enough to pass through tubing. Now saturation measurements can be made without killing the well to pull tubing, and regardless of formation water salinity.

Determining hydrocarbon and water saturations behind casing plays a major role in reservoir management. Saturation measurements over time are useful for tracking reservoir depletion, planning workover and enhanced recovery strategies, and diagnosing production problems such as water influx and injection water breakthrough.

Traditional methods of evaluating saturation - thermal decay time logging and carbon/oxygen (C/O) logging - are limited to high-salinity and nontubing wells, respectively. The RST Reservoir Saturation Tool overcomes these limitations by combining both methods in a tool slim enough to fit through tubing. The RST tool eliminates the need for killing the well and pulling tubing. This saves money, avoids reinvasion of perforated intervals, and allows the well to be observed under operating conditions. Moreover, it provides a log of the borehole oil fraction, or oil holdup, even in horizontal wells.

Understanding the operation and versatility of the RST tool requires an overview of existing saturation measurements and their physics.



The Saturation Blues


In a newly drilled well, openhole resistivity logs are used to determine water and hydrocarbon saturations. But once the hole is cased, saturation monitoring has to rely on tools such as the TDT Dual-Burst Thermal Decay Time tool or, for C/O logging, the GST Induced Gamma Ray Spectrometry Tool, which can "see" through casing.


The Dual-Burst TDT tool looks at the rate of thermal neutron absorption, described by the capture cross section sigma of the formation, to infer water saturation. A high absorption rate indicates saline water, which contains chlorine, a very efficient, abundant thermal-neutron absorber. A low absorption rate indicates fresh water or hydrocarbon.
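
The decay the tool measures is exponential. In customary units, the thermal decay time tau in microseconds is about 4550 divided by sigma in capture units, so a sketch looks like this (the two cross sections are merely representative):

    import math

    def neutron_population(t_us, sigma_cu):
        # N(t) = N0 * exp(-t / tau), with tau (microseconds) ~ 4550 / sigma (c.u.)
        tau = 4550.0 / sigma_cu
        return math.exp(-t_us / tau)

    for label, sigma in [("saline-water zone, sigma 40 c.u.", 40.0),
                         ("oil zone, sigma 20 c.u.", 20.0)]:
        counts = [round(neutron_population(t, sigma), 2) for t in (0, 100, 200, 400)]
        print(label, counts)
    # The chlorine-rich saline zone absorbs neutrons about twice as fast -
    # exactly the contrast the measurement exploits.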


The TDT technique provides good saturation measurements when formation water salinity is high, constant and known. But oil production from an increasing number of reservoirs is now maintained by water injection. This reduces or alters formation water salinity, posing a problem for the TDT tool. 

In low-salinity water (less than 35,000 parts per million), the tool cannot accurately differentiate between oil and water, which have similar neutron capture cross sections.

When the salinity of the formation water is too low or unknown, C/O logging can be used. C/O logging measures gamma rays emitted from inelastic neutron scattering to determine relative concentrations of carbon and oxygen in the formation. A high C/O ratio indicates an oil-bearing formation; a low ratio indicates a water- or gas-bearing formation.
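
In the simplest picture, the measured ratio interpolates between a fully water-bearing and a fully oil-bearing response at the given porosity. A much-simplified sketch (the endpoint values are hypothetical, not tool characterizations):

    def oil_saturation(cor, cor_water, cor_oil):
        # Linear interpolation between the water-filled and oil-filled endpoints.
        so = (cor - cor_water) / (cor_oil - cor_water)
        return min(max(so, 0.0), 1.0)   # clamp to the physical range

    print(oil_saturation(cor=0.25, cor_water=0.15, cor_oil=0.45))   # -> about 0.33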

The major drawback to C/O logging tools has been their large diameters. Producing wells must be killed and production tubing removed to accommodate tools with diameters of nearly 4 in. [10 cm]. In addition, the tools have slow logging speeds and are more sensitive to borehole fluid than formation fluid, which affects the precision of the saturation measurement.


As Easy as RST


The RST tool directly addresses these shortcomings and can perform either C/O or TDT logging. It comes in two diameters - 1 11/16 in. (RST-A) and 2 1/2 in. (RST-B) - and can be combined with other production logging tools. 

Both versions have two gamma ray detectors. In the RST-A tool, both detectors are on the tool axis, separated by neutron and gamma ray shielding. In the RST-B tool, the detectors are offset from the tool axis and shielded to enhance the near detector's borehole sensitivity and the far detector's formation sensitivity. This allows the formation oil saturation and borehole oil holdup to be derived from the same RST-B C/O measurement. 
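
The benefit of two detectors with complementary sensitivities can be sketched as a two-by-two mixing problem: each detector's carbon signal blends borehole and formation contributions with different weights, and two measurements let both unknowns be separated. All numbers below are hypothetical, purely to show the idea:

    # near = w_bn * borehole + w_fn * formation
    # far  = w_bf * borehole + w_ff * formation
    w_bn, w_fn = 0.7, 0.3    # near detector weighted toward the borehole
    w_bf, w_ff = 0.3, 0.7    # far detector weighted toward the formation

    near, far = 0.52, 0.38   # measured carbon signals (hypothetical)

    det = w_bn * w_ff - w_fn * w_bf
    borehole = (near * w_ff - far * w_fn) / det    # borehole oil indicator -> holdup
    formation = (w_bn * far - w_bf * near) / det   # formation oil indicator -> saturation
    print(round(borehole, 3), round(formation, 3))  # 0.625 0.275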




Locating Bypassed Oil


In early 1992, ARCO drilled and perforated a sidetrack well in an area of Prudhoe Bay undergoing waterflooding. Less than six months later, production was 90% water with less than 200 BOPD, as expected. The original perforations extended from X415 to X440 ft. C/O logging measurements were made in the shut-in well with three different tools - the RST tool and two sondes from other service companies.




The RST results confirmed depletion over the perforated interval (Tracks 2 and 3). Effects of the miscible gas flood sweep are apparent throughout the reservoir. The total inelastic count rate ratio of the near and far detectors indicates qualitatively the presence of gas in the reservoir. In addition, differences between the openhole fluid analysis and the RST fluid analysis were assumed to be gas.


One potential bypassed zone, A, was identified from X280 to X290 ft. A second zone, B, based on the openhole logs and a C/O log from another service company, was proposed from X220 to X230 ft. The RST log shows zone B to contain more gas and water than zone A.


After assessing the openhole logs and the three C/O logs, ARCO decided to perforate zone B. The initial production was 1000 BOPD with a 75% water cut. Production declined to 200 BOPD with more than 95% water cut in a matter of weeks. The decline prompted ARCO to perforate zone A, commingling production from earlier perforations. Production increased to an average of 600 BOPD and the water cut decreased to 90%. Subsequent production logs confirm that zone A is producing oil and gas and zone B is producing all of the water with some oil.


Modes of Operation


Flexibility is a key advantage of the RST tool. It operates in three modes that can be changed in real time while logging:


  • inelastic-capture mode
  • capture-sigma mode
  • sigma mode

Inelastic-capture mode: The inelastic-capture mode offers C/O measurements for determining saturations when the formation water salinity is unknown, varying or too low for TDT logging. In addition to C/O logging, thermal neutron capture gamma-ray spectra are recorded after the neutron burst. Elemental yields from these spectra provide lithology, porosity and apparent water salinity information.