Oil well production analysis

Oil well production analysis started with empirical relationships, exactly where well test interpretation stopped. Production analysis (PA) began in the 1920s with Arnold and Cutler, who implemented empirical relations for economic purposes but with no physical link to actual reservoir engineering. The objective was more or less to find the right scale, draw a straight line and extrapolate.

Things improved marginally with Arps in the 1940s, with the formulation of the constant-pressure exponential, hyperbolic and harmonic decline responses. The first log-log, well test style type-curves came with Fetkovich in the 1970s, still assuming constant flowing pressure at a time when the well test community was moving towards superposition/convolution of the flow rates.
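For reference, the Arps responses can be written as q(t) = qi·exp(-Di·t) for exponential decline, q(t) = qi/(1 + b·Di·t)^(1/b) for hyperbolic decline (0 < b < 1), and q(t) = qi/(1 + Di·t) for the harmonic case (b = 1). A minimal sketch of these relations follows; the numerical values of qi, Di and b are purely illustrative and not taken from the text:

```python
import numpy as np

def arps_rate(t, qi, Di, b=0.0):
    """Arps decline rate q(t): exponential (b=0), hyperbolic (0<b<1) or harmonic (b=1)."""
    t = np.asarray(t, dtype=float)
    if b == 0.0:
        return qi * np.exp(-Di * t)                  # exponential decline
    return qi / (1.0 + b * Di * t) ** (1.0 / b)      # hyperbolic; b=1 gives harmonic

# Illustrative values only: qi in stb/d, Di in 1/day, t in days
t = np.linspace(0.0, 3650.0, 100)
q_exp  = arps_rate(t, qi=1000.0, Di=1e-3, b=0.0)
q_hyp  = arps_rate(t, qi=1000.0, Di=1e-3, b=0.5)
q_harm = arps_rate(t, qi=1000.0, Di=1e-3, b=1.0)
```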

Fetkovich decline in oil well production analysis
Figure: Fetkovich decline type-curves

The superposition and the derivative came ten years later, with the work of Blasingame et al., when a new presentation was proposed, with normalized rate-pressure instead of normalized pressure-rate values. At this stage, production analysis had, in theory, caught up with Pressure Transient Analysis (PTA) methodology. In reality, day-to-day production analysis remained, until recently, constrained to the “old” tools implemented to serve production databases: basically, look at the rates, not at the pressures, hence missing the corrective factor needed to perform a rigorous diagnostic of the data. When forward-thinking people wanted to use both pressure and rate to analyse the production responses, they would enter the data in a well test analysis package. But this approach had inherent errors, as assumptions made in well test interpretation are not necessarily valid or appropriate over production time scales.
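To illustrate the change of presentation rather than any particular implementation, a small sketch of the rate-normalized quantities is given below. It plots the normalized rate q/Δp (instead of the well test style normalized pressure Δp/q) against material balance time Np/q; the synthetic rate and pressure histories and the variable names are hypothetical:

```python
import numpy as np

# Hypothetical production history: daily rate q (stb/d) and flowing pressure pwf (psia)
t   = np.arange(1.0, 1001.0)                  # days
q   = 800.0 * (1.0 + 5e-4 * t) ** -2.0        # synthetic declining rate
pwf = 3000.0 - 200.0 * np.log1p(t / 50.0)     # synthetic declining flowing pressure
pi  = 5000.0                                  # assumed initial reservoir pressure

dp = pi - pwf                                 # drawdown
normalized_rate = q / dp                      # q/Δp, the rate-normalized y-axis
Np = np.cumsum(q)                             # cumulative production (daily steps)
t_mb = Np / q                                 # material balance time, the x-axis
```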

The move to modern production analysis, and the corresponding commercial software, is recent. It came from two requirements: performing classic decline analysis on a personal computer (PC), and using permanent surface and downhole pressure gauges to perform real analysis with both production and pressure data.

Permanent downhole gauges in oil well production analysis

With the increasingly frequent installation and use of permanent downhole gauges (PDG) and other measuring instruments, we receive data at a high acquisition rate and over a long time interval. Put crudely, if we multiply high frequency by long duration we get a huge number of data points; typically 20 million, if not sometimes up to 300 million.

Figure: typical permanent downhole gauge

Conversely, the number of data points needed for an analysis is much smaller. They are of two types:

  1. Low frequency data for production analysis and history matching. If rates are acquired daily, a pressure point per hour will do. This means fewer than 100,000 points for ten years.
  2. High frequency data for Pressure Transient Analysis. Assuming 100 build-ups with 1,000 points extracted on a logarithmic time scale for analysis, this is, coincidentally, another, albeit different, 100,000 points (a quick arithmetic check follows this list).
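A back-of-the-envelope check of these orders of magnitude (a sketch, not part of the original text):

```python
# Low frequency: one pressure point per hour over ten years
low_freq_points = 10 * 365 * 24            # 87,600 -> "fewer than 100,000"

# High frequency: 100 build-ups, 1,000 log-sampled points each
high_freq_points = 100 * 1000              # 100,000

total = low_freq_points + high_freq_points # ~190,000, i.e. about 200,000 points
```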

So even for the largest data sets, 200,000 points are plenty to cope with the required processing; this is at least two orders of magnitude less than the typical size of the raw data set. Unlike the raw data, 200,000 points is well within the processing capability of today’s PC. But we need some smart filtering algorithms to obtain these points.
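The text does not say which filtering algorithms are meant. As a purely illustrative sketch, a simple approach combines a coarse time decimation for the low frequency set with logarithmic resampling after each known build-up start for the transient set; the function name, parameters and thresholds below are hypothetical:

```python
import numpy as np

def filter_pdg(t, p, dt_coarse=1.0, buildup_starts=(), n_log=1000, window=100.0):
    """Reduce a raw PDG record (t in hours, ascending; p in psia) to a small set:
    one point per dt_coarse hours, plus n_log log-spaced points in a window of
    `window` hours after each build-up start."""
    t = np.asarray(t, dtype=float)
    keep = np.zeros(t.size, dtype=bool)

    # Low frequency pass: keep the first sample of every coarse time interval
    bins = np.floor(t / dt_coarse)
    keep[np.unique(bins, return_index=True)[1]] = True

    # High frequency pass: log-spaced sampling of each build-up transient
    for t0 in buildup_starts:
        targets = t0 + np.logspace(-3, np.log10(window), n_log)  # elapsed-time grid
        idx = np.searchsorted(t, targets)
        keep[idx[idx < t.size]] = True

    return t[keep], p[keep]
```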
