b) Parallel Processing: Part of the increase in computing power referred to above is the growth of parallel processing in reservoir simulation. The central idea is to distribute the simulation calculation across a number of processors (or "nodes") which perform different parts of the computational problem simultaneously. A bank of such processors is shown in Figure 11 (from the work of Dogru, SPE57907, 2000), and the general impact of parallel simulation, according to Dogru (2000), is shown in Figure 12. If the problem solves linearly faster with the number of parallel processors, it is said to be "scalable", and the closeness of the measured speedup to this ideal line is a measure of how well the process "parallelises"; an example is shown in Figure 13. Finally, the type of fine scale calculation that can now be performed using megacell simulation is shown in Figure 14, which shows that there is a lengthscale of remaining oil that is missed in the coarser (but still quite fine) simulation. A table of the types of calculation that can be performed, with some timings for these, is also included (although these numbers will probably be out of date very quickly!). For further details, see Megacell Reservoir Simulation, A. H. Dogru, SPE Distinguished Author Series, SPE57907, 2000, and the references therein.
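The text above does not quote a formula for scalability, but a standard way to quantify how close a parallel code comes to the ideal scaling line is Amdahl's law, which models speedup when only a fraction of the work parallelises. The sketch below (function names are illustrative, and the 95% parallel fraction is an assumed example, not a figure from the source) shows why measured speedup falls away from the ideal line as processors are added:

```python
def amdahl_speedup(n_procs, parallel_fraction):
    """Speedup predicted by Amdahl's law for a code in which
    parallel_fraction of the run time parallelises perfectly
    and the remainder stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

def parallel_efficiency(n_procs, parallel_fraction):
    """Ratio of predicted speedup to the ideal (linear) speedup;
    1.0 means perfectly scalable."""
    return amdahl_speedup(n_procs, parallel_fraction) / n_procs

# A hypothetical simulator in which 95% of the work parallelises:
for n in (1, 8, 64):
    s = amdahl_speedup(n, 0.95)
    print(n, round(s, 2), round(parallel_efficiency(n, 0.95), 2))
# On 8 nodes the speedup is about 5.93 (efficiency 0.74);
# on 64 nodes only about 15.42 (efficiency 0.24).
```

Even a small serial fraction caps the achievable speedup, which is why the "closeness to the ideal line" in Figure 13 is the practical measure of how well a simulator parallelises.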
Discussion of Changes in Reservoir Simulation: 1970s to 2000
From the above field examples (Cases 1-3), there is clearly a progression in the engineering approach, the degree of reservoir description and the computational capabilities as we go from reservoir simulation in the late 1970s to the present time.

Petroleum Engineering: Reservoir Simulation, Institute of Petroleum Engineering, Heriot-Watt University

The main changes are as follows:

Computer power: There has been a vast increase in computer processing power over this period because of:

(a) CPU: the growth of powerful CPUs (central processing units, i.e. chips), especially as implemented in Unix machines (workstations) and RISC technology, and more recently through the development of modern PCs. The corresponding cost of computing has fallen dramatically. A graph of processing power (Mflop/s) vs. time and a corresponding graph of maximum practical model size vs. time are shown in Figure 10: